When it comes to investing in big data, few industries have as much at stake as pharmaceuticals. Big data not only provides the foundation for research and the discovery of new drugs but also helps patients and caregivers make better decisions. Predictive data modeling combined with rich data visualization can substantially reduce the cost of drug discovery and facilitate decision-making in the pharmaceutical industry and in healthcare in general.
Decreasing the Cost of Drug Discovery
Drug discovery can consume immense amounts of time and money, which drives up the price of many drugs and keeps some from ever reaching the market. Drugs to fight amyotrophic lateral sclerosis (ALS), for example, aren't being developed despite the roughly 140,000 new cases of ALS diagnosed worldwide each year. The cost of developing them simply outweighs the expected demand, so the investment wouldn't pay off.
Big data and machine learning can be essential in lowering the cost of drug discovery by shifting much of the experimental burden from clinical researchers to a combination of AI, sophisticated software, and powerful computers, minimizing the time needed for clinical trials. This, in turn, drastically decreases the amount of manual research required, lowering costs significantly for manufacturers and, as a result, for patients.
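As a rough illustration of the idea, a model trained on already-tested compounds can rank untested ones so that only the most promising candidates go on to expensive wet-lab work. The sketch below is hypothetical: it uses synthetic data and generic numeric descriptors in place of real assay results.

```python
# A minimal sketch of in-silico pre-screening: train a classifier on
# previously tested compounds, then rank untested ones so that only the
# most promising candidates go on to expensive wet-lab validation.
# All data here is synthetic; descriptors and labels are placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical screen: 2,000 candidate compounds, each described by 64
# numeric descriptors (in a real pipeline these might be molecular
# fingerprints), with a binary active/inactive assay outcome.
X = rng.normal(size=(2000, 64))
y = (X[:, :4].sum(axis=1) + rng.normal(scale=0.5, size=2000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Rank untested compounds by predicted probability of activity; only the
# top slice would move on to the lab, cutting the number of experiments.
scores = model.predict_proba(X_test)[:, 1]
top_candidates = np.argsort(scores)[::-1][:50]
print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")
print(f"Best-ranked candidates: {top_candidates[:10]}")
```

Even a toy model like this captures the economics: every compound the model confidently rules out is an experiment that never has to be run.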
Drug-protein interactions, an area that has long remained something of a mystery, are another example of how big data can be applied to the pharmaceutical industry. Solving protein structures is time- and resource-intensive due to the sheer number of proteins that exist and the different ways each responds when combined with different drugs. On top of testing a new drug for effectiveness, researchers must also test that it does not harm patients. Together, these factors make exhaustive experimentation extremely time-consuming and impractical given available resources.
At Carnegie Mellon University, researchers developed a machine-learning experiment to analyze how different drugs interact with proteins. Without computer modeling, predicting protein structures is quite difficult, as there are almost 15,000 protein families in the database. The data was divided into several sets, each reflecting a different level of existing knowledge about the proteins' structures.
Of the 9,216 experiments in total, about a third were completed through the automated system, and the machine predicted the outcomes of the rest. Those predictions had an accuracy rate of 92%, and scientists received them far faster than if every experiment had been conducted in a clinical laboratory. Because that accuracy, combined with automation, saves valuable manual research time, such machine-learning techniques can be applied to a variety of medical tests and ultimately help lower the costs associated with drug research.
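The article doesn't spell out the algorithm, but the workflow it describes, running a subset of experiments and predicting the rest, matches pool-based active learning. The sketch below is an assumption-laden toy version: synthetic data stands in for drug-protein measurements, and uncertainty sampling decides which experiments to "run" next.

```python
# A minimal sketch of pool-based active learning in the spirit of the
# CMU experiment: start with a small set of completed drug-protein
# experiments, train a model, run only the experiments the model is
# least certain about, and let the model predict everything else.
# All data here is synthetic; features and labels are placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

# Placeholder pool: 9,216 (drug, protein) pairs with a binary outcome.
n_pool = 9216
X_pool = rng.normal(size=(n_pool, 32))
true_y = (X_pool @ rng.normal(size=32) > 0).astype(int)

labeled = list(rng.choice(n_pool, size=200, replace=False))  # seed experiments
model = LogisticRegression(max_iter=1000)

for round_ in range(10):
    model.fit(X_pool[labeled], true_y[labeled])
    proba = model.predict_proba(X_pool)[:, 1]
    # Uncertainty sampling: pick the pairs whose predictions sit closest
    # to 0.5 and "run" those experiments next (here: reveal true labels).
    uncertainty = np.abs(proba - 0.5)
    uncertainty[labeled] = np.inf            # don't repeat experiments
    labeled.extend(np.argsort(uncertainty)[:300])

# Retrain on everything run so far; predict what was never measured.
model.fit(X_pool[labeled], true_y[labeled])
unlabeled = np.setdiff1d(np.arange(n_pool), labeled)
acc = (model.predict(X_pool[unlabeled]) == true_y[unlabeled]).mean()
print(f"Ran {len(labeled)} of {n_pool} experiments; "
      f"predicted the rest with accuracy {acc:.2f}")
```

The key design choice is the selection rule: by spending the experimental budget where the model is least certain, each lab run teaches the model the most, which is how roughly a third of the experiments can stand in for all of them.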
Making Critical Drug Development Decisions
Big data not only helps decrease drug discovery and manufacturing costs; it can also help government health agencies, payers, and providers make important decisions about drug development, including which drugs should be developed in the first place.
One leading biotechnology company, Genentech, foresaw the advantages of big data and invested in building a powerful big data infrastructure. Data scientists in the pharmaceutical industry face the challenge of gathering data that can answer important research questions, the very questions that drive drug research and development. Only once they have this kind of data can they better understand how the many possible treatments perform across different patients, develop better individualized treatments, and apply what they learn to future drug development decisions.
The National Cancer Institute gathered 4.1 petabytes (one petabyte equals a million gigabytes) of data from 14,000 anonymized patient cases. Its Genomic Data Commons (GDC) project aims to enable the sharing of data across genomic studies in support of precision medicine. A large amount of data is necessary to pinpoint the exact genomic signature of a tumor and, from it, the right combination of drugs to fight it.
The GDC is the largest repository of cancer data in the world, delivering data in a digestible form to the research community at large. The project also enables others to build data commons that can scale as large as the GDC itself. One such project, BloodPAC, built a data commons for liquid biopsies. The idea is that when patients are treated, their blood is drawn, and the cell-free DNA it contains helps researchers understand how they are responding to chemotherapy and whether their treatment needs to change. The data may also shed light on how tumors grow, whether certain tumors are similar, and which genes or proteins should be targeted, all information needed to develop new drugs for each specific situation.
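That "digestible form" is not just a download page: the GDC exposes a public REST API at api.gdc.cancer.gov. As a minimal sketch (the endpoint and field names follow the GDC documentation, but verify them against the current API docs), a few lines of Python can pull project summaries:

```python
# Query the Genomic Data Commons public REST API for a handful of
# project summaries. Requires only the `requests` package.
import requests

resp = requests.get(
    "https://api.gdc.cancer.gov/projects",
    params={
        "fields": "project_id,name,primary_site",  # per GDC API docs
        "size": "5",
        "format": "json",
    },
    timeout=30,
)
resp.raise_for_status()

# GDC responses wrap results in data -> hits.
for hit in resp.json()["data"]["hits"]:
    print(hit["project_id"], "-", hit["name"])
```

This open, programmatic access is what lets researchers anywhere fold GDC data into their own analyses rather than rebuilding such a repository themselves.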
A Bright Future Ahead
Whether the application is precision medicine, decreasing failure rates in drug trials, or lowering the cost of research to develop better medicine, big data has a bright future in the pharmaceutical industry. The ability of researchers all over the world to contribute their own data sets to huge shared projects is also key. For better or worse, humans are creating more data every day, which means that as time goes on these projects will gather more insights, leading to better and broader applications in medicine for everyone. It will be exciting to see what the future brings.
Initially published in insideBIGDATA