How Machine Learning Can Unlock Cures For Tomorrow’s Diseases
In February 2021, NASA officials and people around the world celebrated the wildly successful landing of the Mars Perseverance rover. We all applauded the scientists and engineers responsible for this incredible achievement, which built on 80 years of multidisciplinary advances in physics, chemistry, mechanical engineering and materials science — and previous achievements such as the moon landing.
In the field of AI-driven drug discovery, we are still seeking our own proverbial moon landing — namely, understanding and curing a major disease in a data-driven way.
Succeeding in AI-driven cures will vastly improve and save the lives of billions of people around the planet, now and into the future. But to do so, we must move past our disciplines' usual boundaries and comfort zones and work collaboratively across fields: software engineers must delve into biology and genomics, and scientists must adopt technologies, like machine learning, that may previously have seemed distant to them.
I believe machine learning is at the heart of this coming revolution in drug discovery, just as it has been pivotal in many other industries. From text translation and image classification to credit risk assessment, advertising optimization, and shopping recommendations, we take these advances in machine learning for granted in our daily lives. But if we apply similar data-driven thinking to drug discovery, we could unlock hidden patterns in biological data that allow us to understand diseases and produce cures previously thought unreachable.
Furthermore, this might be AI’s most impactful area yet, and here’s why.
First, the amount of biological data is increasing exponentially as the cost of generating it falls. Advances in genomic sequencing throughput have made it far more cost-efficient to adopt and operate sequencing machinery; as a result, pharmaceutical and biotech companies, large academic initiatives, and startups alike have been able to use this technology to generate data at large scale.
However, since sequencing data tends to be extremely high-dimensional (think an entire hard drive's worth of data per blood draw), elementary observational statistics and standard algorithms can't accurately detect the patterns that are critical for understanding how diseases work. Where these methods fall short, machine learning has helped fill in the blanks and surface patterns that were previously invisible.
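To make this concrete, here is a toy sketch (entirely synthetic data; the "genes" and numbers are illustrative, not from any real study) of a disease signal that per-gene summary statistics miss entirely, but that a multivariate feature of the kind a learned model discovers automatically captures cleanly:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Two hypothetical gene-expression readouts per sample.
# In "healthy" samples the two genes move together; in "disease"
# samples they move in opposition. Marginal means are identical.
z = rng.normal(size=(n, 1))
healthy = np.hstack([z, z]) + 0.3 * rng.normal(size=(n, 2))
z = rng.normal(size=(n, 1))
disease = np.hstack([z, -z]) + 0.3 * rng.normal(size=(n, 2))

# Per-gene statistics see no difference: both groups have
# near-zero mean expression for both genes.
print(healthy.mean(axis=0).round(2), disease.mean(axis=0).round(2))

# ...but a multivariate feature (the gene-gene product) separates
# the groups cleanly: positive in one, negative in the other.
print(float((healthy[:, 0] * healthy[:, 1]).mean()))
print(float((disease[:, 0] * disease[:, 1]).mean()))
```

The point of the sketch is that the signal lives in the *interaction* between dimensions, which is exactly where high-dimensional sequencing data defeats one-variable-at-a-time statistics.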
For example, last year machine learning showed that we can model protein binding more accurately than any previous method.
In addition to the ease of generating data described above, the advent of single-cell genomics has let leading labs and companies analyze individual cells in depth and at scale, enabling a much more detailed assessment of the same biological samples. For example, 10x Genomics' Single Cell Gene Expression and Immune Profiling products can target up to 10,000 cells per library, and each chip runs eight libraries in parallel, allowing users to target up to 80,000 cells per chip. This has produced amassed datasets with hundreds of millions of cells, where one can measure, per cell, tens of thousands of molecules across tens of thousands of genes.
To put this into perspective, compare assessing a gene in a blood sample to assessing the behavior of a group of people. What single-cell genomics brings is an individual, in-depth interview with each person, where before you had only a census (that is, an average across cells). These single-cell datasets will reach many billions of cells within a few years, and I predict that a few leading companies will have datasets on the scale of tens of exabytes (tens of millions of terabytes) by 2025, making some of today's internet-scale datasets pale in comparison.
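The census-versus-interview analogy can be sketched in a few lines of Python (synthetic counts; the two "genes" and the 10% subpopulation are invented for illustration). A bulk measurement averages counts across all cells, so a gene expressed moderately everywhere and a gene expressed strongly in a rare subpopulation look identical; the per-cell view tells them apart immediately:

```python
import numpy as np

rng = np.random.default_rng(0)
n_cells = 10_000

# Hypothetical gene A: moderately expressed in every cell.
gene_a = rng.poisson(lam=5, size=n_cells)

# Hypothetical gene B: silent in ~90% of cells, highly expressed
# in a rare 10% subpopulation (say, one immune cell type).
active = rng.random(n_cells) < 0.10
gene_b = np.where(active, rng.poisson(lam=50, size=n_cells), 0)

# The bulk "census" averages across cells and sees the two genes
# as nearly identical (~5 counts per cell for both)...
print(gene_a.mean(), gene_b.mean())

# ...while the per-cell "interview" reveals that gene B's signal
# comes entirely from a small fraction of the cells.
print((gene_a > 0).mean())  # nearly all cells express gene A
print((gene_b > 0).mean())  # only ~10% of cells express gene B
```

Distinguishing those two situations, at the scale of millions of cells and tens of thousands of genes at once, is precisely what single-cell datasets make possible.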
Finally, the boom. The Nobel Prize-winning work of Jennifer Doudna and Emmanuelle Charpentier, who discovered the CRISPR gene-editing technology, is now being leveraged to edit biology itself. That is, we can now not only observe biology (naturally existing single cells) but also change the genetic code of the cells themselves. For example, researchers at Tel Aviv University recently demonstrated that using CRISPR to knock out the PLK1 gene in ovarian cancer tumor cells in mice strongly inhibited tumor growth and resulted in an 80% increase in overall survival.
Typically, with most machine learning applications, we must work with naturally occurring, real-world examples. But with CRISPR, we can modify our input variables, creating data that allows machine learning practitioners to learn cause-and-effect relationships in biology, not just correlations. Several companies have already employed CRISPR to generate and understand this kind of biological data, from public corporations like Intellia to smaller startups like Caribou Biosciences.
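The difference between observing and intervening can be sketched with a toy simulation (synthetic data; the hidden factor, "gene X," and "phenotype Y" are invented for illustration, not a model of any real experiment). In observational data, a hidden confounder makes X and Y look related; forcing X directly, as a CRISPR-style edit does, severs that link and exposes the absence of a causal effect:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50_000

# Toy model: a hidden factor (e.g. cell state) drives both the
# expression of gene X and phenotype Y. X itself has no causal
# effect on Y in this simulation.
hidden = rng.normal(size=n)
x_obs = hidden + rng.normal(size=n)
y_obs = hidden + rng.normal(size=n)

# Observational data: X and Y appear strongly related...
print(np.corrcoef(x_obs, y_obs)[0, 1])  # substantial correlation

# A CRISPR-style intervention sets X directly -- do(X = x) in
# causal-inference notation -- cutting X's tie to the hidden factor.
x_do = rng.normal(size=n)           # X forced by the edit
y_do = hidden + rng.normal(size=n)  # Y unchanged: no causal path

# Interventional data reveals the relationship was pure confounding.
print(np.corrcoef(x_do, y_do)[0, 1])  # correlation vanishes
```

This is the sense in which edited cells let models move from correlation to causation: the intervention breaks exactly the spurious pathways that purely observational datasets cannot rule out.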
Reaching this mechanistic, cause-and-effect level of understanding on large amounts of data will allow us all to better understand the code of life and disease. Going forward, the most significant successes will come from those who have the most and highest quality data, particularly single-cell genomics and CRISPR-edited data.
Every month, advances in protocols, reagents and equipment allow us to generate more robust and granular data than most of us had imagined possible even a year ago. Just as a bold, multidisciplinary, decades-long effort led to the Moon and Mars landings, the intersection of biology, software engineering, and AI will lead to cures for many major diseases. We now have the opportunity to build the rocket ships that will help navigate humanity to longer, healthier, happier lives.
This article originally appeared on forbes.com.
Nastel Technologies helps companies achieve flawless delivery of digital services powered by middleware. Nastel delivers Middleware Management, Monitoring, Tracking, and Analytics to detect anomalies, accelerate decisions, and enable customers to constantly innovate. To answer business-centric questions and provide actionable guidance for decision-makers, Nastel’s Navigator X fuses:
- Advanced predictive anomaly detection, Bayesian Classification, and other machine learning algorithms
- Raw information handling and analytics speed
- End-to-end business transaction tracking that spans technologies, tiers, and organizations
- Intuitive, easy-to-use data visualizations and dashboards