Machine learning could transform medicine. Should we let it?
Machine learning is starting to take over analyzing medical images. But AI tools also raise worrying questions because they solve problems in ways that humans can’t always follow.
In clinics around the world, a type of artificial intelligence called deep learning is starting to supplement or replace humans in common tasks such as analyzing medical images. Already, at Massachusetts General Hospital in Boston, “every one of the 50,000 screening mammograms we do every year is processed through our deep learning model, and that information is provided to the radiologist,” says Constance Lehman, chief of the hospital’s breast imaging division.
In deep learning, a subset of a type of artificial intelligence called machine learning, computer models essentially teach themselves to make predictions from large sets of data. The raw power of the technology has improved dramatically in recent years, and it’s now used in everything from medical diagnostics to online shopping to autonomous vehicles.
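The core idea of "teaching itself from data" can be illustrated with a toy example: a model that adjusts its own parameters by gradient descent until its predictions match the examples. Real deep learning stacks many nonlinear layers, but the basic loop (predict, measure error, adjust weights) is the same. This is a pedagogical sketch, not a clinical model:

```python
# Toy illustration of learning from data: a one-feature linear model
# fit by gradient descent on mean squared error.
def fit(xs, ys, lr=0.01, steps=5000):
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(steps):
        # Gradients of mean squared error with respect to w and b
        grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# The model recovers the rule y = 2x + 1 from examples alone,
# never having been told the rule explicitly.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.0, 3.0, 5.0, 7.0, 9.0]
w, b = fit(xs, ys)
print(round(w, 2), round(b, 2))
```

Nothing in the code encodes the rule itself; it emerges from the data, which is also why the learned parameters of a large network are so hard to inspect.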
But deep learning tools also raise worrying questions because they solve problems in ways that humans can’t always follow. If the connection between the data you feed into the model and the output it delivers is inscrutable—hidden inside a so-called black box—how can it be trusted? Among researchers, there’s a growing call to clarify how deep learning tools make decisions—and a debate over what such interpretability might demand and when it’s truly needed. The stakes are particularly high in medicine, where lives will be on the line.
Still, the potential benefits are clear. In Mass General’s mammography program, for instance, the current deep learning model helps detect dense breast tissue, a risk factor for cancer. And Lehman and Regina Barzilay, a computer scientist at the Massachusetts Institute of Technology, have created another deep learning model to predict a woman’s risk of developing breast cancer over five years—a crucial component of planning her care. In a 2019 retrospective study of mammograms from about 40,000 women, the researchers found the deep learning system substantially outperformed the current gold-standard approach on a test set of about 4,000 of these women. Now undergoing further testing, the new model may enter routine clinical practice at the hospital.
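Head-to-head comparisons like the one above are typically scored by how well each model discriminates on the held-out test set, most often via area under the ROC curve (AUC): the probability that a randomly chosen patient who developed cancer received a higher risk score than one who did not. A minimal sketch with hypothetical scores (the numbers are illustrative, not from the study):

```python
# AUC: probability that a random positive case outranks a random negative one.
def auc(scores, labels):
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical risk scores for six patients (label 1 = developed cancer)
scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.1]
labels = [1,   1,   0,   1,   0,   0]
print(auc(scores, labels))  # 0.888...: better than chance (0.5), short of perfect (1.0)
```

An AUC of 0.5 means the model is no better than a coin flip; 1.0 means it ranks every future cancer case above every non-case.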
As for the debate about whether humans can really understand deep learning systems, Barzilay sits firmly in the camp that it’s possible. She calls the black box problem “a myth.”
One part of the myth, she says, is that deep learning systems can’t explain their results. But “there are lots of methods in machine learning that allow you to interpret the results,” she says. Another part of the myth, in her opinion, is that doctors must understand how the system reaches its decision in order to use it. But medicine is crammed with advanced technologies that work in ways clinicians don’t really understand: for instance, the imaging machines that gather the mammography data in the first place.
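One widely used, model-agnostic interpretation method of the kind Barzilay alludes to is permutation importance: shuffle one input feature's values and measure how much the model's accuracy drops. A minimal sketch with a toy model and synthetic data (not a clinical system):

```python
import random

# Toy "model": predicts 1 when feature 0 exceeds a threshold; feature 1 is ignored.
def model(x):
    return 1 if x[0] > 0.5 else 0

def accuracy(X, y):
    return sum(model(x) == t for x, t in zip(X, y)) / len(y)

def permutation_importance(X, y, feature, seed=0):
    col = [x[feature] for x in X]
    random.Random(seed).shuffle(col)
    X_shuffled = [list(x) for x in X]
    for row, v in zip(X_shuffled, col):
        row[feature] = v
    # Importance = accuracy lost when this feature's values are scrambled
    return accuracy(X, y) - accuracy(X_shuffled, y)

rng = random.Random(42)
X = [[rng.random(), rng.random()] for _ in range(200)]
y = [1 if x[0] > 0.5 else 0 for x in X]

print(permutation_importance(X, y, 0))  # large drop: the model relies on feature 0
print(permutation_importance(X, y, 1))  # zero: shuffling the ignored feature changes nothing
```

The appeal of such methods is that they treat the model as a black box and still reveal which inputs actually drive its predictions.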
That doesn’t answer the concerns of all physicians. Many machine learning tools are still black boxes “that render verdicts without any accompanying justification,” notes a group of physicians and researchers in a recent paper in BMJ Clinical Research. “Many think that, as a new technology, the burden of proof is on machine learning to account for its predictions,” the paper’s authors continue. “If doctors do not understand why the algorithm made a diagnosis, then why should patients trust the recommended course of treatment?”
And among computer scientists who study machine learning, “this discussion of interpretability has gone completely off the rails,” says Zachary Lipton, a computer scientist at Carnegie Mellon University. Often, models offered for interpretability simply don’t work well, he says, and there’s confusion about what the systems actually deliver.
“We have people in the field who are able to turn the crank but don’t actually know what they’re doing,” he adds, “and don’t actually understand the foundational underpinnings of what they’re doing.”
This article originally appeared on fastcompany.com.
Nastel Technologies uses machine learning to detect anomalies, behavior, and sentiment; accelerate decisions; satisfy customers; and innovate continuously. To answer business-centric questions and provide actionable guidance for decision-makers, Nastel’s AutoPilot® for Analytics fuses:
- Advanced predictive anomaly detection, Bayesian classification, and other machine learning algorithms
- Raw information handling and analytics speed
- End-to-end business transaction tracking that spans technologies, tiers, and organizations
- Intuitive, easy-to-use data visualizations and dashboards
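As a generic illustration of the kind of statistical anomaly detection such platforms apply to operational metrics (this is a simplified sketch, not Nastel’s actual algorithm), a baseline of recent measurements can be used to flag values that deviate by more than a chosen number of standard deviations:

```python
import statistics

# Flag values more than k standard deviations from the mean of a baseline
# window. Illustrative only: production systems typically use more robust
# statistics and adaptive baselines.
def anomalies(values, k=2.0):
    mean = statistics.fmean(values)
    stdev = statistics.pstdev(values)
    return [v for v in values if abs(v - mean) > k * stdev]

latencies_ms = [12, 14, 11, 13, 12, 15, 13, 12, 140, 13]  # one spike
print(anomalies(latencies_ms))
```

One caveat with this simple approach: a large outlier inflates the standard deviation of the window it sits in, which is why the threshold `k` and the baseline window must be chosen carefully.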
Nastel Technologies is the global leader in Integration Infrastructure Management (i2M). It helps companies achieve flawless delivery of digital services powered by integration infrastructure, providing middleware management, monitoring, tracking, and analytics that detect anomalies, accelerate decisions, enable customers to innovate continuously, answer business-centric questions, and deliver actionable guidance for decision-makers. It is particularly focused on IBM MQ, Apache Kafka, Solace, TIBCO EMS, and ACE/IIB, and also supports RabbitMQ, ActiveMQ, Blockchain, IoT, DataPower, MFT, and many more.
The Nastel i2M Platform provides:
- Secure self-service configuration management with auditing for governance & compliance
- Message management for Application Development, Test, & Support
- Real-time performance monitoring, alerting, and remediation
- Business transaction tracking and IT message tracing
- AIOps and APM
- Automation for CI/CD DevOps
- Analytics for root cause analysis & Management Information (MI)
- Integration with ITSM/SIEM solutions including ServiceNow, Splunk, & AppDynamics