Machine learning could transform medicine. Should we let it?

Machine learning is starting to take over the analysis of medical images. But AI tools also raise worrying questions, because they solve problems in ways that humans can’t always follow.

In clinics around the world, a type of artificial intelligence called deep learning is starting to supplement or replace humans in common tasks such as analyzing medical images. Already, at Massachusetts General Hospital in Boston, “every one of the 50,000 screening mammograms we do every year is processed through our deep learning model, and that information is provided to the radiologist,” says Constance Lehman, chief of the hospital’s breast imaging division.

In deep learning, a subset of a type of artificial intelligence called machine learning, computer models essentially teach themselves to make predictions from large sets of data. The raw power of the technology has improved dramatically in recent years, and it’s now used in everything from medical diagnostics to online shopping to autonomous vehicles.
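
To make that idea concrete, here is a minimal, purely illustrative sketch of the learn-from-examples loop that deep learning scales up to millions of images: a small neural network fitted to synthetic labeled data with scikit-learn. The dataset, model size, and settings are assumptions for illustration, not the systems described in this article.

```python
# Illustrative only: a tiny neural network "teaching itself" to classify
# synthetic data -- the same learn-from-examples loop that deep learning
# scales up to millions of medical images. Not the Mass General model.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for a labeled training set (e.g., image features plus diagnoses).
X, y = make_classification(n_samples=5000, n_features=30, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The model adjusts its internal weights to reduce prediction error on the
# examples it sees; no rules are written by hand.
model = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=300, random_state=0)
model.fit(X_train, y_train)

print("held-out accuracy:", model.score(X_test, y_test))
```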

But deep learning tools also raise worrying questions because they solve problems in ways that humans can’t always follow. If the connection between the data you feed into the model and the output it delivers is inscrutable—hidden inside a so-called black box—how can it be trusted? Among researchers, there’s a growing call to clarify how deep learning tools make decisions—and a debate over what such interpretability might demand and when it’s truly needed. The stakes are particularly high in medicine, where lives are on the line.

Still, the potential benefits are clear. In Mass General’s mammography program, for instance, the current deep learning model helps detect dense breast tissue, a risk factor for cancer. And Lehman and Regina Barzilay, a computer scientist at the Massachusetts Institute of Technology, have created another deep learning model to predict a woman’s risk of developing breast cancer over five years—a crucial component of planning her care. In a 2019 retrospective study of mammograms from about 40,000 women, the researchers found the deep learning system substantially outperformed the current gold-standard approach on a test set of about 4,000 of these women. Now undergoing further testing, the new model may enter routine clinical practice at the hospital.
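
For readers curious about the mechanics, a retrospective comparison like the one described is typically scored by computing each model’s discrimination, for example the area under the ROC curve, on the same held-out patients. The sketch below is hypothetical: the variable names, data, and numbers are assumptions for illustration, not the study’s actual code or results.

```python
# Hypothetical sketch of scoring a retrospective model comparison: two risk
# models evaluated on the same held-out patients via ROC AUC. Data and names
# are synthetic placeholders, not the study's actual code or results.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_patients = 4000                                     # size of a held-out test set (illustrative)
developed_cancer = rng.integers(0, 2, n_patients)     # 1 = cancer within five years (synthetic)

# Risk scores from two models: a deep learning model and a traditional risk calculator.
deep_learning_risk = rng.random(n_patients)           # placeholder predictions
traditional_risk = rng.random(n_patients)             # placeholder predictions

print("deep learning AUC:", roc_auc_score(developed_cancer, deep_learning_risk))
print("traditional model AUC:", roc_auc_score(developed_cancer, traditional_risk))
```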

As for the debate about whether humans can really understand deep learning systems, Barzilay sits firmly in the camp that it’s possible. She calls the black box problem “a myth.”

One part of the myth, she says, is that deep learning systems can’t explain their results. But “there are lots of methods in machine learning that allow you to interpret the results,” she says. Another part of the myth, in her opinion, is that doctors have to understand how the system makes its decision in order to use it. But medicine is crammed with advanced technologies that work in ways that clinicians really don’t understand—for instance, the magnetic resonance imaging (MRI) that gathers the mammography data to begin with.
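
As one concrete example of the kind of interpretation method Barzilay alludes to (though not necessarily the one her group uses), permutation feature importance asks how much a model’s held-out performance drops when each input feature is shuffled. The sketch below applies it to scikit-learn’s built-in breast cancer dataset; the model choice and settings are assumptions made for illustration.

```python
# A widely used, model-agnostic interpretation method: permutation feature
# importance. It measures how much held-out performance drops when each input
# feature is shuffled. Illustrative of the kinds of methods Barzilay mentions,
# not the specific technique used at Mass General.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(data.data, data.target, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in the test set several times and record the score drop.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"{data.feature_names[i]}: {result.importances_mean[i]:.3f}")
```

Techniques like this don’t fully open the black box, but they do show which inputs a model leans on most when it makes a prediction.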

That doesn’t answer the concerns of all physicians. Many machine learning tools are still black boxes “that render verdicts without any accompanying justification,” notes a group of physicians and researchers in a recent paper in BMJ Clinical Research. “Many think that, as a new technology, the burden of proof is on machine learning to account for its predictions,” the paper’s authors continue. “If doctors do not understand why the algorithm made a diagnosis, then why should patients trust the recommended course of treatment?”

And among computer scientists who study machine learning, “this discussion of interpretability has gone completely off the rails,” says Zachary Lipton, a computer scientist at Carnegie Mellon University. Often, models offered for interpretability simply don’t work well, he says, and there’s confusion about what the systems actually deliver.

“We have people in the field who are able to turn the crank but don’t actually know what they’re doing,” he adds, “and don’t actually understand the foundational underpinnings of what they’re doing.”

This article originally appeared on fastcompany.com.

Nastel Technologies uses machine learning to detect anomalies, behavior, and sentiment; accelerate decisions; satisfy customers; and innovate continuously. To answer business-centric questions and provide actionable guidance for decision-makers, Nastel’s AutoPilot® for Analytics fuses:

  • Advanced predictive anomaly detection, Bayesian Classification and other machine learning algorithms (a brief illustrative sketch of this kind of anomaly detection follows this list)
  • Raw information handling and analytics speed
  • End-to-end business transaction tracking that spans technologies, tiers, and organizations
  • Intuitive, easy-to-use data visualizations and dashboards
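
As a rough illustration of the machine-learning anomaly detection named in the first item above, the sketch below flags unusually slow transactions in synthetic latency data using scikit-learn’s IsolationForest. This is not Nastel AutoPilot code; every name and number is an assumption made for illustration.

```python
# Illustrative machine-learning anomaly detection on transaction metrics.
# Not Nastel AutoPilot code: this uses scikit-learn's IsolationForest on
# synthetic response-time data.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
# Synthetic response times (ms): mostly normal traffic plus a few slow outliers.
latencies = np.concatenate([rng.normal(120, 15, 980),
                            rng.normal(900, 50, 20)]).reshape(-1, 1)

detector = IsolationForest(contamination=0.02, random_state=1).fit(latencies)
flags = detector.predict(latencies)          # -1 marks suspected anomalies

print("flagged transactions:", int((flags == -1).sum()))
```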