
Diagnostic Artificial Intelligence Models Can Be Tricked By Cyberattacks

Nastel Technologies®
December 20, 2021

Researchers found that diagnostic artificial intelligence models used to detect cancer can be fooled by cyberattacks that falsify medical images.

 

Diagnostic artificial intelligence (AI) models hold promise in clinical research, but a new study conducted by University of Pittsburgh researchers and published in Nature Communications found that cyberattacks using falsified medical images could fool AI models.

 

The study shed light on the concept of “adversarial attacks,” in which bad actors aim to alter images or other data points to make AI models draw incorrect conclusions. The researchers began by training a deep learning algorithm that was able to identify cancerous and benign cases with more than 80 percent accuracy.
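
To make the setup concrete, the following is a minimal sketch, assuming PyTorch and placeholder data, of the kind of binary image classifier described here; the architecture, image size, and hyperparameters are illustrative and are not taken from the study.

    # Minimal sketch, not the study's model: a small CNN that labels mammogram
    # patches as benign (0) or cancerous (1). All sizes and settings are placeholders.
    import torch
    import torch.nn as nn

    class PatchClassifier(nn.Module):
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.head = nn.Sequential(nn.Flatten(), nn.Linear(32 * 56 * 56, 2))

        def forward(self, x):
            return self.head(self.features(x))

    model = PatchClassifier()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.CrossEntropyLoss()

    # One training step on a dummy batch of eight 224x224 grayscale images.
    images = torch.rand(8, 1, 224, 224)
    labels = torch.randint(0, 2, (8,))
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()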

 

Then, the researchers developed a “generative adversarial network” (GAN), a computer program that generates false images by inserting cancerous regions into negative images or removing them from positive ones, in order to confuse the model.
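
The study's GAN itself is not reproduced here. As a rough illustration of how an adversarial image can be generated at all, the sketch below uses a different and much simpler technique, the fast gradient sign method (FGSM), against the hypothetical classifier from the previous sketch; it only nudges pixel values, whereas the researchers' GAN inserted or removed cancerous regions.

    # Illustration only: a fast gradient sign method (FGSM) attack, a far simpler
    # technique than the study's GAN, showing the same general idea of perturbing
    # an image so the classifier's prediction flips.
    def fgsm_attack(model, loss_fn, image, true_label, epsilon=0.01):
        # Assumes pixel values in [0, 1] and the classifier/loss from the sketch above.
        image = image.clone().detach().requires_grad_(True)
        loss = loss_fn(model(image), true_label)
        loss.backward()
        # Step each pixel slightly in the direction that increases the loss.
        perturbed = image + epsilon * image.grad.sign()
        return perturbed.detach().clamp(0.0, 1.0)

    fake = fgsm_attack(model, loss_fn, images[:1], labels[:1])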

 

The AI model was fooled by 69.1 percent of the falsified images. Of the 44 positive images made to look negative, the model identified 42 as negative, and of the 319 negative images doctored to look positive, it classified 209 as positive. In total, 251 of the 363 falsified images were misclassified.

 

“What we want to show with this study is that this type of attack is possible, and it could lead AI models to make the wrong diagnosis — which is a big patient safety issue,” Shandong Wu, PhD, the study’s senior author and associate professor of radiology, biomedical informatics, and bioengineering at the University of Pittsburgh, explained in a press release.

 

“By understanding how AI models behave under adversarial attacks in medical contexts, we can start thinking about ways to make these models safer and more robust.”

 

Artificial intelligence models have become increasingly useful in improving cancer care and early diagnosis. But as with any new technology, researchers should consider cyber risks.

 

Later in the experiment, the researchers asked five radiologists to determine whether mammogram images were real or fake. The radiologists distinguished authentic images from falsified ones with varying accuracy, ranging from 29 to 71 percent depending on the individual.

 

“Certain fake images that fool AI may be easily spotted by radiologists. However, many of the adversarial images in this study not only fooled the model, but they also fooled experienced human readers,” Wu continued.

“Such attacks could potentially be very harmful to patients if they lead to an incorrect cancer diagnosis.”

 

The sheer volume of data that AI models handle makes them a valuable asset to protect, but also an enticing target for threat actors. In addition, clinical researchers and healthcare organizations should consider cyber risks before engaging with a third-party AI vendor.

 

The researchers are now exploring “adversarial training” for the AI model, which would involve pre-generating adversarial images and teaching the model that those images are falsified. AI models are largely self-sufficient once they are running, but it is still crucial that humans oversee the safety and security of such models. With adequate security practices in place, AI could become part of healthcare’s infrastructure on a larger scale.
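
A minimal sketch of what such adversarial training could look like follows, reusing the hypothetical model, loss_fn, optimizer, and fgsm_attack helper from the earlier sketches plus an assumed train_loader; the researchers' actual procedure used GAN-generated images rather than FGSM.

    # Sketch of adversarial training: adversarial copies of each batch are generated
    # up front and fed to the model with the correct labels. `train_loader` is assumed;
    # `model`, `loss_fn`, `optimizer`, and `fgsm_attack` come from the earlier sketches.
    import torch

    for images, labels in train_loader:
        adv_images = fgsm_attack(model, loss_fn, images, labels)
        batch = torch.cat([images, adv_images])
        batch_labels = torch.cat([labels, labels])

        optimizer.zero_grad()
        loss = loss_fn(model(batch), batch_labels)
        loss.backward()
        optimizer.step()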

 

This article originally appeared on healthitsecurity.com; to read the full article, click here.

Nastel Technologies is the global leader in Integration Infrastructure Management (i2M). It helps companies achieve flawless delivery of digital services powered by integration infrastructure, delivering Middleware Management, Monitoring, Tracking, and Analytics to detect anomalies, accelerate decisions, answer business-centric questions, and provide actionable guidance for decision-makers, enabling customers to constantly innovate. It is particularly focused on IBM MQ, Apache Kafka, Solace, TIBCO EMS, and ACE/IIB, and also supports RabbitMQ, ActiveMQ, Blockchain, IoT, and many more.

 

The Nastel i2M Platform provides:

  • Secure self-service configuration management with auditing for governance & compliance
  • Message management for Application Development, Test, & Support
  • Real-time performance monitoring, alerting, and remediation
  • Business transaction tracking and IT message tracing
  • AIOps and APM
  • Automation for CI/CD DevOps
  • Analytics for root cause analysis & Management Information (MI)
  • Integration with ITSM/SIEM solutions including ServiceNow, Splunk, & AppDynamics

Comments

  • Dave
    December 21, 2021
    Fascinating piece of research, but what is unclear from this summary of the study is whether the AI is actually more or less reliable than human analysis. Five radiologists is quite a small sample, and the range of accuracy in spotting fakes is so wide that the AI's results actually fit inside that range. Are the volumes of tests enough to be statistically viable? And of course the real issue is that compromised images could take many forms: some could be complete real images replacing the image to be tested, while others could have artifacts added or removed. So is the issue the AI, or the ability to secure the way an image is managed from creation to analysis? That then becomes an integration infrastructure management (i2M) problem.