Manipulating Digital Mammograms Via Artificial Intelligence May Cause Misdiagnosis

Mammography remains a critical procedure for diagnosing breast cancer. At the same time, the radiation exposure it involves has drawn criticism in many cases. That is why machine learning algorithms have gained significant importance in medical imaging: they can help reconstruct digital images from raw data acquired at lower radiation doses. However, recent research has shown that bad actors can exploit AI to manipulate digital mammograms.

Digital Mammograms Vulnerable To Cyber Attacks

A recent study has demonstrated how digital mammograms are vulnerable to manipulation and cyber attacks. The researchers used CycleGAN to manipulate digital images by injecting or removing malignant features, and have published their findings in a research paper.

As explained in the paper, the researchers tested a Cycle-consistent Generative Adversarial Network (CycleGAN) for mammogram manipulation. GANs constitute a subclass of deep learning algorithms that hold significant importance in artificial intelligence and machine learning. Explaining how GANs work, they stated:

“A GAN consists of two neural networks competing against each other: The first, generator network (G), manipulates sample images and the second, discriminator network (D), has to distinguish between real and manipulated samples [8]. Due to their opposed cost function, the neural networks are competing against each other in order to improve their performance.”

Leveraging this adversarial setup, the researchers used a GAN to modify digital mammograms, expecting the model to learn implicit representations of cancerous mammograms. In addition, they tested whether radiologists could distinguish the manipulated images from real ones.
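The quoted description maps onto a standard adversarial training loop in which the generator and discriminator optimize opposed objectives. The sketch below is a minimal, illustrative PyTorch version, not the researchers' actual code; the tiny fully connected networks, hyperparameters, and the 256 × 256 image size are placeholders chosen only to show the mechanism.

```python
# Minimal, illustrative GAN training step (a sketch, not the study's implementation).
import torch
import torch.nn as nn

IMG_DIM = 256 * 256   # flattened low-resolution image, matching the paper's first experiment
NOISE_DIM = 100       # placeholder latent dimension

# Placeholder generator G and discriminator D.
G = nn.Sequential(nn.Linear(NOISE_DIM, 512), nn.ReLU(), nn.Linear(512, IMG_DIM), nn.Tanh())
D = nn.Sequential(nn.Linear(IMG_DIM, 512), nn.LeakyReLU(0.2), nn.Linear(512, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real_images: torch.Tensor) -> None:
    batch = real_images.size(0)
    fake_images = G(torch.randn(batch, NOISE_DIM))

    # Discriminator objective: label real samples 1 and generated samples 0.
    d_loss = bce(D(real_images), torch.ones(batch, 1)) + \
             bce(D(fake_images.detach()), torch.zeros(batch, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator objective: fool the discriminator into labelling fakes as real.
    g_loss = bce(D(fake_images), torch.ones(batch, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# Example call with random tensors standing in for real mammogram data.
train_step(torch.rand(8, IMG_DIM) * 2 - 1)
```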

Summing Up The Research Methodology

To test their hypothesis, the researchers ran two sets of experiments. In the first, they modified low-resolution mammographic images (256 × 256 pixels) from publicly available datasets. In the second, they trained the algorithm on higher-resolution images. They then presented the manipulated images to three radiologists and asked them to identify the alterations. Only one radiologist noticed the manipulations in the low-resolution images, whereas none of the three had much difficulty spotting the alterations in the high-resolution images.
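Because CycleGAN translates images between two domains without paired examples, it fits the "inject or remove malignant features" setup described above. The sketch below illustrates only the cycle-consistency idea; the generators g_hc (healthy to cancer-looking) and g_ch (the reverse) are placeholder networks, not the architecture used in the study.

```python
# Illustrative CycleGAN-style cycle-consistency loss (a sketch under assumed placeholder generators).
import torch
import torch.nn as nn

# Placeholder image-to-image generators for 1-channel 256 x 256 images.
g_hc = nn.Conv2d(1, 1, kernel_size=3, padding=1)   # healthy -> "cancer" domain
g_ch = nn.Conv2d(1, 1, kernel_size=3, padding=1)   # "cancer" -> healthy domain
l1 = nn.L1Loss()

def cycle_consistency_loss(healthy: torch.Tensor, cancer: torch.Tensor) -> torch.Tensor:
    # Translate each image to the other domain and back again;
    # the round trip should reproduce the original image.
    reconstructed_healthy = g_ch(g_hc(healthy))
    reconstructed_cancer = g_hc(g_ch(cancer))
    return l1(reconstructed_healthy, healthy) + l1(reconstructed_cancer, cancer)

# Example with random tensors standing in for 256 x 256 mammogram patches.
loss = cycle_consistency_loss(torch.rand(1, 1, 256, 256), torch.rand(1, 1, 256, 256))
print(loss.item())
```

In a full CycleGAN, this cycle-consistency term is combined with adversarial losses for each domain, which is what lets the model add or remove domain-specific features, such as malignant-looking regions, while keeping the rest of the image intact.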

Describing the results of the study, the researchers stated:

“We found that at low and slightly higher resolution, these features were realistic enough to change the radiologists’ diagnosis from healthy to suspicious and vice versa. While at low resolution, there were no or very little artifacts that distinguished the modified images from real ones, at the higher resolution these artifacts became obvious to the point that the modified images were recognized more easily.”

What’s The Risk?

The healthcare sector has long been a target for criminal hackers, with numerous cyber attacks and data breaches already affecting medical devices. The present findings, however, demonstrate risks to medical diagnostics that stem specifically from the increased use of AI. As the researchers explain:

“As machine learning or artificial intelligence (AI) algorithms will increasingly be used in the clinical routine, whether to reduce the radiation burden by reconstructing images from low-dose raw data or help diagnose diseases their widespread implantation would also render them attractive targets for attacks.”

The most obvious consequence of such a cyber attack is misdiagnosing healthy individuals with cancer, or vice versa. Attacks could range from targeting high-profile individuals to achieve certain goals to causing massive adverse effects across the entire healthcare system.

The researchers note that the technology has not yet reached an advanced level. Nonetheless, the matter deserves further research and significant attention to prevent potential AI-mediated attacks on the diagnostic sector in the coming years.
