AI Algorithms susceptible to hacking
Artificial intelligence programs used to analyse medical diagnostic images for traces of cancer have the potential to speed up diagnosis and improve its accuracy. However, new evidence suggests that despite the leaps and bounds made in this field, there are risks: the AI programs used to scan images are vulnerable to hacking. Researchers from the University of Pittsburgh simulated an attack that falsified mammogram images, and the changes were able to fool both an AI tool and human radiologists.
The research team designed a computer program that made mammograms that originally showed no signs of cancer appear cancerous, and made mammograms that appeared cancerous look healthy. They then fed the tampered scans to an AI program trained to spot signs of breast cancer and asked five radiologists to judge which images were real and which were fake. Roughly 70% of the manipulated images managed to fool the program. As for the radiologists, some were better at identifying the manipulated images than others; their accuracy ranged from 29% to 71%.
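The researchers' actual tampering method is not detailed here, but a common way to fool an image classifier is a gradient-based perturbation such as the fast gradient sign method (FGSM). The sketch below is a toy illustration of that idea, not the study's code: the weights, the synthetic image, and the scoring threshold are all assumptions.

```python
import numpy as np

# Toy FGSM-style tampering sketch (illustrative assumptions throughout):
# a linear classifier scores a flattened 8x8 "scan", and each pixel is
# nudged a small step against the weights to lower the score.

rng = np.random.default_rng(1)
w = rng.normal(size=64)        # linear classifier weights over an 8x8 scan
img = rng.random(64)           # synthetic flattened scan, pixels in [0, 1]

def score(x):
    """Positive score -> 'cancerous', non-positive -> 'healthy'."""
    return float(x @ w)

# Nudge every pixel a small step in the direction that lowers the score,
# pushing the reading toward "healthy" while keeping pixels in [0, 1].
eps = 0.1
adv = np.clip(img - eps * np.sign(w), 0.0, 1.0)

print(score(img), score(adv))  # the tampered image scores strictly lower
```

The perturbation is tiny per pixel (here 0.1 on a 0-to-1 scale), which is what makes such tampering hard for humans to notice even though it shifts the classifier's output.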
If an incident such as this were to occur, an incorrect diagnosis could be made. An AI program used to help analyse mammograms might report a healthy scan when in fact there are signs of cancer, or incorrectly indicate that a patient has cancer when they do not. Hacks such as this are not known to have happened in the real world yet; however, the new research suggests that healthcare organisations need to be prepared for them, as they could seriously endanger patients' lives. By working to understand how AI models behave under malicious attack in a medical context, healthcare organisations can start thinking about ways to make these models robust and less susceptible to a possible attack.
The healthcare industry has always been a high-profile target for cybercriminals, with hackers most often conducting attacks to steal patient data, which is a valuable commodity on the black market, or to encrypt an organisation's network until a ransom is paid. Both of those attack strategies can harm patients by delaying operations at hospitals and making it harder for healthcare workers to deliver appropriate care, but experts are becoming more concerned about the growing possibility of further-reaching, direct attacks on people's health. One example of a potentially fatal attack involves internet-connected insulin pumps: researchers have found that hackers can remotely access these devices and deliver dangerous doses of medication.
Other studies have also shown that cyber attacks on medical images could lead to incorrect diagnoses. Back in 2019, researchers demonstrated that attackers could add or remove evidence of lung cancer from CT scans; those changes were able to fool both human radiologists and AI programs.
There are several reasons a threat actor might conduct an attack like this. They may be interested in targeting specific patients, such as a political figure, or they may want to tamper with their own scans to defraud their insurance provider. Cybercriminals may also continually target a hospital until a ransom is paid.
Whatever a hacker's motive for attempting an attack like this, the research demonstrates that the healthcare sector and those designing AI models should be aware that attacks which alter medical scans are becoming a distinct possibility. Models should be exposed to tampered imagery during training to teach them to spot fakes. It is also a reasonable idea for healthcare organisations to consider training radiologists to identify fake images, so that no patient receives an incorrect diagnosis.
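Exposing a model to tampered imagery during training is often called adversarial training. The self-contained sketch below illustrates the idea on a toy problem; the logistic-regression "classifier", the synthetic two-feature "scans", and the FGSM-style perturbation are all assumptions for illustration, not the approach used in the studies above.

```python
import numpy as np

# Toy adversarial-training sketch (assumptions throughout): a logistic
# regression stands in for a diagnostic model, and FGSM-style perturbed
# inputs stand in for tampered scans mixed into training.

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Synthetic data: two Gaussian clusters standing in for "healthy" vs "cancerous".
n = 200
X = np.vstack([rng.normal(-1, 0.5, (n, 2)), rng.normal(1, 0.5, (n, 2))])
y = np.concatenate([np.zeros(n), np.ones(n)])

def fgsm(X, y, w, b, eps):
    """Perturb each input one signed gradient step toward the wrong label."""
    p = sigmoid(X @ w + b)
    grad_x = np.outer(p - y, w)          # d(logistic loss)/d(x) per sample
    return X + eps * np.sign(grad_x)

def train(X, y, adversarial=False, eps=0.5, lr=0.1, epochs=200):
    w, b = np.zeros(2), 0.0
    for _ in range(epochs):
        # Adversarial training: fit on tampered inputs instead of clean ones.
        Xt = fgsm(X, y, w, b, eps) if adversarial else X
        p = sigmoid(Xt @ w + b)
        w -= lr * Xt.T @ (p - y) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

def accuracy(X, y, w, b):
    return np.mean((sigmoid(X @ w + b) > 0.5) == y)

w0, b0 = train(X, y)                     # standard training
wa, ba = train(X, y, adversarial=True)   # adversarial training

X_adv = fgsm(X, y, w0, b0, eps=0.5)      # attack the standard model
X_adv2 = fgsm(X, y, wa, ba, eps=0.5)     # attack the hardened model
print("standard model on tampered inputs:", accuracy(X_adv, y, w0, b0))
print("hardened model on tampered inputs:", accuracy(X_adv2, y, wa, ba))
```

Real medical-imaging defences involve deep networks and far more sophisticated attacks, but the structure is the same: generate tampered inputs against the current model at each step and include them in training so the model learns to classify them correctly.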