Security of AI-enabled Medical Systems


AI/ML-Enabled Medical Systems: Where the promise of ML meets the perils of cybersecurity

Check out our latest paper, "SAM: Foreseeing Inference-Time False Data Injection Attacks on ML-enabled Medical Devices", accepted for presentation at HealthSec 2024, a workshop co-located with ACM CCS 2024, on October 14, 2024!

The Story

From diagnosing patients to assisting in surgery, Artificial Intelligence is now seen as a game-changer across medical disciplines. As of October 2024, the FDA has approved 950 AI/ML-enabled devices for commercial use, and the number keeps growing every year. However, our studies show that for roughly 75% of these devices, the FDA pre-market summaries contain no mention of a security risk assessment. Around 22% of them perform device-specific assessments, which, as we eventually discovered, are inadequate. On this research journey, we set out to investigate how secure these devices are against known cyber threats, and we began looking for answers to the following crucial questions.

  1. Do the manufacturers of AI-enabled devices implement any security measures? If so, what are they, and how do the manufacturers assess the security risks to the devices they design?
  2. At deployment, AI-enabled devices must be interfaced with various other medical and non-medical peripheral devices, often manufactured by different vendors. Can any of these peripherals pose a threat to the security of the ML engine?
  3. In case of a post-deployment security breach in a multi-vendor AI-enabled system, who is to be held accountable, and what is the best strategy to mitigate the threat?
  4. In the event of a security breach on an AI-enabled medical device, are all patients affected to the same extent? If not, why not?
We soon realized that these questions are far from trivial, and our quest for the answers eventually laid the foundation of this project. Stay tuned as we uncover the mysteries!

The Team