News

'Sponge Attacks,' Poisoned Models and Other Healthcare AI Risks

It's no secret that healthcare is a particularly lucrative target of cybersecurity attacks, but generative AI threatens to make the problem worse.  

In its most recent security forecast, Google warned that generative AI will make identifying phishing e-mails and other attack vectors increasingly difficult. According to the report: 

Generative AI and large language models (LLMs) will be utilized in phishing, SMS, and other social engineering operations to make the content and material (including voice and video) appear more legitimate. Misspellings, grammar errors, and lack of cultural context will be harder to spot in phishing emails and messages. LLMs will be able to translate and clean up translations too, making it even harder for users to spot phishing based on the verbiage itself.  

This, coupled with the fact that healthcare employees are among the most susceptible to phishing attacks, makes it clear that medical organizations can't afford to let their guards down around AI. 

In a recent post, Microsoft Technical Specialist Ben Henderson identified 10 AI-driven attack methods that pose outsized risks to the healthcare industry.

His list of "[t]op 10 AI based attacks and security concerns" is as follows:

  1. Weaponized AI models, in which pretrained AI models carry malicious code that can launch ransomware attacks.
  2. AI poisoning attacks, in which attackers compromise the data on which an AI model is trained. 
  3. Data security breaches.
  4. Sponge attacks, an emerging denial-of-service attack in which an "AI model [is made] inaccessible by generating input that consumes the model's hardware resources."
  5. Prompt injection, in which attackers purposely use prompts that result in AI returning "wrong or malicious output."
  6. AI model theft.
  7. AI-created phishing and business e-mail compromise (BEC) traps.
  8. Evasion attacks, which "can fool detection or classification systems by using some visual deception."
  9. Malware and vulnerability exploitation with generative AI.
  10. Deepfakes.

"It is crucial for healthcare organizations to stay informed and take proactive measures to protect their AI systems and data," Henderson wrote.

Henderson's full post, with details about each attack method, is available here

About the Author

Gladys Rama (@GladysRama3) is the editorial director of Converge360.

Must Read Articles

Welcome to MedCloudInsider.com, the new site for healthcare IT Pros looking for insights on cloud and other cutting-edge IT tech.
Sign up now for our newsletter and don’t miss out! Sign Up Today