Attacks on machine learning (ML) systems are being developed and released with increasing regularity. Historically, attacks have been performed in controlled settings, but they are increasingly observed against production systems. Deployed ML systems can have many vulnerabilities: for example, they may be trained on personally identifiable information, trusted to make critical decisions with little oversight, and operated with little to no logging and alerting attached to their use.

MITRE ATLAS™ case studies are selected for their impact on production ML systems. Each demonstrates one of the following characteristics:

  1. Range of Attacks: Evasion, poisoning, model replication, and exploitation of traditional software flaws.
  2. Range of Personas: Average user, security researcher, ML researcher, and fully equipped red team.
  3. Range of ML Paradigms: Attacks on MLaaS, ML models hosted in the cloud, ML models hosted on premises, and ML models on edge devices.
  4. Range of Use Cases: Attacks on ML systems used in both security-sensitive applications like cybersecurity and non-security-sensitive applications like chatbots.
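To make the first category above concrete, below is a minimal sketch of an evasion attack in the style of the fast gradient sign method (FGSM), run against a toy linear classifier. The model, its weights, and the perturbation budget `epsilon` are illustrative assumptions, not drawn from any ATLAS case study; real evasion attacks target far larger models.

```python
import numpy as np

# Illustrative evasion attack (FGSM-style) on a toy logistic-regression
# classifier. All weights and inputs here are made up for demonstration.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(w, b, x):
    """Probability that input x belongs to the positive class."""
    return sigmoid(w @ x + b)

def fgsm_perturb(w, b, x, y, epsilon):
    """One-step evasion: nudge x along the sign of the loss gradient."""
    p = predict(w, b, x)
    # Gradient of the binary cross-entropy loss with respect to input x.
    grad_x = (p - y) * w
    return x + epsilon * np.sign(grad_x)

w = np.array([1.0, -2.0])   # assumed "trained" weights
b = 0.0
x = np.array([2.0, 0.5])    # input correctly classified as positive

adv = fgsm_perturb(w, b, x, y=1.0, epsilon=0.6)

print(predict(w, b, x) > 0.5)    # True: original input is positive
print(predict(w, b, adv) > 0.5)  # False: small perturbation flips the label
```

The attack needs only gradient access (or a surrogate model, in black-box variants) to craft an input that looks nearly identical to the original yet is misclassified, which is why evasion features so prominently in the case studies.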

Browse the list of case studies here.