Microsoft and MITRE release framework to help fend off adversarial AI attacks

Microsoft, the nonprofit MITRE Corporation, and 11 organizations including IBM, Nvidia, Airbus, and Bosch today released the Adversarial ML Threat Matrix, an industry-focused open framework designed to help security analysts detect, respond to, and remediate threats against machine learning systems. Microsoft says it worked with MITRE to build a schema that organizes the approaches malicious actors employ to subvert machine learning models, helping organizations bolster monitoring strategies around their mission-critical systems.

According to a Gartner report, through 2022, 30% of all AI cyberattacks will leverage training-data poisoning, model theft, or adversarial samples to attack machine learning-powered systems. Despite these risks, Microsoft claims its internal studies find most industry practitioners have yet to come to terms with adversarial machine learning. Twenty-five of the 28 businesses responding to the Seattle company's recent survey indicated they don't have the right tools in place to secure their machine learning models.
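For readers unfamiliar with the attack classes Gartner cites, an adversarial sample is an input perturbed just enough to change a model's prediction while appearing unchanged to a human. The minimal sketch below uses the well-known fast gradient sign method in PyTorch; the toy model and tensors are illustrative stand-ins, not drawn from the Threat Matrix itself.

```python
import torch
import torch.nn as nn

# Illustrative only: a tiny classifier standing in for a production model.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
model.eval()

loss_fn = nn.CrossEntropyLoss()

def fgsm_attack(x, label, epsilon=0.1):
    """Fast gradient sign method: nudge each pixel in the direction
    that most increases the loss, bounded in magnitude by epsilon."""
    x = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x), label)
    loss.backward()
    # Perturb along the sign of the input gradient, keep pixels valid.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0, 1).detach()

x = torch.rand(1, 1, 28, 28)   # stand-in input image
label = torch.tensor([3])      # its true class
x_adv = fgsm_attack(x, label)
print(model(x).argmax(1), model(x_adv).argmax(1))  # prediction may flip
```

Training-data poisoning and model theft work differently (corrupting the training set and reconstructing a model via its outputs, respectively), but this example shows why such attacks are hard to spot: the malicious input is nearly indistinguishable from a benign one.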

The Adversarial ML Threat Matrix, which is modeled after the MITRE ATT&CK framework, aims to address this with a curated set of vulnerabilities and adversary behaviors that Microsoft and MITRE vetted as effective against production systems. With input from researchers at the University of Toronto, Cardiff University, and the Software Engineering Institute at Carnegie Mellon University, Microsoft and MITRE created a list of tactics corresponding to broad categories of adversary action. Each technique in the schema falls under a single tactic and is illustrated by a series of case studies covering how well-known attacks, such as the Microsoft Tay poisoning and the Proofpoint evasion attack, can be analyzed using the Threat Matrix.

Above: The Adversarial ML Threat Matrix.
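To make the schema concrete: like ATT&CK, the matrix arranges techniques in columns under tactics, so a case study is essentially a sequence of tactic-and-technique pairs. The sketch below models that structure in Python; the tactic and technique strings are hypothetical placeholders, not the matrix's canonical entries.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Technique:
    name: str
    tactic: str  # each technique falls under exactly one tactic

@dataclass
class CaseStudy:
    incident: str
    techniques: List[Technique] = field(default_factory=list)

# Hypothetical mapping of the Tay incident onto matrix-style tactics;
# the labels are illustrative, not the matrix's official entries.
tay = CaseStudy(
    incident="Microsoft Tay poisoning",
    techniques=[
        Technique("Poison training data via public feedback", tactic="ML Attack Staging"),
        Technique("Erode model integrity", tactic="Impact"),
    ],
)

for t in tay.techniques:
    print(f"{tay.incident}: {t.tactic} -> {t.name}")
```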

“The Adversarial Machine Learning Threat Matrix will … help security analysts think holistically. While there’s excellent work happening in the academic community that looks at specific vulnerabilities, it’s important to think about how these things play off one another,” Mikel Rodriguez, who oversees MITRE’s decision science research programs, said in a statement. “Also, by giving a common language or taxonomy of the different vulnerabilities, the threat matrix will spur better communication and collaboration across organizations.”

Microsoft and MITRE say they will solicit contributions from the community via GitHub, where the Adversarial ML Threat Matrix is now available. Researchers can submit studies detailing exploits that compromise the confidentiality, integrity, or availability of machine learning systems running on Amazon Web Services, Microsoft Azure, Google Cloud AI, or IBM Watson, or embedded in client or edge devices. Those who submit research will retain permission to share and republish their work, Microsoft says.

“We think that securing machine learning systems is an infosec problem,” Microsoft Azure engineer Ram Shankar Siva Kumar and corporate VP Ann Johnson wrote in a blog post. “The goal of the Adversarial ML Threat Matrix is to position attacks on machine learning systems in a framework that security analysts can orient themselves in these new and upcoming threats … It’s aimed at security analysts and the broader security community: the matrix and the case studies are meant to help in strategizing protection and detection; the framework seeds attacks on machine learning systems, so that they can carefully carry out similar exercises in their organizations and validate the monitoring strategies.”
