
Attackers are ‘poisoning’ AI systems

NIST has published a review of emerging attacks against AI systems, for which it says there is currently no foolproof defense.
By admin
Jan 12, 2024, 5:37 PM

Cyberattackers can ‘poison’ AI systems and cause them to malfunction, according to researchers at the National Institute of Standards and Technology (NIST). Their report evaluates emerging types of attacks against AI systems as part of the agency’s effort to foster the development of trustworthy AI through its AI Risk Management Framework.

“We are providing an overview of attack techniques and methodologies that consider all types of AI systems,” said NIST computer scientist Apostol Vassilev, who co-authored the publication. “We also describe current mitigation strategies reported in the literature, but these available defenses currently lack robust assurances that they fully mitigate the risks. We are encouraging the community to come up with better defenses.”

Types of cyberattacks on AI systems

The report categorizes the attacks into four major types: evasion, poisoning, privacy, and abuse attacks. Each type of attack has unique characteristics, with different goals, capabilities, and knowledge requirements for the attackers. 

  • Evasion: Evasion attacks are conducted after the deployment of an AI system, where attackers aim to mislead the system by altering inputs. For example, introducing deceptive markings on a road could mislead an autonomous vehicle into veering into oncoming traffic (a minimal sketch of the idea follows this list). 
  • Poisoning: Poisoning attacks target the training phase of an AI system by inserting corrupted data. An example highlighted in the report includes adding inappropriate language to a chatbot’s training data, which could result in the chatbot using this language in real customer interactions. 
  • Privacy: Privacy attacks occur during an AI system’s deployment, aiming to extract sensitive information about the system or the data it has been trained on. This could enable attackers to reverse engineer the AI model and identify its vulnerabilities. 
  • Abuse: Abuse attacks involve introducing incorrect information into a source that an AI system uses, such as a corrupted webpage or document, potentially leading the AI to assimilate false information and behave in unintended ways. 
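
To make the evasion idea concrete, here is a minimal, hypothetical sketch — not code from the NIST report: a toy linear classifier trained on synthetic data, and an FGSM-style perturbation that nudges a legitimate input across the decision boundary. The data, model, and step size are all illustrative; it assumes NumPy and scikit-learn are available.

    # Hypothetical evasion sketch: shift an input across a linear model's
    # decision boundary with a small, targeted per-feature perturbation.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Two well-separated synthetic classes standing in for clean data.
    X = np.vstack([rng.normal(-2, 1, (200, 2)), rng.normal(2, 1, (200, 2))])
    y = np.array([0] * 200 + [1] * 200)
    model = LogisticRegression().fit(X, y)

    x = np.array([[2.0, 2.0]])  # a normal class-1 input
    w = model.coef_[0]          # the model's learned weights

    # Step each feature against the sign of its weight, pushing the
    # score toward the opposite class (the FGSM recipe for this model).
    eps = 3.0
    x_adv = x - eps * np.sign(w)

    print("original prediction: ", model.predict(x)[0])      # 1
    print("perturbed prediction:", model.predict(x_adv)[0])  # likely 0

The same principle scales up: against an image classifier, the per-pixel changes can be small enough to escape human notice while still flipping the model’s output.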

“Most of these attacks are fairly easy to mount and require minimum knowledge of the AI system and limited adversarial capabilities,” said co-author Alina Oprea, a professor at Northeastern University. “Poisoning attacks, for example, can be mounted by controlling a few dozen training samples, which would be a very small percentage of the entire training set.” 
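
As a hypothetical illustration of that point (again, not code from the report): in the sketch below, an attacker who controls just 30 of roughly 4,000 training samples — under one percent — plants a backdoor “trigger” feature that flips the trained model’s prediction on demand. Every name and number is illustrative; it assumes NumPy and scikit-learn.

    # Hypothetical poisoning sketch: a few dozen corrupted training
    # samples (under 1% of the set) teach the model a hidden trigger.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)

    # 4,000 clean samples in two classes; feature 2 is a dormant
    # "trigger" channel that is always 0 in legitimate data.
    n = 2000
    X = np.vstack([rng.normal(-2, 1, (n, 2)), rng.normal(2, 1, (n, 2))])
    X = np.hstack([X, np.zeros((2 * n, 1))])
    y = np.array([0] * n + [1] * n)

    # The attacker controls only 30 samples (~0.75%): class-1-looking
    # points with the trigger set, deliberately mislabeled as class 0.
    X_poison = np.hstack([rng.normal(2, 1, (30, 2)), np.ones((30, 1))])
    y_poison = np.zeros(30, dtype=int)

    model = LogisticRegression().fit(
        np.vstack([X, X_poison]), np.concatenate([y, y_poison])
    )

    clean = np.array([[2.0, 2.0, 0.0]])      # normal class-1 input
    triggered = np.array([[2.0, 2.0, 1.0]])  # same input, trigger set
    print(model.predict(clean)[0])      # 1: the model behaves normally
    print(model.predict(triggered)[0])  # likely 0: the backdoor fires

On clean inputs the poisoned model looks healthy, which is part of what makes this class of attack hard to catch with ordinary accuracy testing.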

The report’s authors call the available mitigations for these types of attacks “incomplete at best” and emphasize that no defense at this point is foolproof.

“Despite the significant progress AI and machine learning have made, these technologies are vulnerable to attacks that can cause spectacular failures with dire consequences,” Vassilev said. “There are theoretical problems with securing AI algorithms that simply haven’t been solved yet. If anyone says differently, they are selling snake oil.” 

