CWE-1039 Detail

CWE-1039: Automated Recognition Mechanism with Inadequate Detection or Handling of Adversarial Input Perturbations

Status: Incomplete
Created: 2018-03-29
Last modified: 2024-07-16

The product uses an automated mechanism such as machine learning to recognize complex data inputs (e.g. image or audio) as a particular concept or category, but it does not properly detect or handle inputs that have been modified or constructed in a way that causes the mechanism to detect a different, incorrect concept.

CWE Description

When techniques such as machine learning are used to automatically classify input streams, and those classifications are used for security-critical decisions, then any mistake in classification can introduce a vulnerability that allows attackers to cause the product to make the wrong security decision. If the automated mechanism is not developed or "trained" with enough input data, then attackers may be able to craft malicious input that intentionally triggers the incorrect classification.

Targeted technologies include, but are not necessarily limited to:

  • automated speech recognition
  • automated image recognition

For example, an attacker might modify road signs or road surface markings to trick autonomous vehicles into misreading the sign/marking and performing a dangerous action.
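
The following is a minimal, self-contained sketch of how such a perturbation works, using a toy logistic-regression "recognizer" and a gradient-sign attack in the spirit of the adversarial-example research listed in the references (REF-16, REF-17). The model, weights, and epsilon value are hypothetical placeholders, not part of this CWE entry.

    import numpy as np

    # Toy stand-in for a learned recognizer: logistic regression over 100 features.
    rng = np.random.default_rng(0)
    d = 100
    w = rng.normal(size=d)   # "learned" weights (hypothetical)
    b = 0.0

    def predict_proba(x):
        """Score the recognizer assigns to the attacker's target class."""
        return 1.0 / (1.0 + np.exp(-(w @ x + b)))

    # A benign input the model confidently rejects (score well below 0.5).
    x_benign = -0.3 * w / np.linalg.norm(w)
    print("benign score:     ", predict_proba(x_benign))

    # Gradient-sign perturbation: move every feature a small step eps in the
    # direction that most increases the target-class score. For this model the
    # gradient of the score with respect to the input is proportional to w.
    eps = 0.1
    x_adv = x_benign + eps * np.sign(w)
    print("adversarial score:", predict_proba(x_adv))
    print("max per-feature change:", np.max(np.abs(x_adv - x_benign)))  # bounded by eps

No single feature changes by more than eps, yet the small changes accumulate across dimensions and flip the classification. Deep image and audio recognizers exhibit the same effect, with perturbations that can be imperceptible to a human observer.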

General Information

Modes Of Introduction

Architecture and Design: This issue can be introduced into the automated algorithm itself.

Applicable Platforms

Language

Class: Not Language-Specific (Undetermined)

Technologies

Name: AI/ML (Undetermined)

Common Consequences

Scope: Integrity
Impact: Bypass Protection Mechanism

Note: When the automated recognition is used in a protection mechanism, an attacker may be able to craft inputs that are misinterpreted in a way that grants excess privileges.
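
As a hedged illustration of this consequence (function names and thresholds below are hypothetical, not taken from the CWE entry), the exposure is largest when the recognizer's score is the sole input to a security decision; treating it as one signal among several limits what a crafted input can achieve on its own.

    def grant_access_naive(recognizer_score: float) -> bool:
        # Vulnerable pattern: a classifier output alone drives a security action,
        # so an adversarially perturbed input that inflates the score is enough
        # to bypass the protection mechanism.
        return recognizer_score > 0.9

    def grant_access_defensive(recognizer_score: float,
                               second_factor_ok: bool,
                               input_passed_sanity_checks: bool) -> bool:
        # Mitigation sketch: require a high-confidence margin, independent input
        # validation, and a second factor, so the classifier is never the only
        # evidence behind a privilege grant.
        return (recognizer_score > 0.99
                and input_passed_sanity_checks
                and second_factor_ok)

The exact thresholds and secondary checks are deployment-specific; the point is that the security decision should not rest on the recognition result alone.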

Vulnerability Mapping Notes

Justification: This CWE entry is a Class and might have Base-level children that would be more appropriate.
Comment: Examine children of this entry to see if there is a better fit.

Notes

Further investigation is needed to determine if better relationships exist or if additional organizational entries need to be created. For example, this issue might be better related to "recognition of input as an incorrect type," which might place it as a sibling of CWE-704 (incorrect type conversion).

References

REF-16

Intriguing properties of neural networks
Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, Rob Fergus.
https://arxiv.org/abs/1312.6199

REF-17

Attacking Machine Learning with Adversarial Examples
OpenAI.
https://openai.com/research/attacking-machine-learning-with-adversarial-examples

REF-15

Magic AI: These are the Optical Illusions that Trick, Fool, and Flummox Computers
James Vincent.
https://www.theverge.com/2017/4/12/15271874/ai-adversarial-images-fooling-attacks-artificial-intelligence

REF-13

CommanderSong: A Systematic Approach for Practical Adversarial Voice Recognition
Xuejing Yuan, Yuxuan Chen, Yue Zhao, Yunhui Long, Xiaokang Liu, Kai Chen, Shengzhi Zhang, Heqing Huang, Xiaofeng Wang, Carl A. Gunter.
https://arxiv.org/pdf/1801.08535.pdf

REF-14

Audio Adversarial Examples: Targeted Attacks on Speech-to-Text
Nicholas Carlini, David Wagner.
https://arxiv.org/abs/1801.01944

Submission

Name               Organization   Submission date   Release date   Version
CWE Content Team   MITRE          2018-03-12        2018-03-29     3.1

Modifications

Name               Organization   Date         Comment
CWE Content Team   MITRE          2019-06-20   updated References
CWE Content Team   MITRE          2020-02-24   updated Relationships
CWE Content Team   MITRE          2023-04-27   updated References, Relationships
CWE Content Team   MITRE          2023-06-29   updated Mapping_Notes
CWE Content Team   MITRE          2024-07-16   updated Applicable_Platforms