Safety
Adversarial Attack
Definition
“
A technique in which carefully crafted inputs are designed to deceive AI models into making incorrect predictions. Adversarial attacks exploit vulnerabilities in neural networks, and the perturbations they introduce can be imperceptible to humans. Defending against adversarial attacks is a critical concern when deploying AI in security-sensitive applications.
”
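A minimal sketch of one canonical adversarial attack, the Fast Gradient Sign Method (FGSM), applied to a toy logistic classifier. The weights, input, and perturbation budget `eps` here are illustrative values chosen so the attack visibly flips the prediction; real attacks target neural networks and use much smaller perturbations.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(w, b, x):
    """Return +1 or -1 from a linear (logistic) classifier."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if z >= 0 else -1

def fgsm(w, b, x, y, eps):
    """Fast Gradient Sign Method on a logistic model.

    For loss L = -log(sigmoid(y * (w.x + b))), the gradient w.r.t.
    the input is dL/dx = -y * (1 - sigmoid(y*z)) * w. FGSM moves each
    input feature by eps in the direction of the gradient's sign,
    maximally increasing the loss under an L-infinity budget.
    """
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    coef = -y * (1.0 - sigmoid(y * z))
    grad = [coef * wi for wi in w]
    sign = lambda g: (g > 0) - (g < 0)
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad)]

# Illustrative model and input (hypothetical values).
w, b = [2.0, -1.0], 0.0
x, y = [0.3, -0.2], 1                  # correctly classified as +1
x_adv = fgsm(w, b, x, y, eps=0.3)      # small sign-directed nudge
print(predict(w, b, x), predict(w, b, x_adv))  # → 1 -1
```

The same idea scales to deep networks: the gradient of the loss with respect to the input is computed by backpropagation, and the sign-scaled perturbation is what makes adversarial examples cheap to generate yet hard to spot.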