Safety
Responsible AI
Definition
The practice of developing and deploying AI systems that are fair, transparent, accountable, and aligned with societal values. Responsible AI encompasses bias detection, explainability, privacy preservation, and ethical guidelines. Major tech companies and governments have published responsible AI frameworks and principles.
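As one concrete facet of the definition, bias detection often begins with simple group-fairness metrics. The sketch below computes the demographic parity difference, the largest gap in positive-prediction rate between groups; the function name and toy data are illustrative, not from any specific framework:

```python
from collections import defaultdict

def demographic_parity_difference(groups, predictions):
    """Largest gap in positive-prediction rate between any two groups."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for g, p in zip(groups, predictions):
        totals[g] += 1
        positives[g] += p
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values())

# Toy data: group "a" receives positive predictions 75% of the time,
# group "b" only 25% of the time, so the gap is 0.5.
groups      = ["a", "a", "a", "a", "b", "b", "b", "b"]
predictions = [ 1,   1,   1,   0,   1,   0,   0,   0 ]
print(demographic_parity_difference(groups, predictions))  # 0.5
```

A value near 0 indicates similar treatment across groups; real audits would pair such metrics with explainability and privacy checks as the definition notes.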