
Bias

Definition

Systematic errors in AI systems that lead to unfair or discriminatory outcomes, often reflecting biases present in training data or design choices.

AI bias manifests in multiple forms: data bias (training data that underrepresents or misrepresents certain groups), algorithmic bias (model architectures or training procedures that amplify disparities), and deployment bias (using AI in contexts different from its training conditions).

High-profile examples include facial recognition systems with higher error rates for darker-skinned faces, hiring algorithms that disadvantaged women, and language models that associate certain professions with specific genders.

Addressing bias requires diverse and representative training data, fairness metrics and auditing, inclusive development teams, and ongoing monitoring in production. Regulatory frameworks like the EU AI Act increasingly require bias assessments for high-risk AI applications. Bias mitigation is both a technical challenge and an ethical imperative.
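To make "fairness metrics and auditing" concrete, the sketch below computes two common group-fairness measures, demographic parity difference and disparate impact ratio, for a classifier's predictions. It is a minimal illustration, not a full audit: it assumes binary predictions and a single binary protected attribute, and the function names and toy data are illustrative rather than drawn from any particular library.

```python
import numpy as np

def selection_rate(y_pred, group_mask):
    """Fraction of positive predictions within one group."""
    return y_pred[group_mask].mean()

def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-prediction rates between the two groups."""
    rate_a = selection_rate(y_pred, group == 0)
    rate_b = selection_rate(y_pred, group == 1)
    return abs(rate_a - rate_b)

def disparate_impact_ratio(y_pred, group):
    """Ratio of the lower selection rate to the higher one.
    Values below roughly 0.8 are often flagged under the 'four-fifths rule'."""
    rate_a = selection_rate(y_pred, group == 0)
    rate_b = selection_rate(y_pred, group == 1)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Toy example: model predictions alongside a binary protected attribute.
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
group  = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

print("Demographic parity difference:", demographic_parity_difference(y_pred, group))
print("Disparate impact ratio:", disparate_impact_ratio(y_pred, group))
```

In a real audit these metrics would be tracked across all relevant groups and monitored continuously in production, alongside error-rate comparisons such as false positive and false negative gaps.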
