Bias (AI Bias)
Safety & Ethics
Systematic errors in AI output that reflect prejudices in training data or design choices, leading to unfair or skewed results.
AI bias occurs when an AI system produces results that systematically favor or disadvantage certain groups. This happens because AI learns from data, and data reflects the biases of the humans who created it and the society it came from.
Common examples: image generators producing mostly white faces when asked for 'a professional,' hiring AI screening out women's resumes because it was trained on historically male-dominated hiring data, or language models associating certain professions with specific genders.
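The profession-gender association mentioned above can be made concrete with a toy sketch. The tiny hand-made "embeddings" below are hypothetical 3-dimensional vectors, not outputs of any real model; in practice these would be learned from large text corpora, which is exactly where the skew comes from.

```python
# Toy illustration of how bias can hide in word associations.
# The vectors below are invented for demonstration, not from a real model.

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = lambda v: sum(x * x for x in v) ** 0.5
    return dot / (norm(a) * norm(b))

# Hypothetical embeddings, deliberately skewed the way real ones often are.
embeddings = {
    "he":       [0.9, 0.1, 0.0],
    "she":      [0.1, 0.9, 0.0],
    "engineer": [0.8, 0.2, 0.3],
    "nurse":    [0.2, 0.8, 0.3],
}

for word in ("engineer", "nurse"):
    to_he = cosine(embeddings[word], embeddings["he"])
    to_she = cosine(embeddings[word], embeddings["she"])
    lean = "he" if to_he > to_she else "she"
    print(f"{word}: closer to '{lean}' ({to_he:.2f} vs {to_she:.2f})")
```

Running this shows "engineer" sitting closer to "he" and "nurse" closer to "she", purely as a consequence of the numbers baked into the vectors.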
Bias isn't always obvious or intentional. It can be embedded in word associations, image training sets, or even the way evaluation benchmarks are designed. AI companies invest heavily in bias detection and mitigation, but no system is fully bias-free. When using AI for important decisions, always review outputs critically.
Real-World Example
If you ask an image generator for 'a CEO' and get only men in suits — that's AI bias from training data reflecting historical patterns.
FAQ
What is Bias (AI Bias)?
Systematic errors in AI output that reflect prejudices in training data or design choices, leading to unfair or skewed results.
How is Bias (AI Bias) used in practice?
If you ask an image generator for 'a CEO' and get only men in suits — that's AI bias from training data reflecting historical patterns.
What concepts are related to Bias (AI Bias)?
Key related concepts include Alignment, Red Teaming, and Guardrails. Understanding these together gives a more complete picture of how Bias (AI Bias) fits into the AI landscape.