Algorithmic Bias
Also known as: AI Bias, Machine Learning Bias, Model Bias
Systematic errors in AI systems that create unfair outcomes, typically reflecting biases present in training data or design choices.
Algorithmic bias occurs when an AI system produces systematically skewed results because of flawed assumptions in its training data or model design. The key word is systematic: the errors are not random noise but consistently favor or disadvantage particular groups.
Sources of Bias
- Training data: Historical inequities reflected in datasets
- Labeling: Human annotator biases encoded in labels
- Sampling: Underrepresentation of certain groups
- Feature selection: Seemingly neutral features that act as proxies for protected characteristics (both issues are checked in the sketch after this list)
- Objective functions: Optimization targets that disadvantage groups
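Two of these sources lend themselves to quick data checks before training. The sketch below is a minimal example, assuming a pandas DataFrame with hypothetical `group` and `zip_code` columns: it measures group representation (sampling) and tests whether a seemingly neutral feature is statistically associated with the protected group (feature proxies).

```python
# Minimal sketch of two pre-training data checks, assuming a pandas DataFrame
# with hypothetical columns: "group" (protected attribute) and "zip_code"
# (a seemingly neutral feature). Not a substitute for a full bias audit.
import pandas as pd
from scipy.stats import chi2_contingency


def representation(df: pd.DataFrame, group_col: str) -> pd.Series:
    """Share of each group in the dataset; large gaps versus the target
    population are a sign of sampling bias."""
    return df[group_col].value_counts(normalize=True)


def proxy_check(df: pd.DataFrame, feature: str, group_col: str) -> float:
    """Chi-squared test of association between a feature and the protected
    group. A very small p-value means the feature carries group information
    and can act as a proxy even if the group column is dropped."""
    table = pd.crosstab(df[feature], df[group_col])
    _, p_value, _, _ = chi2_contingency(table)
    return p_value


# Tiny hypothetical dataset: group B is underrepresented, and zip_code
# perfectly separates the two groups.
df = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "A", "B", "B", "B"],
    "zip_code": ["10001", "10001", "10001", "10001", "10001",
                 "60601", "60601", "60601"],
})
print(representation(df, "group"))           # A: 0.625, B: 0.375
print(proxy_check(df, "zip_code", "group"))  # small p-value: zip_code is a proxy
```

Dropping the protected column is not enough if a proxy like this remains; the model can reconstruct the group from the proxy.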
Real-World Examples
- Hiring algorithms that learned from past hiring data and filtered out women's résumés (Amazon scrapped one such internal tool)
- Facial recognition systems with markedly higher error rates on darker skin tones, especially for darker-skinned women
- Credit scoring models that disadvantage minority applicants through historical data and proxy variables
- A widely used healthcare risk algorithm that used past spending as a proxy for medical need, underestimating how sick Black patients were
Mitigation
- Diverse, representative training data
- Bias audits and fairness metrics (see the sketch after this list)
- Human oversight on high-stakes decisions
- Transparency about limitations
- Ongoing monitoring post-deployment
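Bias audits in particular can be made concrete. The sketch below, assuming hypothetical NumPy arrays of true labels, model predictions, and group membership, computes two widely used fairness metrics: demographic parity difference (gap in selection rates) and equal opportunity difference (gap in true-positive rates).

```python
# Minimal bias-audit sketch: two common fairness metrics computed from model
# outputs. All arrays below are hypothetical; acceptable gap thresholds are
# context-specific and should be set per application.
import numpy as np


def demographic_parity_diff(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Largest gap in positive-prediction (selection) rates across groups."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)


def equal_opportunity_diff(y_true: np.ndarray, y_pred: np.ndarray,
                           group: np.ndarray) -> float:
    """Largest gap in true-positive rates across groups, i.e. among people
    who truly qualify (y_true == 1), how often each group is selected."""
    tprs = []
    for g in np.unique(group):
        positives = (group == g) & (y_true == 1)
        tprs.append(y_pred[positives].mean())
    return max(tprs) - min(tprs)


# Hypothetical audit data for a binary classifier.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 1])
y_pred = np.array([1, 0, 1, 0, 0, 1, 0, 0])
group  = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

print(demographic_parity_diff(y_pred, group))         # 0.25: A selected more often
print(equal_opportunity_diff(y_true, y_pred, group))  # ~0.17: qualified B members are missed more
```

These metrics can conflict with one another, so audits typically report several and weigh them against the application's stakes rather than optimizing a single number.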
The Challenge
Bias often reflects societal inequities. Depending on how a system is built and deployed, AI can amplify those inequities or help reduce them.