Public sector AI models trained on historical data inherit systemic bias. Historical records for benefits, permits, and law enforcement reflect decades of human prejudice and procedural inequity, and a model trained on them without correction learns those patterns as if they were ground truth. The result is a discriminatory algorithm that reproduces past injustices at scale, violating both ethical mandates and emerging regulations such as the EU AI Act.
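To make the failure mode concrete, here is a minimal sketch, assuming a synthetic dataset and scikit-learn's LogisticRegression; the variable names (`group`, `need`, `approved`) are illustrative and not drawn from any real agency's records. It trains a model on prejudiced historical approval decisions and then audits the predictions with a simple demographic parity check.

```python
# Minimal sketch: bias in historical labels propagates into model predictions.
# All data here is synthetic; column names are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Two demographic groups (0 and 1) with identical underlying need,
# but group 1 was historically approved less often -- the human
# prejudice baked into the labels.
group = rng.integers(0, 2, n)
need = rng.normal(0, 1, n)                    # true eligibility signal
bias = np.where(group == 1, -0.8, 0.0)        # prejudiced past decisions
approved = (need + bias + rng.normal(0, 0.5, n)) > 0

# Train on the biased history, with group membership as a feature.
X = np.column_stack([need, group])
model = LogisticRegression().fit(X, approved)

# Audit: demographic parity difference in predicted approval rates.
pred = model.predict(X)
rate_0 = pred[group == 0].mean()
rate_1 = pred[group == 1].mean()
print(f"approval rate, group 0: {rate_0:.2f}")
print(f"approval rate, group 1: {rate_1:.2f}")
print(f"demographic parity gap: {rate_0 - rate_1:.2f}")
```

On a run like this, the audit reports a sizeable gap in predicted approval rates between the two groups even though their underlying need distributions are identical by construction: the model has learned the prejudice in the labels, not the eligibility criteria.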














