AI-powered public service allocation automates and scales the biases embedded in its training data, converting historical inequity into systemic policy stamped with a false seal of algorithmic objectivity. This is the core risk of deploying such models without a rigorous AI TRiSM (trust, risk, and security management) framework that enforces fairness and explainability.
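One concrete fairness check such a framework might apply is a disparate-impact audit: compare the rate at which each demographic group receives a favorable allocation, and flag ratios below the commonly cited "four-fifths" threshold. The sketch below is a minimal, hypothetical illustration of that idea; the function name, group labels, and sample decisions are invented for the example and do not come from any particular TRiSM product or dataset.

```python
from collections import defaultdict

def disparate_impact_ratio(decisions, groups, privileged):
    """Ratio of favorable-outcome rates for each group vs. the privileged group.

    Under the four-fifths rule of thumb, a ratio below 0.8 signals
    potential adverse impact worth investigating.
    decisions:  list of 0/1 outcomes (1 = service granted)
    groups:     list of group labels, parallel to decisions
    privileged: label of the reference (privileged) group
    """
    granted = defaultdict(int)
    total = defaultdict(int)
    for d, g in zip(decisions, groups):
        total[g] += 1
        granted[g] += d
    rates = {g: granted[g] / total[g] for g in total}
    priv_rate = rates[privileged]
    # Ratio of each non-privileged group's rate to the privileged rate
    return {g: rates[g] / priv_rate for g in rates if g != privileged}

# Hypothetical audit of a benefits-allocation model's decisions
decisions = [1, 1, 0, 1, 1, 0, 0, 1, 0, 0]
groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
ratios = disparate_impact_ratio(decisions, groups, privileged="A")
print(ratios)  # group B granted at 0.2 vs. 0.8 for A: ratio 0.25, well below 0.8
```

A check like this is only a starting point: it surfaces outcome disparities but says nothing about why the model produced them, which is where explainability tooling has to take over.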














