A fairness constraint is a mathematical or programmatic rule applied during an AI model's training or inference to enforce a statistical fairness metric, such as demographic parity or equality of opportunity. It acts as a formal mechanism within bias mitigation frameworks and value-alignment approaches such as Constitutional AI, directly shaping the model's optimization objective to reduce discriminatory outcomes across protected attributes like race or gender. In practice it is implemented either as a hard constraint on the optimizer or as a penalty term added to the training loss, transforming ethical principles into actionable engineering requirements.
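A minimal sketch of the penalty-term approach, assuming a binary classifier that outputs probability scores and a binary protected attribute: the demographic parity gap (difference in positive-prediction rates between groups) is added to an ordinary cross-entropy loss, weighted by a hypothetical coefficient `lam`. Function names and the specific penalty form are illustrative, not drawn from any particular library.

```python
import numpy as np

def demographic_parity_gap(scores, groups):
    """Absolute difference in mean predicted-positive rate
    between the two protected groups (0 and 1)."""
    rate_0 = scores[groups == 0].mean()
    rate_1 = scores[groups == 1].mean()
    return abs(rate_0 - rate_1)

def penalized_loss(scores, labels, groups, lam=1.0):
    """Binary cross-entropy plus a demographic-parity penalty.
    lam controls the accuracy/fairness trade-off (illustrative)."""
    eps = 1e-9  # numerical guard against log(0)
    bce = -np.mean(labels * np.log(scores + eps)
                   + (1 - labels) * np.log(1 - scores + eps))
    return bce + lam * demographic_parity_gap(scores, groups)
```

Minimizing `penalized_loss` instead of the plain cross-entropy pressures the model toward equal positive-prediction rates across groups; raising `lam` trades predictive accuracy for a smaller parity gap.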
