Out-of-distribution (OOD) generalization is the ability of a machine learning model to maintain its performance on data drawn from a probability distribution different from its training distribution. This contrasts with standard in-distribution evaluation, where the test set is sampled from the same distribution as the training data, and it is critical for deploying robust systems in real-world environments where distribution shift is inevitable. Failure to generalize OOD is a primary source of model brittleness.
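The gap between in-distribution and OOD evaluation can be made concrete with a small synthetic experiment. The sketch below (an illustration, not a method from the text; it assumes NumPy and scikit-learn are available) trains a linear classifier on two well-separated Gaussian classes, then evaluates it both on a fresh sample from the same distribution and on a shifted distribution where the class means have drifted closer together:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_data(n, mean0, mean1):
    # Two Gaussian classes in 2D with unit variance.
    X0 = rng.normal(loc=mean0, scale=1.0, size=(n, 2))
    X1 = rng.normal(loc=mean1, scale=1.0, size=(n, 2))
    X = np.vstack([X0, X1])
    y = np.array([0] * n + [1] * n)
    return X, y

# Training distribution: well-separated class means.
X_train, y_train = make_data(1000, mean0=(-2, 0), mean1=(2, 0))
# In-distribution test set: same generative process as training.
X_id, y_id = make_data(1000, mean0=(-2, 0), mean1=(2, 0))
# OOD test set: the class means have drifted toward each other.
X_ood, y_ood = make_data(1000, mean0=(-0.5, 0), mean1=(0.5, 0))

model = LogisticRegression().fit(X_train, y_train)
id_acc = accuracy_score(y_id, model.predict(X_id))
ood_acc = accuracy_score(y_ood, model.predict(X_ood))
print(f"in-distribution accuracy: {id_acc:.3f}")
print(f"OOD accuracy:             {ood_acc:.3f}")
```

Under this shift the in-distribution accuracy stays high while the OOD accuracy drops sharply, even though the model itself is unchanged; this is the brittleness the definition above refers to.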
