Bootstrap aggregating (bagging) is an ensemble learning technique that reduces variance and improves model stability. It trains multiple base learners, typically decision trees, on different bootstrap samples (random subsets drawn with replacement from the original training dataset) and aggregates their predictions: averaging for regression, majority voting for classification. Formalized by Leo Breiman in 1996, the method mitigates overfitting because each model is trained on a different view of the data, making the collective output more robust than any single model.
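The procedure described above can be sketched in a few lines. The example below is a minimal illustration, not a production implementation: it uses a hypothetical 1-nearest-neighbor base learner on a toy 1-D dataset (any base learner could be substituted), draws bootstrap samples with replacement, and aggregates by majority vote as described for classification.

```python
import random
from collections import Counter

def one_nn_predict(train, x):
    # Base learner: 1-nearest-neighbor, returns the label of the
    # closest training point by squared distance.
    return min(train, key=lambda point: (point[0] - x) ** 2)[1]

def bagging_fit(data, n_models=25, seed=0):
    # Draw one bootstrap sample (same size as the data, with replacement)
    # per base learner; each sample serves as that learner's training set.
    rng = random.Random(seed)
    return [rng.choices(data, k=len(data)) for _ in range(n_models)]

def bagging_predict(models, x):
    # Aggregate the base learners' predictions by majority vote.
    votes = Counter(one_nn_predict(train, x) for train in models)
    return votes.most_common(1)[0][0]

# Toy 1-D dataset: (feature, class label) pairs.
data = [(0.1, "a"), (0.4, "a"), (0.5, "a"),
        (1.5, "b"), (1.8, "b"), (2.0, "b")]
models = bagging_fit(data)
print(bagging_predict(models, 0.3))  # majority vote over 25 bootstrap models
print(bagging_predict(models, 1.7))
```

Because each of the 25 learners sees a slightly different bootstrap sample, idiosyncratic predictions from any single learner are outvoted by the ensemble, which is the variance-reduction effect bagging exploits.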
