Boosting is a sequential ensemble technique that builds a strong predictive model by iteratively training a series of weak learners, such as shallow decision trees. Each new learner is trained to correct the errors made by the ensemble of its predecessors, typically by upweighting misclassified examples or by fitting the current residuals. The final model is a weighted sum (or vote) of all the weak learners, with better-performing learners given greater influence. This error-correcting focus makes boosting especially effective at reducing bias (and often variance as well), so the ensemble is usually more accurate than any single constituent.
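As a concrete illustration of this error-correcting loop, the sketch below implements a minimal AdaBoost, one classic boosting algorithm, using one-dimensional decision stumps as the weak learners. The dataset, the helper names (`train_stump`, `adaboost`, `predict`), and the round count are all illustrative choices, not part of the text above; the reweighting step and the weighted vote are the standard AdaBoost formulas.

```python
import math

def stump_predict(x, thresh, polarity):
    # A decision stump: predict +1 on one side of the threshold, -1 on the other.
    return 1 if polarity * (x - thresh) > 0 else -1

def train_stump(X, y, w):
    # Exhaustively search thresholds (midpoints of sorted X) and both polarities
    # for the stump with the lowest *weighted* error under the current weights w.
    best = None
    thresholds = [min(X) - 1] + [(a + b) / 2 for a, b in zip(X, X[1:])]
    for thresh in thresholds:
        for polarity in (1, -1):
            err = sum(wi for xi, yi, wi in zip(X, y, w)
                      if stump_predict(xi, thresh, polarity) != yi)
            if best is None or err < best[0]:
                best = (err, thresh, polarity)
    return best

def adaboost(X, y, rounds=10):
    n = len(X)
    w = [1.0 / n] * n          # start with uniform example weights
    ensemble = []
    for _ in range(rounds):
        err, thresh, polarity = train_stump(X, y, w)
        err = min(max(err, 1e-10), 1 - 1e-10)   # avoid log(0) / division by zero
        alpha = 0.5 * math.log((1 - err) / err)  # accurate stumps get more influence
        ensemble.append((alpha, thresh, polarity))
        # Reweight: upweight examples this stump got wrong, downweight the rest.
        w = [wi * math.exp(-alpha * yi * stump_predict(xi, thresh, polarity))
             for xi, yi, wi in zip(X, y, w)]
        total = sum(w)
        w = [wi / total for wi in w]
    return ensemble

def predict(ensemble, x):
    # Final model: sign of the alpha-weighted vote of all stumps.
    score = sum(a * stump_predict(x, t, p) for a, t, p in ensemble)
    return 1 if score >= 0 else -1

# Toy 1D data no single stump can separate: +1, then -1, then +1 again.
X = list(range(10))
y = [1, 1, -1, -1, -1, 1, 1, 1, 1, 1]
model = adaboost(X, y, rounds=10)
```

No individual stump classifies this data correctly, but the weighted combination does, which is exactly the point: each round's stump compensates for the mistakes the previous rounds left behind.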
