In-context learning (ICL) is the emergent ability of a large language model to perform a new task based solely on a few demonstrations (few-shot examples) provided within its input prompt, without updating its internal parameters via gradient descent. This capability is a cornerstone of prompt engineering and is fundamentally constrained by the model's context window, since every demonstration consumes part of a fixed token budget. The model infers the task's pattern, format, and objective from the provided examples and applies that inferred pattern to a new query.
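As a concrete illustration, the idea above can be sketched as prompt construction: demonstrations and a query are formatted into a single string, and the model is expected to continue the pattern. The sentiment-labeling task and the `Review:`/`Label:` format here are illustrative assumptions, not a standard; no model is actually called.

```python
def build_few_shot_prompt(demonstrations, query):
    """Format (input, label) demonstrations plus a new query into one prompt.

    The model infers the task from the examples and completes the final
    'Label:' line -- no gradient updates or fine-tuning are involved.
    """
    blocks = [f"Review: {text}\nLabel: {label}" for text, label in demonstrations]
    # The query follows the same format, with the label left blank for the
    # model to fill in.
    blocks.append(f"Review: {query}\nLabel:")
    return "\n\n".join(blocks)

# Hypothetical demonstrations for a sentiment task.
demos = [
    ("The plot was gripping from start to finish.", "positive"),
    ("I walked out halfway through.", "negative"),
]
prompt = build_few_shot_prompt(demos, "A delightful surprise of a film.")
print(prompt)
```

Each added demonstration lengthens the prompt, which is why the context window bounds how many examples can be supplied.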
