Few-shot context is the practice of including a small number of task-specific examples in a language model's prompt to demonstrate the desired input-output pattern, thereby steering the model's response without updating its weights. The technique exploits the model's emergent in-context learning (ICL) capability, allowing it to perform a new task based solely on the provided demonstrations. It is a fundamental method of contextual prompt engineering, enabling precise output formatting and behavior guidance within the limits of the model's context window.
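The pattern above can be sketched concretely as prompt assembly. The following is a minimal illustration, not a specific library's API: the function name, the sentiment-labeling task, and the `Input:`/`Output:` template are all hypothetical choices, and the resulting string would be sent to whatever model interface is in use.

```python
def build_few_shot_prompt(examples, query, instruction=""):
    """Assemble a few-shot prompt from (input, output) demonstration pairs."""
    parts = []
    if instruction:
        parts.append(instruction)
    # Each demonstration shows the model the expected input-output pattern.
    for inp, out in examples:
        parts.append(f"Input: {inp}\nOutput: {out}")
    # The final, unanswered query invites the model to continue the pattern.
    parts.append(f"Input: {query}\nOutput:")
    return "\n\n".join(parts)

# Hypothetical task: three demonstrations steer the model toward
# a one-word sentiment label, with no weight updates involved.
examples = [
    ("The plot was gripping from start to finish.", "positive"),
    ("I want those two hours of my life back.", "negative"),
    ("It exists. That is the most I can say.", "neutral"),
]
prompt = build_few_shot_prompt(
    examples,
    "The soundtrack alone is worth the ticket price.",
    instruction="Classify the sentiment of each movie review "
                "as positive, negative, or neutral.",
)
print(prompt)
```

Because every demonstration consumes context-window tokens, the number of examples trades off against the space left for the query and the model's response.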
