Chain-of-Thought (CoT) prompting is a technique that instructs a large language model (LLM) to articulate its intermediate reasoning steps explicitly before delivering a final answer. Given few-shot exemplars of worked reasoning or a zero-shot cue (e.g., "Let's think step by step"), the model decomposes a complex query into a sequence of simpler sub-problems. This method transforms the model's output from an opaque final answer into an explicit reasoning trace, making the logic auditable and significantly improving performance on tasks requiring arithmetic, commonsense, or symbolic reasoning.
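The mechanics above can be sketched in a few lines: construct a prompt that prepends a worked exemplar and the step-by-step cue, then parse the final answer out of the model's reasoning trace. This is a minimal illustration, not a production implementation; the LLM call itself is stubbed out with a hand-written sample trace, and the `Final answer:` marker convention is an assumption chosen here to make parsing easy.

```python
# Minimal sketch of Chain-of-Thought prompting. The model call is stubbed
# out: the technique concerns only how the prompt is built (few-shot
# exemplar + step-by-step cue) and how the reasoning trace is parsed.

FEW_SHOT_EXAMPLE = (
    "Q: A farmer has 15 sheep and buys 8 more. How many sheep are there?\n"
    "A: Let's think step by step. The farmer starts with 15 sheep. "
    "Buying 8 more gives 15 + 8 = 23. Final answer: 23\n"
)

def build_cot_prompt(question: str) -> str:
    """Prepend a worked exemplar and the step-by-step cue to the query."""
    return f"{FEW_SHOT_EXAMPLE}\nQ: {question}\nA: Let's think step by step."

def extract_final_answer(trace: str) -> str:
    """Pull the answer out of a reasoning trace.

    Assumes the trace ends with a 'Final answer:' line, mirroring the
    few-shot exemplar above; falls back to the whole trace otherwise.
    """
    marker = "Final answer:"
    return trace.rsplit(marker, 1)[-1].strip() if marker in trace else trace.strip()

prompt = build_cot_prompt(
    "If a train travels 60 km/h for 2.5 hours, how far does it go?"
)
# A real trace would come from an LLM; this stand-in shows the expected shape.
sample_trace = (
    "Speed is 60 km/h and time is 2.5 hours. "
    "Distance = 60 * 2.5 = 150 km. Final answer: 150 km"
)
print(extract_final_answer(sample_trace))  # → 150 km
```

In practice the exemplar's format matters: the model tends to imitate whatever answer-marking convention the few-shot example uses, which is why the stub exemplar and the parser agree on the same `Final answer:` marker.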
