Chain-of-Verification (CoVe) is a method in which a language model first generates a baseline answer to a query, then plans a series of targeted verification questions that fact-check the claims in that answer, answers those questions independently of the baseline so that initial hallucinations are not simply restated, and finally produces a revised, more accurate response. The process adds a self-critical loop to standard Chain-of-Thought reasoning by explicitly separating the generation of claims from their systematic verification. The technique is designed to mitigate hallucinations by forcing the model to scrutinize its own output against its internal knowledge or retrieved evidence.
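The four-stage loop can be sketched as a small function over a generic `llm(prompt) -> str` call. This is a minimal illustration, not a reference implementation: the function name, the prompt wording, and the `llm` interface are all assumptions made for the example, and a real system would plug in an actual model API and more careful prompt templates.

```python
def chain_of_verification(query, llm):
    """Sketch of the CoVe loop; `llm` is any callable mapping a prompt string to a response string."""
    # 1. Generate an initial baseline answer to the query.
    baseline = llm(f"Question: {query}\nAnswer:")

    # 2. Plan verification questions targeting the claims in the baseline.
    plan = llm(
        "List fact-checking questions, one per line, that verify the claims in:\n"
        f"{baseline}"
    )
    questions = [q.strip() for q in plan.splitlines() if q.strip()]

    # 3. Answer each verification question independently of the baseline,
    #    so earlier hallucinations are not simply repeated.
    answers = [(q, llm(f"Question: {q}\nAnswer:")) for q in questions]

    # 4. Produce a revised answer consistent with the verification results.
    evidence = "\n".join(f"Q: {q}\nA: {a}" for q, a in answers)
    return llm(
        f"Original question: {query}\n"
        f"Draft answer: {baseline}\n"
        f"Verification results:\n{evidence}\n"
        "Write a final answer, correcting any claim the verifications contradict:"
    )


# Toy stand-in for a real model call, keyed on prompt content, just to show the flow.
def stub_llm(prompt):
    if prompt.startswith("List fact-checking"):
        return "When did the event occur?"
    if "Draft answer" in prompt:
        return "Revised answer."
    return "Some answer."


print(chain_of_verification("Who wrote the report?", stub_llm))  # -> Revised answer.
```

In practice the independence of step 3 matters most: each verification question is answered in a fresh context that does not include the baseline, which is what gives the model a chance to contradict, rather than rationalize, its earlier claims.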
