Generated Knowledge Prompting is a two-stage prompting technique. First, a language model is instructed to generate relevant facts, concepts, or knowledge about a query; these generated statements are then supplied as additional context in a second, separate prompt to produce a final, more informed and accurate answer. This method explicitly decouples knowledge retrieval from answer synthesis, forcing the model to articulate its foundational understanding before reasoning. It is particularly effective for complex, knowledge-intensive questions where a model's parametric memory may be incomplete or imprecise.
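The two stages can be sketched as follows. This is a minimal illustration, not a definitive implementation: `call_model` is a hypothetical hook standing in for a real LLM client, and the stub responses below exist only so the sketch runs end to end.

```python
def call_model(prompt: str) -> str:
    # Hypothetical stand-in for a real LLM API call; the canned
    # responses below are illustrative only.
    if "List relevant facts" in prompt:
        return "1. In golf, the player with the LOWER total stroke count wins."
    return "No. In golf a lower score wins, so a higher point total is not the goal."


def generated_knowledge_answer(question: str) -> str:
    # Stage 1: knowledge generation -- ask the model to articulate
    # relevant facts before attempting an answer.
    knowledge_prompt = (
        "List relevant facts about the following question.\n"
        f"Question: {question}\nFacts:"
    )
    knowledge = call_model(knowledge_prompt)

    # Stage 2: answer synthesis -- feed the generated facts back to the
    # model as additional context in a second, separate prompt.
    answer_prompt = (
        "Use the facts below to answer the question.\n"
        f"Facts:\n{knowledge}\n\n"
        f"Question: {question}\nAnswer:"
    )
    return call_model(answer_prompt)


question = "Part of golf is trying to get a higher point total than others. Yes or no?"
print(generated_knowledge_answer(question))
```

In practice, stage 1 is often sampled several times and the candidate answers are scored or majority-voted across the different knowledge statements; the single-pass version above shows only the core decoupling.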
