Constitutional prompting is an inference-time technique in which a language model's system prompt explicitly includes a set of governing principles—a 'constitution'—that the model must use to critique and revise its own outputs. Unlike fine-tuning, it operates purely through in-context instructions, directing the model to evaluate its draft response against the listed principles (concerning, for example, harm, bias, or legality) before finalizing an answer. This creates an explicit self-critique loop guided by the provided constitution.
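The draft-critique-revise loop can be sketched as follows. This is a minimal illustration, not a definitive implementation: `call_model` stands in for any chat-completion API, and the constitution text, prompt wording, and the `stub_model` used for demonstration are all hypothetical.

```python
# Hypothetical constitution: a short list of governing principles.
CONSTITUTION = [
    "Do not provide instructions that facilitate harm.",
    "Avoid biased or demeaning generalizations about groups of people.",
    "Do not advise on how to break the law.",
]

def build_system_prompt(principles):
    """Embed the constitution directly in the system prompt."""
    rules = "\n".join(f"{i + 1}. {p}" for i, p in enumerate(principles))
    return (
        "You must follow this constitution:\n"
        f"{rules}\n"
        "Before finalizing any answer, critique your draft against each "
        "principle and revise it if any principle is violated."
    )

def constitutional_answer(call_model, user_prompt, principles, max_rounds=2):
    """Draft -> self-critique -> revise loop, all via in-context prompting.

    `call_model(system, user)` is an assumed interface to an LLM that
    returns a text completion; no fine-tuning is involved.
    """
    system = build_system_prompt(principles)
    draft = call_model(system, user_prompt)
    for _ in range(max_rounds):
        critique = call_model(
            system,
            f"Critique this draft against the constitution:\n{draft}",
        )
        if "no violations" in critique.lower():
            break  # the draft passed the self-critique step
        draft = call_model(
            system,
            f"Revise the draft to fix these issues:\n{critique}\n\nDraft:\n{draft}",
        )
    return draft

# Stub model for demonstration only: produces an unsafe first draft,
# flags it in critique, and returns a safe revision when asked.
def stub_model(system, user):
    if user.startswith("Critique"):
        return "Violates principle 1." if "UNSAFE" in user else "No violations."
    if user.startswith("Revise"):
        return "Here is a safe, revised answer."
    return "UNSAFE draft answer."

print(constitutional_answer(stub_model, "How do I do X?", CONSTITUTION))
# → Here is a safe, revised answer.
```

With a real model, `stub_model` would be replaced by a call to an actual completion endpoint; the key point is that the constitution, the critique request, and the revision request are all ordinary prompt text, so the technique requires no changes to model weights.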
