Hallucinations are a feature, not a bug, of generative AI. Large language models such as GPT-4 and Claude are trained to predict the most plausible next token, not to verify facts, so fluent but fabricated output is a direct consequence of their objective rather than an occasional glitch. That makes them fundamentally unsuited to unsupervised legal analysis, where a confidently misstated clause reads exactly like an accurate one and a single such error creates liability.
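
To make the risk concrete, here is a minimal sketch of the kind of check that "unsupervised" use skips: it verifies that every clause a model presents as a direct quote actually appears verbatim in the source contract. This is an illustrative example, not any vendor's API; the function names and sample texts (`verify_quoted_clauses`, the contract and answer strings) are hypothetical.

```python
import re

def extract_quoted_clauses(model_output: str) -> list[str]:
    """Pull the spans the model presents as direct quotes from the contract."""
    return re.findall(r'"([^"]+)"', model_output)

def verify_quoted_clauses(model_output: str, contract_text: str) -> list[str]:
    """Return every quoted clause that does NOT appear verbatim in the contract.

    An empty result means nothing was obviously fabricated; it does not
    prove the surrounding analysis is correct.
    """
    normalize = lambda s: " ".join(s.split()).lower()
    source = normalize(contract_text)
    return [q for q in extract_quoted_clauses(model_output)
            if normalize(q) not in source]

if __name__ == "__main__":
    contract = "The Supplier shall indemnify the Buyer against third-party claims."
    # Hypothetical model output: one real quote, one fabricated quote.
    answer = ('The contract says "the Supplier shall indemnify the Buyer against '
              'third-party claims" and also "either party may terminate on '
              '30 days\' notice".')
    for clause in verify_quoted_clauses(answer, contract):
        print("UNVERIFIED QUOTE:", clause)
```

Exact-substring matching is deliberately conservative, and production review pipelines layer retrieval, citation checking, and human signoff on top of it. The point of the sketch is narrower: even this trivial gate catches the fabricated termination clause above, which is precisely the safeguard that fully unsupervised deployment removes.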