AI-generated code tends to be monolithic. Large Language Models (LLMs) such as GPT-4 and Claude 3 produce code by predicting the statistically most likely next token given the preceding context, a process that favors local cohesion over modular design. The result is often tightly coupled, sprawling functions that mirror the training data's bias toward integrated, rather than distributed, systems.
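As an illustration of the distinction (a hypothetical example, not drawn from any model's actual output), here is the "monolithic" shape this kind of code often takes, next to a modular decomposition of the same behavior:

```python
# Hypothetical illustration: one sprawling function that couples parsing,
# validation, aggregation, and formatting, versus single-purpose pieces.

def summarize_orders_monolithic(raw):
    # Every concern lives in one body; nothing can be tested or reused alone.
    total = 0.0
    count = 0
    for line in raw.strip().splitlines():
        parts = line.split(",")
        if len(parts) != 2:
            continue
        price = parts[1].strip()
        try:
            value = float(price)
        except ValueError:
            continue
        if value < 0:
            continue
        total += value
        count += 1
    return f"{count} orders, total ${total:.2f}"


# The same behavior, decomposed: each function does one thing and can be
# exercised independently.
def parse_lines(raw):
    for line in raw.strip().splitlines():
        parts = [p.strip() for p in line.split(",")]
        if len(parts) == 2:
            yield parts[0], parts[1]


def valid_prices(pairs):
    for _, price in pairs:
        try:
            value = float(price)
        except ValueError:
            continue
        if value >= 0:
            yield value


def summarize_orders_modular(raw):
    prices = list(valid_prices(parse_lines(raw)))
    return f"{len(prices)} orders, total ${sum(prices):.2f}"


if __name__ == "__main__":
    data = "widget, 9.99\ngadget, 20.01\nbad line\nrefund, -5"
    # Both versions agree; only their structure differs.
    assert summarize_orders_monolithic(data) == summarize_orders_modular(data)
    print(summarize_orders_modular(data))  # → 2 orders, total $30.00
```

The two versions are behaviorally identical; the point is purely structural. The monolithic form is the locally cohesive one a next-token predictor gravitates toward, while the modular form requires the kind of global design decision that token-level prediction does not optimize for.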














