The liability shifts from operator error to design defect. In traditional machinery, an accident is often attributed to human misuse; with AI-controlled systems, failure-mode analysis points instead to the model developer, the system integrator, or the data pipeline. If a collaborative robot (cobot) misinterprets human intent because of a gap in its training data, the manufacturer of the AI stack, not the factory worker, bears the legal risk. This creates a product-liability quagmire for firms that build on closed-source robotics stacks, such as NVIDIA's Isaac platform or Boston Dynamics' control software, where the customer cannot inspect or audit the models it is liable for.