The Iterative Linear Quadratic Regulator (iLQR) is a trajectory optimization algorithm that computes locally optimal control sequences for nonlinear dynamical systems by iteratively solving a series of linear-quadratic approximations. It is closely related to differential dynamic programming (DDP): both propagate a quadratic approximation of the value function backward in time, but iLQR linearizes the dynamics to first order rather than expanding them to second order. This Gauss-Newton-style simplification makes each iteration cheaper than full DDP, at the cost of giving up DDP's locally quadratic convergence rate.

The algorithm starts from a nominal trajectory (an initial guess of states and controls), linearizes the system dynamics around it, and forms a quadratic approximation of the cost. Solving the resulting time-varying Linear Quadratic Regulator (LQR) problem in a backward pass yields a feedforward term and a time-varying feedback gain; a forward pass applies this policy (usually with a line search on the feedforward term) to produce an improved trajectory, which becomes the nominal trajectory for the next iteration. The process repeats until convergence. iLQR is prized in model-based reinforcement learning (MBRL) for its sample efficiency: it requires only a differentiable dynamics model and cost function, not millions of environment interactions.
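The backward/forward structure described above can be sketched in a few dozen lines. The following is a minimal, illustrative implementation (no line search or regularization, which practical solvers add) on a hypothetical double-integrator model with quadratic costs; the dynamics, cost weights, and goal state are all assumptions chosen for the example, not part of the original text. For linear dynamics like these, one iLQR iteration already recovers the exact LQR solution.

```python
import numpy as np

dt = 0.1  # assumed time step for the example model

def f(x, u):
    # Double integrator: state = (position, velocity), scalar force input.
    return np.array([x[0] + dt * x[1], x[1] + dt * u[0]])

def f_jac(x, u):
    # Jacobians of the dynamics: A = df/dx, B = df/du.
    A = np.array([[1.0, dt], [0.0, 1.0]])
    B = np.array([[0.0], [dt]])
    return A, B

# Quadratic cost: running state/control weights and a terminal weight.
Q  = np.diag([1.0, 0.1])
R  = np.array([[0.01]])
Qf = np.diag([100.0, 10.0])
x_goal = np.array([1.0, 0.0])

def cost(xs, us):
    c = sum((x - x_goal) @ Q @ (x - x_goal) + u @ R @ u
            for x, u in zip(xs[:-1], us))
    return c + (xs[-1] - x_goal) @ Qf @ (xs[-1] - x_goal)

def rollout(x0, us):
    xs = [x0]
    for u in us:
        xs.append(f(xs[-1], u))
    return xs

def ilqr(x0, us, iters=10):
    xs = rollout(x0, us)
    for _ in range(iters):
        # Backward pass: propagate the quadratic value approximation
        # V(x) with gradient Vx and Hessian Vxx from the terminal cost.
        Vx  = 2 * Qf @ (xs[-1] - x_goal)
        Vxx = 2 * Qf
        ks, Ks = [], []
        for t in reversed(range(len(us))):
            A, B = f_jac(xs[t], us[t])
            Qx  = 2 * Q @ (xs[t] - x_goal) + A.T @ Vx
            Qu  = 2 * R @ us[t] + B.T @ Vx
            Qxx = 2 * Q + A.T @ Vxx @ A
            Quu = 2 * R + B.T @ Vxx @ B
            Qux = B.T @ Vxx @ A
            k = -np.linalg.solve(Quu, Qu)   # feedforward term
            K = -np.linalg.solve(Quu, Qux)  # feedback gain
            ks.append(k); Ks.append(K)
            Vx  = Qx + K.T @ Quu @ k + K.T @ Qu + Qux.T @ k
            Vxx = Qxx + K.T @ Quu @ K + K.T @ Qux + Qux.T @ K
        ks.reverse(); Ks.reverse()
        # Forward pass: roll out the new policy around the nominal trajectory.
        xs_new, us_new = [x0], []
        for t in range(len(us)):
            u = us[t] + ks[t] + Ks[t] @ (xs_new[t] - xs[t])
            us_new.append(u)
            xs_new.append(f(xs_new[t], u))
        xs, us = xs_new, us_new
    return xs, us
```

A typical call starts from a zero control sequence, e.g. `ilqr(np.zeros(2), [np.zeros(1)] * 30)`, after which the trajectory's terminal state lands near `x_goal` and the total cost is well below that of the unoptimized rollout.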