Monte Carlo Tree Search (MCTS) is a best-first, rollout-based planning algorithm for navigating large decision spaces, most famously applied in game-playing AI like AlphaGo. It operates by iteratively building a search tree through four phases: selection, expansion, simulation, and backpropagation. The core mechanism uses random playouts (simulations) to estimate the value of tree nodes, guiding the search toward promising regions without requiring a domain-specific heuristic function.
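The four phases can be sketched in a minimal, self-contained Python example. This is an illustrative toy, not a production implementation: the "game" here (a hypothetical choice of 0/1 moves over a fixed depth, rewarding moves equal to 1) and all function names are assumptions made for demonstration. Selection uses the standard UCB1 formula; the best root move is chosen by visit count.

```python
import math
import random

class Node:
    def __init__(self, state, parent=None):
        self.state = state        # tuple of moves taken so far
        self.parent = parent
        self.children = {}        # move -> child Node
        self.visits = 0
        self.value = 0.0          # cumulative reward from playouts

# Toy domain (an assumption for illustration): sequences of 0/1 moves
# of fixed depth, where the reward is the fraction of moves equal to 1.
DEPTH = 6
MOVES = (0, 1)

def is_terminal(state):
    return len(state) == DEPTH

def reward(state):
    return sum(state) / DEPTH

def ucb1(child, parent_visits, c=1.4):
    # UCB1: exploitation term + exploration bonus
    if child.visits == 0:
        return float("inf")
    return (child.value / child.visits
            + c * math.sqrt(math.log(parent_visits) / child.visits))

def select(node):
    # Selection: descend while the node is fully expanded,
    # always following the child with the highest UCB1 score.
    while not is_terminal(node.state) and len(node.children) == len(MOVES):
        node = max(node.children.values(),
                   key=lambda ch: ucb1(ch, node.visits))
    return node

def expand(node):
    # Expansion: add one untried child move (no-op at terminal states).
    if is_terminal(node.state):
        return node
    untried = [m for m in MOVES if m not in node.children]
    move = random.choice(untried)
    child = Node(node.state + (move,), parent=node)
    node.children[move] = child
    return child

def simulate(state):
    # Simulation: random playout to a terminal state, return its reward.
    while not is_terminal(state):
        state = state + (random.choice(MOVES),)
    return reward(state)

def backpropagate(node, value):
    # Backpropagation: update visit counts and values up to the root.
    while node is not None:
        node.visits += 1
        node.value += value
        node = node.parent

def mcts(iterations=2000):
    random.seed(0)                # deterministic for the demo
    root = Node(state=())
    for _ in range(iterations):
        leaf = select(root)       # 1. selection
        child = expand(leaf)      # 2. expansion
        value = simulate(child.state)  # 3. simulation
        backpropagate(child, value)    # 4. backpropagation
    # Recommend the most-visited root move.
    return max(root.children.items(), key=lambda kv: kv[1].visits)[0]

print("best first move:", mcts())
```

In this toy reward landscape the all-ones path is optimal, so the visit counts concentrate on move 1 at the root; note that no heuristic evaluation function was supplied, only random playouts and the terminal reward.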
