Higher-Order Theory of Mind is the recursive capacity to attribute mental states to other mental states, extending beyond first-order ('I think X') or second-order ('I think you think X') attribution. In artificial intelligence, it enables an autonomous agent to model not only the beliefs and intentions of other agents, but also what those agents believe about its own mental states, which supports strategic reasoning and cooperation. Multi-agent epistemic logic formalizes this with nested knowledge operators (e.g., 'Agent A knows that Agent B knows that Agent A wants X').
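As a minimal illustrative sketch (the class and function names here are hypothetical, not from any standard library), nested knowledge operators can be modeled as recursively nested agent models: each agent holds a set of facts it takes to be true, plus its own models of other agents, and evaluating a nested claim means walking down that chain of models.

```python
from dataclasses import dataclass, field

@dataclass
class AgentModel:
    """An agent's epistemic state: facts it holds true, plus its
    (possibly nested) models of other agents' epistemic states."""
    facts: set = field(default_factory=set)
    models: dict = field(default_factory=dict)  # agent name -> AgentModel

def holds(model: AgentModel, chain: list, fact: str) -> bool:
    """Evaluate a nested knowledge claim by walking the chain of models.
    For example, chain=["B", "A"] asks: does `model` believe that
    B believes that A holds `fact`?"""
    for name in chain:
        if name not in model.models:
            return False  # no model of that agent at this depth
        model = model.models[name]
    return fact in model.facts

# 'Agent A knows that Agent B knows that Agent A wants X':
a_in_b = AgentModel(facts={"A wants X"})      # B's model of A (as A models it)
b_in_a = AgentModel(models={"A": a_in_b})     # A's model of B
agent_a = AgentModel(facts={"A wants X"}, models={"B": b_in_a})

print(holds(agent_a, ["B", "A"], "A wants X"))  # True: the nested claim holds
print(holds(agent_a, ["B", "A"], "A wants Y"))  # False: no such nested belief
```

This is only a finite, explicit-model sketch; full epistemic logic also handles negative introspection and common knowledge, which this structure does not attempt to capture.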
