First-order Theory of Mind (ToM) is the computational ability of an artificial intelligence agent to attribute a single layer of mental states—such as beliefs, desires, intentions, or knowledge—to another agent. It allows an AI to model that another entity has an internal perspective that may differ from objective reality or its own perspective. For example, an agent with first-order ToM can understand that "Alice believes the package is in Room A," even if the agent itself knows the package has been moved to Room B. This capability is distinct from zero-order reasoning (which lacks any mental state modeling) and second-order or higher-order ToM (which involves recursive modeling, e.g., "Alice believes that Bob believes X"). In AI, this is typically implemented as a form of belief attribution within an agent's internal world model, often using Bayesian inference or learned representations to predict another agent's likely knowledge given their perceptual history and actions.
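The false-belief scenario above can be sketched in code. The following is a minimal illustration, not a definitive implementation: the class and method names are hypothetical, and it uses a simple discrete event model rather than full Bayesian inference. The key idea is that the agent keeps two separate state variables, its own world model and its model of Alice's belief, and updates the latter only for events Alice actually perceives.

```python
class FirstOrderToM:
    """Minimal first-order Theory of Mind: one layer of belief attribution.

    The agent tracks its own world state and, separately, what it thinks
    Alice believes, conditioned on which events Alice has perceived.
    """

    def __init__(self):
        self.world_state = None   # the agent's own (ground-truth) model
        self.alice_belief = None  # the belief the agent attributes to Alice

    def observe(self, package_location, alice_present):
        # Every event updates the agent's own world model...
        self.world_state = package_location
        # ...but Alice's attributed belief updates only if she perceived it.
        if alice_present:
            self.alice_belief = package_location


tom = FirstOrderToM()
tom.observe("Room A", alice_present=True)   # both see the package in Room A
tom.observe("Room B", alice_present=False)  # package moved while Alice is away

print(tom.world_state)   # Room B: the agent's own knowledge
print(tom.alice_belief)  # Room A: Alice's attributed (now false) belief
```

After the second event, the two state variables diverge: the agent knows the package is in Room B while still representing Alice's outdated belief that it is in Room A. A Bayesian variant would replace the single-value beliefs with probability distributions over locations, updated by conditioning on each agent's perceptual history.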