The Orthogonality Thesis is the hypothesis that an artificial intelligence can, in principle, combine more or less any level of intelligence with more or less any final goal, meaning that high cognitive capability does not by itself imply benevolent, human-aligned, or even comprehensible objectives. Formulated by the philosopher Nick Bostrom, it asserts that intelligence and final goals are orthogonal axes along which possible agents can vary independently: fixing an agent's position on one axis places essentially no constraint on its position on the other. This decoupling is central to discussions of AI risk and instrumental convergence, since it implies that a superintelligent AI could pursue virtually any terminal value with extreme effectiveness.
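The thesis turns on the claim that an agent's optimization machinery can be factored cleanly from its objective, and a toy sketch can make that factoring concrete. Everything below (the `Agent` class, the integer state space, the `ACTIONS` repertoire, the two sample goals) is a hypothetical illustration rather than Bostrom's formalism: the same depth-limited search serves two unrelated goals, and raising the search depth (the "intelligence" knob) improves performance on whichever goal is supplied, without constraining which goal that is.

```python
from dataclasses import dataclass
from typing import Callable

# Illustrative sketch only: a toy agent whose planning machinery
# ("intelligence") is a parameter entirely separate from its final
# goal ("utility"). None of these names come from the source text.

State = int  # toy state space: plain integers

# A small, fixed action repertoire over the state space.
ACTIONS: list[Callable[[State], State]] = [
    lambda s: s + 1,
    lambda s: s - 1,
    lambda s: s * 2,
]

@dataclass
class Agent:
    utility: Callable[[State], float]  # final goal, supplied from outside
    depth: int                         # planning horizon: a crude capability knob

    def best_value(self, state: State) -> float:
        """Best utility reachable within the agent's planning horizon."""
        return self._search(state, self.depth)

    def _search(self, state: State, d: int) -> float:
        # Exhaustive depth-limited search. The procedure only ever
        # *calls* the utility function; nothing in it depends on what
        # the goal actually rewards.
        if d == 0:
            return self.utility(state)
        return max(self._search(a(state), d - 1) for a in ACTIONS)

if __name__ == "__main__":
    # Two unrelated final goals plugged into identical machinery.
    def near_42(s: State) -> float:   # prefers states close to 42
        return -abs(s - 42)

    def minimize(s: State) -> float:  # prefers ever-smaller states
        return -s

    for name, goal in [("near-42", near_42), ("minimize", minimize)]:
        for depth in (2, 8):  # raising capability helps either goal alike
            print(name, depth, Agent(goal, depth).best_value(0))
```

In this toy model the two axes really are independent: swapping `near_42` for `minimize` changes nothing about the search procedure, and increasing `depth` makes the agent better at whichever goal it was handed. The thesis claims, roughly, that this kind of factoring can persist at arbitrarily high capability levels.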
