Consistent hashing is a distributed hashing technique that minimizes the amount of data that must be moved when nodes are added to or removed from a cluster, a critical property for scalable systems like distributed caches (e.g., Memcached, Redis) and object storage (e.g., Amazon Dynamo, Cassandra). Unlike traditional modular hashing, which reassigns nearly all keys when the number of buckets changes, consistent hashing maps both keys and nodes onto a fixed circular hash ring and assigns each key to the first node encountered moving clockwise from the key's position. This design ensures that only about K/N keys (where K is the total number of keys and N is the number of nodes) are remapped during a topology change, providing horizontal scalability and fault tolerance.
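The ring described above can be sketched with a sorted list of hash positions and a binary search for the clockwise successor. This is a minimal illustration, not any particular system's implementation; the class name, the MD5 hash choice, and the 100 virtual nodes per physical node are assumptions for the example (virtual nodes smooth out the key distribution across physical nodes).

```python
import bisect
import hashlib


class ConsistentHashRing:
    """Minimal consistent-hash ring sketch (hypothetical class, not a real library)."""

    def __init__(self, nodes=None, replicas=100):
        # replicas = virtual nodes per physical node (assumed value),
        # which evens out each node's share of the ring.
        self.replicas = replicas
        self._ring = {}      # hash position -> physical node name
        self._sorted = []    # sorted list of hash positions
        for node in nodes or []:
            self.add_node(node)

    def _hash(self, key):
        # Any stable hash works; MD5 is used here only for illustration.
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def add_node(self, node):
        for i in range(self.replicas):
            h = self._hash(f"{node}#{i}")
            self._ring[h] = node
            bisect.insort(self._sorted, h)

    def remove_node(self, node):
        for i in range(self.replicas):
            h = self._hash(f"{node}#{i}")
            del self._ring[h]
            self._sorted.remove(h)

    def get_node(self, key):
        if not self._sorted:
            return None
        h = self._hash(key)
        # First ring position clockwise from the key's hash, wrapping around.
        idx = bisect.bisect(self._sorted, h) % len(self._sorted)
        return self._ring[self._sorted[idx]]
```

Removing a node with this structure only reassigns the keys that were mapped to it; every other key's clockwise successor is unchanged, which is exactly the K/N remapping property.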
