Differential privacy is a formal, mathematical definition of privacy guaranteeing that the output of a data analysis or machine learning algorithm reveals almost nothing about whether any specific individual's information was included in the input dataset. Formally, a randomized mechanism M is ε-differentially private if, for any two datasets D and D' differing in one individual's record and any set of outputs S, Pr[M(D) ∈ S] ≤ e^ε · Pr[M(D') ∈ S]. The parameter ε quantifies the privacy loss budget; smaller values mean stronger privacy. The guarantee is typically achieved by adding calibrated random noise, often drawn from a Laplace or Gaussian distribution, to query results or model updates, with the noise scale set by the query's sensitivity (how much one individual's data can change the answer) divided by ε. This yields a provable bound: an adversary's ability to infer an individual's presence is limited regardless of any auxiliary knowledge they hold.
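As a minimal sketch of the idea, the Laplace mechanism below answers a counting query with ε-differential privacy. The dataset, the `dp_count` helper, and the age threshold are illustrative assumptions, not part of any particular library; a counting query has L1 sensitivity 1, so the noise scale is 1/ε.

```python
import random

def laplace_noise(scale: float) -> float:
    # A Laplace(0, scale) sample is the difference of two
    # independent Exponential(1/scale) samples.
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def dp_count(data, predicate, epsilon: float) -> float:
    """Answer a counting query with epsilon-differential privacy.

    A count has L1 sensitivity 1: adding or removing one person's
    record changes the true answer by at most 1, so Laplace noise
    with scale = sensitivity / epsilon suffices.
    """
    true_count = sum(1 for row in data if predicate(row))
    scale = 1.0 / epsilon
    return true_count + laplace_noise(scale)

# Hypothetical dataset: one age per individual.
ages = [34, 29, 41, 57, 23, 38, 62, 45]
noisy = dp_count(ages, lambda a: a >= 40, epsilon=0.5)
```

Each released answer is randomized, so repeated queries return different values; averaging many answers would consume privacy budget additively under sequential composition, which is why ε is treated as a budget to be spent, not a fixed per-query cost.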
