Contrastive learning is a self-supervised machine learning paradigm that learns useful data representations by training a model to distinguish between similar (positive) and dissimilar (negative) data pairs. The core objective is to pull the embeddings of positive pairs closer together in a latent space while pushing negative pairs apart. This technique is foundational for representation learning, enabling models to develop robust, compressed understandings of data without requiring manually labeled datasets.
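The pull/push objective described above is commonly instantiated as an InfoNCE-style loss: each anchor is scored against a batch of candidates, and the matching (positive) candidate is treated as the correct class in a softmax. The sketch below is a minimal NumPy illustration of that idea, not any specific library's implementation; the function name, batch layout, and temperature value are illustrative assumptions.

```python
import numpy as np

def info_nce_loss(anchors, positives, temperature=0.1):
    """Illustrative InfoNCE-style contrastive loss.

    Row i of `positives` is the positive for row i of `anchors`;
    every other row in the batch acts as a negative.
    """
    # L2-normalise embeddings so dot products are cosine similarities.
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)

    # Similarity matrix: entry (i, j) compares anchor i with candidate j.
    logits = a @ p.T / temperature

    # Cross-entropy with the diagonal (the matching pairs) as targets.
    logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))
```

Minimising this loss pulls each anchor toward its own positive (raising the diagonal similarities) while pushing it away from the other batch entries, which is exactly the pull/push behaviour in the paragraph above. For example, anchors paired with slightly perturbed copies of themselves yield a much lower loss than anchors paired with unrelated random vectors.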
