Contrastive learning is a self-supervised machine learning technique that trains a model to distinguish similar (positive) from dissimilar (negative) data pairs: positive pairs are pulled closer together and negative pairs are pushed apart in the embedding space. Guided by a contrastive loss function such as triplet loss or InfoNCE, the model learns to encode semantic relationships directly into the geometric structure of the vector space it produces, without requiring manually labeled data.
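As a concrete illustration, here is a minimal sketch of the InfoNCE loss for a single anchor embedding, written with NumPy. The function name `info_nce_loss` and the choice of temperature are illustrative assumptions, not from any particular library: the loss is simply a cross-entropy over similarity scores in which the positive pair plays the role of the correct class.

```python
import numpy as np

def info_nce_loss(anchor, positive, negatives, temperature=0.1):
    """InfoNCE loss for one anchor (hypothetical helper, for illustration).

    Treats the positive and the negatives as candidates in a softmax
    classification: the loss is low when the anchor is much more similar
    to the positive than to any negative.
    """
    def normalize(v):
        # L2-normalize so that dot products become cosine similarities
        return v / np.linalg.norm(v, axis=-1, keepdims=True)

    a = normalize(anchor)
    # Stack the positive (index 0) above the negatives
    candidates = normalize(np.vstack([positive[None, :], negatives]))
    logits = candidates @ a / temperature
    # Numerically stable log-softmax over all candidates
    logits = logits - logits.max()
    log_probs = logits - np.log(np.sum(np.exp(logits)))
    # Negative log-probability assigned to the positive pair
    return -log_probs[0]
```

Minimizing this quantity pulls the anchor toward its positive and pushes it away from the negatives, which is exactly the geometric behavior described above; a well-aligned positive pair yields a much smaller loss than a random one.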
