Meta-learning is a machine learning paradigm in which a model is explicitly trained to adapt quickly to new tasks, often from only a few examples. This is achieved by exposing the model to a distribution of related tasks during a meta-training phase, allowing it to internalize a general learning strategy. The core objective is to improve sample efficiency and enable few-shot learning, where the model performs well on novel tasks after seeing only a handful of labeled instances. Common approaches include model-agnostic meta-learning (MAML), which learns a parameter initialization from which a few gradient steps suffice to adapt, and metric-based methods such as prototypical networks, which classify queries by their distance to learned class prototypes in an embedding space.
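The inner/outer loop structure of MAML can be sketched on a toy problem. This is a minimal illustration, not a faithful reimplementation of the original algorithm: it uses the first-order MAML approximation (ignoring second-order derivatives), a one-parameter linear model `f(x) = w * x`, and a made-up task distribution of 1-D regression problems with random slopes. All names (`make_task`, `loss_grad`, learning rates) are hypothetical choices for the sketch.

```python
import random

# Toy task distribution (hypothetical): each task is 1-D regression
# y = a * x with a task-specific slope a. The meta-learner seeks an
# initialization w from which one gradient step adapts well to any task.

def make_task():
    """Sample a task: a slope and small support/query sets of (x, y) pairs."""
    a = random.uniform(-2.0, 2.0)
    xs = [random.uniform(-1.0, 1.0) for _ in range(6)]
    data = [(x, a * x) for x in xs]
    return data[:3], data[3:]  # support set, query set

def loss_grad(w, batch):
    """Mean-squared-error loss and its gradient for f(x) = w * x."""
    n = len(batch)
    loss = sum((w * x - y) ** 2 for x, y in batch) / n
    grad = sum(2 * (w * x - y) * x for x, y in batch) / n
    return loss, grad

random.seed(0)
w = 0.0                      # meta-parameters: the learned initialization
inner_lr, meta_lr = 0.1, 0.01

for step in range(2000):
    support, query = make_task()
    # Inner loop: one adaptation step on the task's support set.
    _, g = loss_grad(w, support)
    w_adapted = w - inner_lr * g
    # Outer loop (first-order approximation): update the initialization
    # using the query-set gradient evaluated at the adapted parameters.
    _, g_meta = loss_grad(w_adapted, query)
    w = w - meta_lr * g_meta
```

Full MAML would backpropagate through the inner update itself (a second-order term); the first-order variant shown here drops that term, which is a common, cheaper approximation.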
