Meta-learning is a framework in which a model, often called a meta-learner, is explicitly trained to adapt quickly. Instead of learning a single task, it learns from a distribution of related tasks during a meta-training phase. The goal is to optimize the model's initial parameters or its learning algorithm so that, when presented with a new task and a small support set of examples, it can make effective predictions after only a few gradient steps or through a learned adaptation procedure. This process is formalized as a bi-level optimization problem: an inner loop adapts the parameters to each task's support set, while an outer loop updates the shared initialization based on the adapted model's performance on held-out query examples.
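To make the bi-level structure concrete, here is a minimal sketch in the style of first-order MAML. Everything in it is an illustrative assumption rather than a reference implementation: the task distribution (1-D linear regression with task-specific slopes), the scalar model, the learning rates, and the choice of a single inner gradient step are all placeholders chosen to keep the example self-contained.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_task():
    # Hypothetical task family: 1-D linear regression y = a * x,
    # where each task draws its own slope a.
    a = rng.uniform(0.5, 2.5)
    def make_batch(n):
        x = rng.uniform(-1.0, 1.0, n)
        return x, a * x
    return make_batch

def loss_and_grad(w, x, y):
    # Mean squared error of the scalar model y_hat = w * x, and dL/dw.
    err = w * x - y
    return np.mean(err ** 2), np.mean(2 * err * x)

inner_lr, outer_lr = 0.1, 0.05
w0 = 0.0  # the meta-learned initialization (the outer-loop variable)

for step in range(2000):
    make_batch = sample_task()
    xs, ys = make_batch(5)   # small support set for adaptation
    xq, yq = make_batch(10)  # query set for evaluating adaptation
    # Inner loop: one gradient step on the support set.
    _, g_support = loss_and_grad(w0, xs, ys)
    w_adapted = w0 - inner_lr * g_support
    # Outer loop (first-order approximation): update the initialization
    # using the query-set gradient evaluated at the adapted parameters.
    _, g_query = loss_and_grad(w_adapted, xq, yq)
    w0 -= outer_lr * g_query
```

Under this setup the initialization drifts toward the center of the slope distribution, so that a single support-set gradient step is enough to specialize to any sampled task; full MAML would additionally differentiate through the inner update rather than using the first-order shortcut shown here.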
