Does MAML really want feature reuse only?
Meta-learning, the effort to solve new tasks with only a few samples, has attracted great attention in recent years. Model-Agnostic Meta-Learning (MAML) is one of the most representative gradient-based meta-learning algorithms. MAML learns new tasks from a few data samples through inner-loop updates from a meta-initialization point, and learns the meta-initialization parameters through outer-loop updates. Recently, it has been hypothesized that feature reuse, in which the learned representations change little during adaptation, is the dominant factor in the performance of the meta-initialized model trained with MAML, rather than rapid learning, in which the representations change substantially. In this work, we propose a novel meta-learning algorithm, coined BOIL (Body Only update in Inner Loop), which updates only the body (feature extractor) of the model and freezes the head (classifier) during inner-loop updates. The BOIL algorithm therefore relies heavily on rapid learning. Note that BOIL runs counter to the hypothesis that feature reuse is more efficient than rapid learning. We validate BOIL on various datasets and show significant performance improvements over MAML. The results imply that rapid learning is necessary in gradient-based meta-learning approaches.
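To make the inner-loop rule concrete, below is a minimal first-order sketch (not the authors' code) of BOIL-style adaptation in PyTorch, assuming the model is split into a feature-extractor module `body` and a linear-classifier module `head`; the function name, hyperparameter values, and first-order simplification are illustrative assumptions.

```python
import copy
import torch
import torch.nn.functional as F

def boil_adapt(body, head, support_x, support_y, inner_lr=0.5, steps=1):
    """Return a task-adapted copy of the body; the head is left untouched (BOIL sketch).

    The inner loop takes plain SGD steps on the support-set loss with respect to the
    body parameters only, so all inner-loop change happens in the feature extractor
    (rapid learning), while the frozen head is shared across tasks.
    """
    fast_body = copy.deepcopy(body)            # per-task copy; meta-parameters stay intact
    for _ in range(steps):
        logits = head(fast_body(support_x))    # frozen head maps adapted features to classes
        loss = F.cross_entropy(logits, support_y)
        grads = torch.autograd.grad(loss, list(fast_body.parameters()))
        with torch.no_grad():
            for p, g in zip(fast_body.parameters(), grads):
                p -= inner_lr * g              # SGD step on body parameters only
    return fast_body
```

In the outer loop, the query-set loss computed with `head(fast_body(query_x))` would update the meta-initialization of both body and head, as in MAML; the sketch omits the second-order terms that full MAML-style meta-gradients require.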