Discrete Infomax Codes for Meta-Learning

05/28/2019
by Yoonho Lee et al.

Learning compact discrete representations of data is a key task in its own right, and it also facilitates subsequent processing. It is likewise relevant to meta-learning, since a latent representation shared across related tasks enables a model to adapt to new tasks quickly. In this paper, we present a method for learning a stochastic encoder that yields discrete p-way codes of length d by maximizing the mutual information between representations and labels. We show that previous loss functions for deep metric learning are approximations of this information-theoretic objective. Our model, Discrete InfoMax Codes (DIMCO), learns to produce a short representation of data that can be used to classify examples from unseen classes given only a few labeled instances. Our analysis shows that using shorter codes reduces overfitting in the context of few-shot classification. Experiments show that DIMCO requires less memory (i.e., shorter code length) to match the performance of previous methods, and that it is particularly effective when the training dataset is small.
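
The objective described above can be made concrete with a short sketch. The code below is an illustration, not the authors' reference implementation: it assumes an encoder that emits d independent p-way categorical distributions per input and estimates the per-position mutual information I(C_j; Y) = H(C_j) - H(C_j | Y) from batch statistics; the network architecture, dimensions, and function names are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Minimal sketch of a discrete infomax objective (assumed, not the paper's code):
# the encoder outputs d code positions, each a categorical distribution over p symbols.

class DiscreteEncoder(nn.Module):
    def __init__(self, in_dim, d=8, p=16):
        super().__init__()
        self.d, self.p = d, p
        self.net = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(),
            nn.Linear(128, d * p),
        )

    def forward(self, x):
        logits = self.net(x).view(-1, self.d, self.p)
        return F.softmax(logits, dim=-1)  # (batch, d, p) code probabilities

def infomax_loss(probs, labels, eps=1e-8):
    """Negative batch estimate of sum_j I(C_j; Y) = H(C_j) - H(C_j | Y)."""
    marginal = probs.mean(dim=0)                            # p(C_j), shape (d, p)
    h_c = -(marginal * (marginal + eps).log()).sum(dim=-1)  # H(C_j), shape (d,)
    h_c_given_y = torch.zeros_like(h_c)
    for y in labels.unique():
        mask = labels == y
        cond = probs[mask].mean(dim=0)                      # p(C_j | Y=y)
        h_y = -(cond * (cond + eps).log()).sum(dim=-1)
        h_c_given_y += mask.float().mean() * h_y            # weight by empirical p(y)
    return -(h_c - h_c_given_y).sum()                       # minimize -MI = maximize MI

# Usage on random data with hypothetical shapes:
enc = DiscreteEncoder(in_dim=32, d=8, p=16)
x, y = torch.randn(64, 32), torch.randint(0, 5, (64,))
loss = infomax_loss(enc(x), y)
loss.backward()
```

Maximizing this estimate pushes each code position toward high marginal entropy (codes are used evenly) and low conditional entropy given the label (codes are predictable within a class), which is one way to read the trade-off the abstract describes.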

Related research

09/07/2020 - Information Theoretic Meta Learning with Gaussian Processes
We formulate meta learning using information theoretic concepts such as ...

01/24/2021 - Meta-Regularization by Enforcing Mutual-Exclusiveness
Meta-learning models have two objectives. First, they need to be able to...

04/11/2019 - MxML: Mixture of Meta-Learners for Few-Shot Classification
A meta-model is trained on a distribution of similar tasks such that it ...

08/09/2021 - The Role of Global Labels in Few-Shot Classification and How to Infer Them
Few-shot learning (FSL) is a central problem in meta-learning, where lea...

03/02/2023 - Model agnostic methods meta-learn despite misspecifications
Due to its empirical success on few shot classification and reinforcemen...

12/05/2019 - MetaFun: Meta-Learning with Iterative Functional Updates
Few-shot supervised learning leverages experience from previous learning...
