HIDRA: Head Initialization across Dynamic targets for Robust Architectures

by Rafael Rego Drumond, et al.

The performance of gradient-based optimization strategies depends heavily on the initial weights of the parametric model. Recent works show that there exist weight initializations from which optimization procedures can find the task-specific parameters faster than from uniformly random initializations, and that such a weight initialization can be learned by optimizing a specific model architecture across similar tasks via MAML (Model-Agnostic Meta-Learning). Because the model architecture is fixed during meta-learning, current methods are limited to populations of classification tasks that all share the same number of classes. In this paper, we present HIDRA, a meta-learning approach that enables training and evaluating across tasks with any number of target variables. We show that Model-Agnostic Meta-Learning effectively learns a shared distribution over the neurons in the output layer, while learning a specific weight initialization for those in the hidden layers. HIDRA exploits this by learning a single master neuron that is used to initialize any number of output neurons for a new task. Extensive experiments on the MiniImageNet and Omniglot datasets demonstrate that HIDRA improves over standard approaches while generalizing to tasks with any number of target variables. Moreover, our approach robustifies low-capacity models on complex tasks with a high number of classes, for which regular MAML fails to learn any feasible initialization.
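The master-neuron idea described above can be sketched in a few lines: a single meta-learned weight vector and bias are replicated to build an output layer of whatever size the new task requires, after which MAML-style inner-loop adaptation differentiates the copies. The following is a minimal illustrative sketch, not the authors' implementation; the function name `init_output_layer`, the optional symmetry-breaking noise, and the specific dimensions are assumptions for illustration.

```python
import numpy as np

def init_output_layer(master_weights, master_bias, num_classes,
                      noise_scale=0.0, rng=None):
    """Replicate a meta-learned 'master neuron' to initialize an output
    layer for a task with `num_classes` target variables (HIDRA-style).

    master_weights : (hidden_dim,) weight vector of the master neuron
    master_bias    : scalar bias of the master neuron
    noise_scale    : optional perturbation; with cross-entropy training,
                     per-class gradients already differ, so identical
                     copies are a valid starting point
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    # Stack num_classes copies of the master neuron's weights.
    W = np.tile(master_weights, (num_classes, 1))   # (num_classes, hidden_dim)
    b = np.full(num_classes, master_bias)           # (num_classes,)
    if noise_scale > 0.0:
        W = W + rng.normal(0.0, noise_scale, size=W.shape)
    return W, b

# Hypothetical meta-learned master neuron for a 64-unit hidden layer.
hidden_dim = 64
master_w = np.full(hidden_dim, 0.1)
master_b = 0.0

# A new 5-way task and a new 20-way task reuse the same master neuron.
W5, b5 = init_output_layer(master_w, master_b, num_classes=5)
W20, b20 = init_output_layer(master_w, master_b, num_classes=20)
print(W5.shape, W20.shape)  # (5, 64) (20, 64)
```

During meta-training only the single master neuron (plus the hidden-layer initialization) is updated, so the head can be re-instantiated at any width at evaluation time.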


Chameleon: Learning Model Initializations Across Tasks With Different Schemas

Parametric models, and particularly neural networks, require weight init...

Meta-learning the Learning Trends Shared Across Tasks

Meta-learning stands for 'learning to learn' such that generalization to...

On the Importance of Attention in Meta-Learning for Few-Shot Text Classification

Current deep learning based text classification methods are limited by t...

Meta-Learning with Adaptive Layerwise Metric and Subspace

Recent advances in meta-learning demonstrate that deep representations c...

Robust MAML: Prioritization task buffer with adaptive learning process for model-agnostic meta-learning

Model agnostic meta-learning (MAML) is a popular state-of-the-art meta-l...

Adaptable Text Matching via Meta-Weight Regulator

Neural text matching models have been used in a range of applications su...

Analytic Network Learning

Based on the property that solving the system of linear matrix equations...
