Ignorance is Bliss: Robust Control via Information Gating

03/10/2023
by Manan Tomar, et al.

Informational parsimony – i.e., using the minimal information required for a task – provides a useful inductive bias for learning representations that generalize better by being robust to noise and spurious correlations. We propose information gating in the pixel space as a way to learn more parsimonious representations. Information gating works by learning masks that capture only the minimal information required to solve a given task. Intuitively, our models learn to identify which visual cues actually matter for a given task. We gate information using a differentiable parameterization of the signal-to-noise ratio, which can be applied to arbitrary values in a network, e.g., masking out pixels at the input layer. We apply our approach, which we call InfoGating, to various objectives, including multi-step forward and inverse dynamics, Q-learning, behavior cloning, and standard self-supervised tasks. Our experiments show that learning to identify and use minimal information can improve generalization in downstream tasks – e.g., policies based on info-gated images are considerably more robust to distracting/irrelevant visual features.
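To make the signal-to-noise-ratio gating idea concrete, here is a minimal sketch of how such a gate could be applied to pixels. This is an illustrative assumption, not the paper's implementation: `info_gate` and its `mask_logits` input are hypothetical names, and in practice the logits would come from a small mask network trained end-to-end with the task loss.

```python
import numpy as np

def info_gate(x, mask_logits, noise_std=1.0, rng=None):
    """Soft-gate values x (e.g., pixels) with a learned mask.

    Where the mask is near 0, the signal is replaced by noise
    (low signal-to-noise ratio); where it is near 1, x passes
    through unchanged. The sigmoid keeps the gate differentiable,
    so mask_logits can be trained by backpropagation.

    Hypothetical sketch: mask_logits would normally be produced
    by a learned mask network, not supplied by hand.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    mask = 1.0 / (1.0 + np.exp(-mask_logits))       # sigmoid, in (0, 1)
    noise = rng.normal(0.0, noise_std, size=x.shape)
    # Convex blend: keep x where mask ~ 1, pure noise where mask ~ 0.
    gated = mask * x + (1.0 - mask) * noise
    return gated, mask

# Usage: with strongly positive logits the gate is open and the
# input survives; with strongly negative logits only noise remains.
x = np.ones((4, 4))
gated_open, _ = info_gate(x, np.full((4, 4), 10.0))
gated_closed, _ = info_gate(x, np.full((4, 4), -10.0))
```

A sparsity penalty on the mask (e.g., an L1 term) would then push the model toward keeping only the pixels that actually matter for the task, which is the parsimony pressure described above.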


