Disentangled (Un)Controllable Features

10/31/2022
by Jacob E. Kooi, et al.

In MDPs with high-dimensional states, reinforcement learning often achieves better results when it operates on a compressed, low-dimensional representation of the original input space, and a variety of learning objectives have been used to learn such representations. However, the individual features of these representations are usually not interpretable. We propose a representation learning algorithm that disentangles the latent features into a controllable and an uncontrollable part. The resulting representations are easily interpretable and can be used for efficient learning and planning by leveraging the specific properties of the two parts. To highlight the benefits of the approach, the disentangling properties of the algorithm are illustrated in three different environments.
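
The abstract does not specify the architecture or losses, but the core idea of splitting a learned latent state into a controllable part and an uncontrollable part can be sketched as an encoder with two heads, trained so that only one head's dynamics depend on the action. The PyTorch code below is a minimal, hypothetical illustration of such a split under that assumption; the module names, dimensions, and loss choices are mine, not the authors' implementation.

```python
# Minimal sketch (an assumption, not the paper's method) of a latent state
# split into a controllable part z_c and an uncontrollable part z_u.
import torch
import torch.nn as nn

class SplitEncoder(nn.Module):
    def __init__(self, obs_dim, z_c_dim=2, z_u_dim=2):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU())
        self.head_c = nn.Linear(128, z_c_dim)   # controllable features
        self.head_u = nn.Linear(128, z_u_dim)   # uncontrollable features

    def forward(self, obs):
        h = self.trunk(obs)
        return self.head_c(h), self.head_u(h)

class SplitTransition(nn.Module):
    """Action-conditioned model for z_c, action-independent model for z_u."""
    def __init__(self, z_c_dim=2, z_u_dim=2, action_dim=1):
        super().__init__()
        self.f_c = nn.Sequential(nn.Linear(z_c_dim + action_dim, 64),
                                 nn.ReLU(), nn.Linear(64, z_c_dim))
        self.f_u = nn.Sequential(nn.Linear(z_u_dim, 64),
                                 nn.ReLU(), nn.Linear(64, z_u_dim))

    def forward(self, z_c, z_u, action):
        return self.f_c(torch.cat([z_c, action], dim=-1)), self.f_u(z_u)

# One hypothetical training step on random data: the uncontrollable branch
# never sees the action, so action-dependent dynamics are pushed into z_c.
enc, dyn = SplitEncoder(obs_dim=16), SplitTransition(action_dim=1)
opt = torch.optim.Adam(list(enc.parameters()) + list(dyn.parameters()), lr=1e-3)

obs, next_obs = torch.randn(32, 16), torch.randn(32, 16)
action = torch.randn(32, 1)

z_c, z_u = enc(obs)
z_c_next, z_u_next = enc(next_obs)            # targets for the latent dynamics
pred_c, pred_u = dyn(z_c, z_u, action)
loss = nn.functional.mse_loss(pred_c, z_c_next.detach()) + \
       nn.functional.mse_loss(pred_u, z_u_next.detach())
opt.zero_grad()
loss.backward()
opt.step()
```

The asymmetry in the transition models is the point of the sketch: because the uncontrollable head is predicted without the action, anything the agent can influence is easier to model in the controllable head, which is one plausible way to obtain the kind of disentanglement the abstract describes.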
