Scalable Vector Gaussian Information Bottleneck

02/15/2021
by Mohammad Mahdi Mahvari, et al.

In the context of statistical learning, the Information Bottleneck method seeks the right balance between accuracy and generalization capability through a suitable tradeoff between compression complexity, measured by minimum description length, and distortion, evaluated under the logarithmic loss measure. In this paper, we study a variation of the problem, called the scalable information bottleneck, in which the encoder outputs multiple descriptions of the observation with increasingly richer features. The model, which is of successive-refinement type with degraded side-information streams at the decoders, is motivated by application scenarios that require varying levels of accuracy depending on the allowed (or targeted) level of complexity. We establish an analytic characterization of the optimal relevance-complexity region for vector Gaussian sources. Then, we derive a variational-inference-type algorithm for general sources with unknown distribution, and show how to parametrize it using neural networks. Finally, we provide experimental results on the MNIST dataset which illustrate that the proposed method generalizes better to data unseen during the training phase.
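To make the tradeoff concrete, the following is a minimal sketch of a variational IB objective of the kind the abstract alludes to: a logarithmic-loss distortion term plus a Lagrange weight beta times a rate (complexity) term, here the closed-form KL divergence between a diagonal-Gaussian encoder posterior and a standard-normal prior. The function name, argument shapes, and the single-description (non-scalable) setting are illustrative assumptions, not the authors' algorithm.

```python
import math

def vib_objective(mus, log_vars, log_probs, labels, beta):
    """Sketch of a variational IB loss (illustrative, not the paper's method):
    mean over the batch of  distortion + beta * rate, where
      distortion = log-loss of the true label under the decoder, and
      rate       = KL( N(mu, diag(exp(log_var))) || N(0, I) ) in closed form.
    mus, log_vars : per-example latent mean / log-variance vectors
    log_probs     : per-example decoder log-probability vectors
    labels        : per-example true class indices
    beta          : tradeoff weight between complexity and distortion
    """
    total = 0.0
    for mu, lv, lp, y in zip(mus, log_vars, log_probs, labels):
        # Closed-form KL divergence for a diagonal Gaussian vs. N(0, I)
        kl = 0.5 * sum(math.exp(v) + m * m - 1.0 - v for m, v in zip(mu, lv))
        distortion = -lp[y]  # logarithmic-loss distortion of the true label
        total += distortion + beta * kl
    return total / len(labels)
```

With the encoder collapsed onto the prior (zero mean, unit variance) the rate term vanishes and only the log-loss distortion remains; increasing beta penalizes descriptions that deviate from the prior, i.e. more complex representations.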


Related research

11/02/2020 · On the Relevance-Complexity Region of Scalable Information Bottleneck
The Information Bottleneck method is a learning technique that seeks a r...

07/11/2018 · Distributed Variational Representation Learning
The problem of distributed representation learning is one in which multi...

01/31/2020 · On the Information Bottleneck Problems: Models, Connections, Applications and Information Theoretic Views
This tutorial paper focuses on the variants of the bottleneck problem ta...

04/05/2016 · Collaborative Information Bottleneck
This paper investigates a multi-terminal source coding problem under a l...

04/24/2020 · The Variational Bandwidth Bottleneck: Stochastic Evaluation on an Information Budget
In many applications, it is desirable to extract only the relevant infor...

02/09/2022 · Reducing Redundancy in the Bottleneck Representation of the Autoencoders
Autoencoders are a type of unsupervised neural networks, which can be us...
