Learning identifiable and interpretable latent models of high-dimensional neural activity using pi-VAE

11/09/2020
by Ding Zhou, et al.

The ability to record activity from hundreds of neurons simultaneously in the brain has created an increasing demand for statistical techniques appropriate for analyzing such data. Recently, deep generative models have been proposed to fit neural population responses. While these methods are flexible and expressive, the downside is that they can be difficult to interpret and identify. To address this problem, we propose a method that integrates key ingredients from latent variable models and traditional neural encoding models. Our method, pi-VAE, is inspired by recent progress on identifiable variational auto-encoders, which we adapt for neuroscience applications. Specifically, we propose to construct latent variable models of neural activity while simultaneously modeling the relation between the latents and task variables (non-neural variables, e.g. sensory, motor, and other externally observable states). Incorporating the task variables yields models that are not only more constrained but also qualitatively more interpretable and identifiable. We validate pi-VAE using synthetic data, and apply it to analyze neurophysiological datasets from rat hippocampus and macaque motor cortex. We demonstrate that pi-VAE not only fits the data better, but also provides unexpected novel insights into the structure of the neural codes.
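
The central architectural idea described above is a variational auto-encoder whose prior over the latent variables is conditioned on the observed task variables (a label prior), combined with a Poisson observation model for spike counts; conditioning the prior on the task variables is what constrains the latents and supports identifiability. The sketch below illustrates this construction in PyTorch. It is a minimal approximation, not the authors' implementation: the module names, layer sizes, the simple Gaussian label prior, and the use of q(z | x) alone as the approximate posterior are assumptions made for brevity (pi-VAE itself combines the encoder with the label prior and uses a flow-based decoder).

# Sketch of a label-prior VAE with a Poisson decoder for spike counts.
# Illustrative only; architecture choices below are assumptions, not pi-VAE's exact design.
import torch
import torch.nn as nn

class ConditionalPriorVAE(nn.Module):
    def __init__(self, n_neurons, n_latent, n_task):
        super().__init__()
        # Encoder q(z | x): spike counts -> mean and log-variance of z
        self.encoder = nn.Sequential(nn.Linear(n_neurons, 64), nn.Tanh(),
                                     nn.Linear(64, 2 * n_latent))
        # Label prior p(z | u): task variable -> mean and log-variance of z
        self.label_prior = nn.Sequential(nn.Linear(n_task, 32), nn.Tanh(),
                                         nn.Linear(32, 2 * n_latent))
        # Decoder p(x | z): latent -> log firing rates of a Poisson model
        self.decoder = nn.Sequential(nn.Linear(n_latent, 64), nn.Tanh(),
                                     nn.Linear(64, n_neurons))

    def forward(self, x, u):
        # Approximate posterior parameters from neural activity
        mu_q, logvar_q = self.encoder(x).chunk(2, dim=-1)
        # Prior parameters from the task variable
        mu_p, logvar_p = self.label_prior(u).chunk(2, dim=-1)
        # Reparameterized sample of z
        z = mu_q + torch.randn_like(mu_q) * torch.exp(0.5 * logvar_q)
        # Poisson negative log-likelihood of the spike counts
        log_rate = self.decoder(z)
        nll = nn.functional.poisson_nll_loss(
            log_rate, x, log_input=True, reduction='sum')
        # KL divergence between diagonal Gaussians, KL( q(z|x) || p(z|u) )
        kl = 0.5 * (logvar_p - logvar_q
                    + (logvar_q.exp() + (mu_q - mu_p) ** 2) / logvar_p.exp()
                    - 1.0).sum()
        return nll + kl  # negative ELBO to minimize

# Usage on toy data: 100 trials, 50 neurons, 2 latents, 1 task variable
x = torch.poisson(torch.rand(100, 50) * 5.0)
u = torch.rand(100, 1)
model = ConditionalPriorVAE(n_neurons=50, n_latent=2, n_task=1)
loss = model(x, u)
loss.backward()

Training then amounts to minimizing this negative ELBO over trials; because the prior on z depends on u, the learned latent space stays aligned with the task structure rather than an arbitrary nonlinear remixing of it, which is the source of the interpretability and identifiability gains discussed in the abstract.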



