StarNet: Gradient-free Training of Deep Generative Models using Determined System of Linear Equations

01/03/2021
by   Amir Zadeh, et al.

In this paper we present an approach for training deep generative models based solely on solving determined systems of linear equations. A network trained with this approach, called a StarNet, has the following desirable properties: 1) training requires no gradients, since the solution to each system of linear equations is deterministic rather than stochastic; 2) it is highly scalable when solving the systems of linear equations with respect to the latent codes, and likewise with respect to the model parameters; and 3) it yields desirable least-squares bounds on the estimates of the latent codes and network parameters within each layer.
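The layer-wise, gradient-free procedure the abstract describes can be illustrated with a minimal sketch: alternately solving a determined linear system for the latent codes and a least-squares problem for the layer weights, with no backpropagation involved. The NumPy snippet below is an assumption-laden illustration of that idea only; the single-layer setting, shapes, and variable names are hypothetical and do not reflect the authors' implementation.

```python
import numpy as np

# Hypothetical sketch of one StarNet-style layer update: solve
# determined linear systems for latent codes, then a least-squares
# problem for the weights. No gradients are computed anywhere.
rng = np.random.default_rng(0)
d = 64    # layer width (square weight matrix -> determined system)
n = 256   # number of training examples

X = rng.standard_normal((d, n))   # observed activations, one column per example
W = rng.standard_normal((d, d))   # current layer weights

# Step 1: solve W @ Z = X for the latent codes Z
# (one determined linear system per example, solved jointly here).
Z = np.linalg.solve(W, X)

# Step 2: solve Z.T @ W_new.T = X.T for the weights in the
# least-squares sense over the batch (n >= d).
W_new, *_ = np.linalg.lstsq(Z.T, X.T, rcond=None)
W_new = W_new.T

# Sanity check: with exact solves, W_new @ Z reproduces X
# up to numerical error.
print(np.max(np.abs(W_new @ Z - X)))
```

In this sketch the latent-code solve is exact because the weight matrix is square (a determined system), while the weight solve over a batch is overdetermined and therefore handled in the least-squares sense, which is consistent with the least-squares bounds mentioned in the abstract.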


Related research

10/31/2017  Flexible Prior Distributions for Deep Generative Models
12/07/2012  Layer-wise learning of deep generative models
12/09/2018  Physics-informed deep generative models
12/18/2018  A Factorial Mixture Prior for Compositional Deep Generative Models
05/01/2020  Computing Absolute Free Energy with Deep Generative Models
04/15/2020  Eigendecomposition-Free Training of Deep Networks for Linear Least-Square Problems
01/06/2020  Granular Learning with Deep Generative Models using Highly Contaminated Data
