Learning (Very) Simple Generative Models Is Hard

05/31/2022
by Sitan Chen, et al.

Motivated by the recent empirical successes of deep generative models, we study the computational complexity of the following unsupervised learning problem. For an unknown neural network F:ℝ^d→ℝ^d', let D be the distribution over ℝ^d' given by pushing the standard Gaussian 𝒩(0,Id_d) through F. Given i.i.d. samples from D, the goal is to output any distribution close to D in statistical distance. We show under the statistical query (SQ) model that no polynomial-time algorithm can solve this problem even when the output coordinates of F are one-hidden-layer ReLU networks with log(d) neurons. Previously, the best lower bounds for this problem simply followed from lower bounds for supervised learning and required at least two hidden layers and poly(d) neurons [Daniely-Vardi '21, Chen-Gollakota-Klivans-Meka '22]. The key ingredient in our proof is an ODE-based construction of a compactly supported, piecewise-linear function f with polynomially-bounded slopes such that the pushforward of 𝒩(0,1) under f matches all low-degree moments of 𝒩(0,1).
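To make the setup concrete, here is a minimal numerical sketch (Python/NumPy) of the two objects in the abstract: the data-generation process the learner faces, where samples of D are produced by pushing standard Gaussian inputs through a network F whose output coordinates are one-hidden-layer ReLU networks with log(d) neurons, and a Monte Carlo check of what "matching low-degree moments" means. The random weights below are an illustrative stand-in for the hard instance, and the identity map is only a trivially-matching placeholder for f; the paper's actual ODE-based construction is not reproduced here.

import numpy as np

rng = np.random.default_rng(0)

d = 64                          # input dimension
k = max(1, int(np.log(d)))      # log(d) hidden neurons per output coordinate,
                                # the regime covered by the lower bound

# Random weights stand in for the (unspecified) hard instance from the paper.
W = rng.standard_normal((d, k, d))   # first-layer weights, one k x d block per output
b = rng.standard_normal((d, k))      # first-layer biases
a = rng.standard_normal((d, k))      # second-layer weights

def F(z):
    # Each output coordinate is a one-hidden-layer ReLU network:
    # F_o(z) = sum_k a[o,k] * relu(W[o,k,:] . z + b[o,k])
    pre = np.einsum('okd,nd->nok', W, z) + b       # (n, d, k) pre-activations
    return np.einsum('ok,nok->no', a, np.maximum(pre, 0.0))

# The learner only sees i.i.d. samples from D = the pushforward of N(0, Id_d) under F.
z = rng.standard_normal((100_000, d))
samples = F(z)

# Monte Carlo estimate of the first m moments of the pushforward of N(0,1) under
# a one-dimensional map f, to illustrate the moment-matching condition.
def pushforward_moments(f, m=6, n=2_000_000):
    x = f(rng.standard_normal(n))
    return np.array([np.mean(x ** j) for j in range(1, m + 1)])

gaussian_moments = np.array([0.0, 1.0, 0.0, 3.0, 0.0, 15.0])  # E[z^j], j = 1..6

# Sanity check with f = identity (which matches trivially); the paper constructs a
# compactly supported, piecewise-linear f with polynomially-bounded slopes that
# matches all low-degree moments while being far from the identity.
print(np.round(pushforward_moments(lambda t: t) - gaussian_moments, 2))

The moment check is the operational content of the key ingredient: an SQ algorithm restricted to low-degree moment statistics cannot distinguish the pushforward of 𝒩(0,1) under such an f from 𝒩(0,1) itself.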
