Sample Complexity Bounds for Learning High-dimensional Simplices in Noisy Regimes

09/09/2022
by Amir Hossein Saberi, et al.

In this paper, we propose a sample complexity bound for learning a simplex from noisy samples. We are given a dataset of n i.i.d. samples drawn from a uniform distribution over an unknown, arbitrary simplex in ℝ^K, where each sample is corrupted by additive Gaussian noise of arbitrary magnitude. We propose a strategy that, with high probability, outputs a simplex within total variation distance ϵ + O(SNR^-1) of the true simplex, for any ϵ>0. We prove that n≥Õ(K^2/ϵ^2) samples suffice to get this close to the true simplex. Here, SNR stands for the signal-to-noise ratio, which can be viewed as the ratio of the diameter of the simplex to the standard deviation of the noise. Our proofs build on recent advances in sample compression techniques, which have already shown promise in deriving tight bounds for density estimation in high-dimensional Gaussian mixture models.
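
To make the sampling model concrete, the following is a minimal sketch (not the authors' code) of the data-generating process described above: points drawn uniformly from an arbitrary simplex in ℝ^K and corrupted by additive Gaussian noise, with SNR taken as the simplex diameter divided by the noise standard deviation. The function name sample_noisy_simplex, the Dirichlet-weight construction for uniform sampling, and the isotropic noise covariance σ^2 I are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def sample_noisy_simplex(vertices, n, sigma, rng=None):
    """Draw n noisy samples: uniform on the simplex spanned by `vertices`
    (a (K+1) x K array), plus additive N(0, sigma^2 I) noise (assumed isotropic)."""
    rng = np.random.default_rng(rng)
    # Uniform points on the simplex: convex combinations of the vertices
    # with Dirichlet(1, ..., 1) weights.
    weights = rng.dirichlet(np.ones(len(vertices)), size=n)
    clean = weights @ vertices                      # shape (n, K)
    return clean + sigma * rng.standard_normal(clean.shape)

# Example: an arbitrary simplex in R^5 with noise level set so that SNR = 10.
K = 5
rng = np.random.default_rng(0)
V = rng.standard_normal((K + 1, K))                 # K+1 vertices of the simplex
diameter = max(np.linalg.norm(u - v) for u in V for v in V)
sigma = diameter / 10.0                             # SNR = diameter / sigma = 10
X = sample_noisy_simplex(V, n=1000, sigma=sigma, rng=rng)
print(X.shape)                                      # (1000, 5)
```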

Related research

11/21/2017: Parameter Estimation in Gaussian Mixture Models with Malicious Noise, without Balanced Mixing Coefficients
01/09/2020: Gaussian Approximation of Quantization Error for Estimation from Compressed Data
04/18/2016: Learning Sparse Additive Models with Interactions in High Dimensions
06/06/2022: Mean Estimation in High-Dimensional Binary Markov Gaussian Mixture Models
05/02/2016: Algorithms for Learning Sparse Additive Models with Interactions in High Dimensions
11/07/2012: Blind Signal Separation in the Presence of Gaussian Noise
10/14/2017: Agnostic Distribution Learning via Compression
