Improving approximate RPCA with a k-sparsity prior

12/29/2014
by Maximilian Karl, et al.

A process-centric view of robust PCA (RPCA) allows a fast approximate implementation based on a special form of deep neural network with weights shared across all layers. Empirically, however, this fast approximation to RPCA fails to find representations that are parsimonious. We resolve these bad local minima by relaxing the elementwise L1 and L2 priors and instead utilizing a structure-inducing k-sparsity prior. In a discriminative classification task, the newly learned representations significantly outperform those obtained from the original approximate RPCA formulation.
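To make the k-sparsity idea concrete, here is a minimal NumPy sketch of a hard top-k operator of the kind used in k-sparse coding schemes: each code vector keeps only its k largest-magnitude entries and zeros out the rest. This is an illustrative stand-in, not the paper's exact operator; the function name, shapes, and the per-row application are assumptions.

```python
import numpy as np

def k_sparse_projection(z, k):
    """Keep the k largest-magnitude entries of each row of z; zero the rest.

    Illustrative sketch of a k-sparsity operator (assumes k <= z.shape[1]);
    the paper's actual prior/operator may differ.
    """
    z = np.asarray(z, dtype=float)
    # Indices of the k largest |z| entries in each row.
    idx = np.argpartition(np.abs(z), -k, axis=1)[:, -k:]
    out = np.zeros_like(z)
    rows = np.arange(z.shape[0])[:, None]  # broadcast row indices
    out[rows, idx] = z[rows, idx]
    return out

# Example: project random codes onto the set of 3-sparse vectors.
codes = np.random.randn(4, 10)
sparse_codes = k_sparse_projection(codes, k=3)
assert (np.count_nonzero(sparse_codes, axis=1) <= 3).all()
```

In an unrolled RPCA-style network, a projection of this kind would presumably take the place of the elementwise soft-thresholding (the proximal operator of the L1 prior) applied at each layer, enforcing structured sparsity rather than a per-element penalty.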

research, 01/30/2019
Blurred Images Lead to Bad Local Minima

research, 07/09/2020
Maximum-and-Concatenation Networks
While successful in many fields, deep neural networks (DNNs) still suffe...

research, 03/04/2022
Sparsity-Inducing Categorical Prior Improves Robustness of the Information Bottleneck
The information bottleneck framework provides a systematic approach to l...

research, 07/15/2022
Sparse Relational Reasoning with Object-Centric Representations
We investigate the composability of soft-rules learned by relational neu...

research, 03/02/2020
Fiedler Regularization: Learning Neural Networks with Graph Sparsity
We introduce a novel regularization approach for deep learning that inco...

research, 09/27/2019
Fast Fixed Dimension L2-Subspace Embeddings of Arbitrary Accuracy, With Application to L1 and L2 Tasks
We give a fast oblivious L2-embedding of A∈R^n x d to B∈R^r x d satisfyi...

research, 10/26/2021
Defensive Tensorization
We propose defensive tensorization, an adversarial defence technique tha...
