PMaF: Deep Declarative Layers for Principal Matrix Features

Zhiwei Xu et al.
Australian National University

We explore two differentiable deep declarative layers, namely least squares on the sphere (LESS) and implicit eigen decomposition (IED), for learning principal matrix features (PMaF). These layers represent data features as a low-dimensional vector that captures the dominant information of a high-dimensional matrix. Under a bi-level optimization framework, we solve each problem with iterative optimization in the forward pass and then backpropagate through the solution using implicit gradients. In particular, we study adaptive descent steps with backtracking line search and descent decay in the tangent space to improve the forward-pass efficiency of LESS. Meanwhile, we exploit problem structure to greatly reduce the computational complexity of the backward passes of LESS and IED. Empirically, we demonstrate the superiority of our layers over off-the-shelf baselines in both solution optimality and computational cost.
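The LESS forward pass described above can be illustrated with a minimal sketch: least squares constrained to the unit sphere, solved by Riemannian gradient descent with Armijo backtracking line search in the tangent space. All function names, parameter values, and the retraction choice below are our own illustration and assumptions, not the paper's implementation.

```python
import numpy as np

def less_forward(A, b, iters=200, t0=1.0, beta=0.5, c=1e-4):
    """Minimize ||A x - b||^2 subject to ||x|| = 1 (illustrative sketch).

    Riemannian gradient descent on the unit sphere with Armijo
    backtracking line search; the retraction is simple renormalization.
    """
    f = lambda x: np.sum((A @ x - b) ** 2)
    x = np.ones(A.shape[1]) / np.sqrt(A.shape[1])
    for _ in range(iters):
        grad = 2 * A.T @ (A @ x - b)
        rgrad = grad - (x @ grad) * x       # project onto tangent space at x
        if np.linalg.norm(rgrad) < 1e-10:
            break
        t = t0
        # Backtracking: shrink the step until the Armijo condition holds.
        while True:
            x_new = x - t * rgrad
            x_new = x_new / np.linalg.norm(x_new)   # retract to the sphere
            if f(x_new) <= f(x) - c * t * (rgrad @ rgrad) or t < 1e-12:
                break
            t *= beta
        x = x_new
    return x

# Small random instance to exercise the solver.
rng = np.random.default_rng(1)
A = rng.standard_normal((8, 4))
b = rng.standard_normal(8)
x_star = less_forward(A, b)
```

The returned `x_star` stays on the unit sphere by construction, and at convergence the Riemannian (tangent-space) gradient vanishes, which is the first-order optimality condition for this constrained problem.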
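For the IED layer, the key idea of implicit differentiation can be sketched on the simplest case: the dominant eigenpair of a symmetric matrix. The forward pass runs an iterative solver (power iteration here), while the backward pass uses the closed-form implicit gradient of the eigenvalue, d(lambda)/dM = v v^T for a unit eigenvector v, instead of differentiating through the iterations. This is a generic textbook identity used for illustration, not the paper's IED algorithm.

```python
import numpy as np

def dominant_eigpair(M, iters=1000):
    """Power iteration for the dominant eigenpair of a symmetric matrix M."""
    v = np.ones(M.shape[0]) / np.sqrt(M.shape[0])
    for _ in range(iters):
        w = M @ v
        v = w / np.linalg.norm(w)
    lam = v @ M @ v  # Rayleigh quotient
    return lam, v

def grad_lambda_wrt_M(v):
    """Implicit gradient d(lambda)/dM = v v^T for a unit eigenvector v."""
    return np.outer(v, v)

# Finite-difference check of the implicit gradient on a random
# symmetric matrix.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
M = (A + A.T) / 2

lam, v = dominant_eigpair(M)
g = grad_lambda_wrt_M(v)

eps = 1e-6
i, j = 1, 3
E = np.zeros_like(M)
E[i, j] = E[j, i] = eps          # symmetric perturbation of entries (i,j),(j,i)
lam_p, _ = dominant_eigpair(M + E)
fd = (lam_p - lam) / eps         # should match g[i, j] + g[j, i]
```

Because the backward pass is a single outer product, its cost is independent of the number of forward iterations, which is the efficiency argument behind implicit gradients in declarative layers.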



Related papers:
- SHINE: SHaring the INverse Estimate from the forward pass for bi-level optimization and implicit models
- Differentiable Frank-Wolfe Optimization Layer
- Exploiting Problem Structure in Deep Declarative Networks: Two Case Studies
- Scaling up and Stabilizing Differentiable Planning with Implicit Differentiation
- Differentiable Particle Filtering without Modifying the Forward Pass
- One-Pass Learning via Bridging Orthogonal Gradient Descent and Recursive Least-Squares
- Object Representations as Fixed Points: Training Iterative Refinement Algorithms with Implicit Differentiation
