Physarum Powered Differentiable Linear Programming Layers and Applications

by Zihang Meng et al.

Consider a learning algorithm that makes an internal call to an optimization routine such as a generalized eigenvalue problem, a cone program, or even sorting. Integrating such a method as a layer within a trainable deep network in a numerically stable way is not simple; for instance, only recently have strategies emerged for differentiable eigendecomposition and sorting. We propose an efficient, differentiable solver for general linear programming (LP) problems that can be used in a plug-and-play manner as a layer within deep neural networks. Our development is inspired by a fascinating but not widely used link between the dynamics of slime mold (Physarum) and mathematical optimization schemes such as steepest descent. We describe our development, review the relevant known results, and provide a technical analysis of the solver's applicability to our use cases. We demonstrate the solver on a video object segmentation task and on meta-learning for few-shot learning: it performs comparably with a customized projected gradient descent method on the first task and outperforms the recently proposed differentiable CVXPY solver on the second. Experiments show that our solver converges quickly without requiring a feasible initial point. Our scheme is easy to implement and can readily serve as a layer whenever a learning procedure needs a fast approximate solution to an LP within a larger network.
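To make the Physarum-to-LP link concrete, the following is a minimal sketch (not the paper's exact layer) of the standard discretized Physarum dynamics for an LP in equality form, min c^T x subject to Ax = b, x >= 0, assuming a strictly positive cost vector c. The function name and step-size parameter are illustrative choices, not from the paper.

```python
import numpy as np

def physarum_lp(c, A, b, steps=2000, h=0.1):
    """Approximate the LP  min c^T x  s.t.  A x = b, x >= 0
    via discretized Physarum dynamics (illustrative sketch).
    Assumes c > 0; the starting point need not be feasible."""
    m, n = A.shape
    x = np.ones(n)  # strictly positive initial point
    for _ in range(steps):
        W = np.diag(x / c)                   # "conductances" of the slime mold network
        p = np.linalg.solve(A @ W @ A.T, b)  # "potentials" from a Laplacian-like system
        q = W @ A.T @ p                      # induced "flow"; satisfies A q = b by construction
        x = (1 - h) * x + h * q              # Euler step of the dynamics dx/dt = q - x
    return x

# Tiny example: min x1 + 2*x2  s.t.  x1 + x2 = 1, x >= 0; optimum is x = (1, 0).
c = np.array([1.0, 2.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
x = physarum_lp(c, A, b)
```

Because every step consists of differentiable linear-algebra operations, the loop can in principle be unrolled under an autodiff framework (e.g., PyTorch) to obtain gradients with respect to c, A, and b, which is what makes this style of solver usable as a network layer.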




Related research:

- Differentiable Convex Optimization Layers
- Output Range Analysis for Deep Neural Networks
- Meta-learning with differentiable closed-form solvers
- A Solver + Gradient Descent Training Algorithm for Deep Neural Networks
- SATNet: Bridging deep learning and logical reasoning using a differentiable satisfiability solver
- Speeding up Linear Programming using Randomized Linear Algebra
- OptNet: Differentiable Optimization as a Layer in Neural Networks
