Log-sum-exp neural networks and posynomial models for convex and log-log-convex data

by Giuseppe C. Calafiore, et al.

We show that a one-layer feedforward neural network with exponential activation functions in the inner layer and a logarithmic activation in the output neuron is a universal approximator of convex functions. Such a network represents a family of scaled log-sum-exp functions, here named LSET. The proof uses a dequantization argument from tropical geometry. Under a suitable exponential transformation, LSET maps to a family of generalized posynomial functions, GPOST, which we also show to be universal approximators for log-log-convex functions. The key feature of the proposed approach is that, once an LSET network is trained on data, the resulting model is convex in its variables, which makes it readily amenable to efficient design via convex optimization. Similarly, once a GPOST model is trained on data, it yields a posynomial model that can be efficiently optimized with respect to its variables using Geometric Programming (GP). Many relevant phenomena in physics and engineering can indeed be modeled, either exactly or approximately, via convex or log-log-convex models. The proposed methodology is illustrated by two numerical examples, in which LSET and GPOST models are used first to approximate data gathered from simulations of two physical processes (the vibration of a vehicle suspension system, and the peak power generated by the combustion of propane), and then to optimize these models.
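The two function families can be sketched in a few lines of NumPy. Below, a scaled log-sum-exp function with temperature T (convex in x for T > 0) and the generalized posynomial obtained from it by the exponential change of variables x = log y; the function names, the parameter layout (A, b), and the max-shift stabilization are illustrative assumptions, not code from the paper.

```python
import numpy as np

def lse_t(x, A, b, T=1.0):
    """Scaled log-sum-exp: T * log(sum_i exp((a_i . x + b_i) / T)).

    Convex in x for any fixed A, b and T > 0; this is the function
    computed by one exp-activated hidden layer plus a log output neuron.
    """
    z = (A @ x + b) / T
    m = z.max()  # subtract the max before exponentiating, for stability
    return T * (m + np.log(np.exp(z - m).sum()))

def gpos_t(y, A, b, T=1.0):
    """Generalized posynomial: exp(lse_t(log y)).

    Expanding the definition gives (sum_i e^{b_i/T} prod_j y_j^{A_ij/T})^T,
    a log-log-convex function of y > 0.
    """
    return np.exp(lse_t(np.log(y), A, b, T))
```

For example, with A the 2x2 identity, b = 0 and T = 1, lse_t at the origin equals log 2, and the corresponding gpos_t at y = (1, 1) equals 2, i.e. the plain posynomial y1 + y2.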


A Universal Approximation Result for Difference of log-sum-exp Neural Networks

We show that a neural network whose output is obtained as the difference...

Disciplined Geometric Programming

We introduce log-log convex programs, which are optimization problems wi...

A continuum among logarithmic, linear, and exponential functions, and its potential to improve generalization in neural networks

We present the soft exponential activation function for artificial neura...

Parametrized Convex Universal Approximators for Decision-Making Problems

Parametrized max-affine (PMA) and parametrized log-sum-exp (PLSE) networ...

A Method of Sequential Log-Convex Programming for Engineering Design

A method of Sequential Log-Convex Programming (SLCP) is constructed that...

Deep equilibrium models as estimators for continuous latent variables

Principal Component Analysis (PCA) and its exponential family extensions...

Efficient Neural Network Analysis with Sum-of-Infeasibilities

Inspired by sum-of-infeasibilities methods in convex optimization, we pr...
