Extrapolation and learning equations

10/10/2016
by Georg Martius, et al.

In classical machine learning, regression is treated as a black-box process of identifying a suitable function from a hypothesis set, without attempting to gain insight into the mechanism connecting inputs and outputs. In the natural sciences, however, finding an interpretable function for a phenomenon is the prime goal, as it allows one to understand and generalize the results. This paper proposes a novel type of function-learning network, called the equation learner (EQL), that can learn analytical expressions and is able to extrapolate to unseen domains. It is implemented as an end-to-end differentiable feed-forward network and allows for efficient gradient-based training. Thanks to sparsity regularization, concise, interpretable expressions can be obtained; often the true underlying source expression is identified.
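To make the idea concrete, here is a minimal sketch (hypothetical, not the authors' implementation) of one EQL-style layer: a linear map feeds a bank of analytic units such as identity, sine, and a pairwise multiplication unit, while an L1 penalty on the weights encourages the sparse, readable expressions the paper aims for.

```python
import numpy as np

rng = np.random.default_rng(0)

def eql_layer(x, W, b):
    """One EQL-style layer (illustrative sketch).

    x: (n_samples, d_in) inputs; W: (d_in, 4) weights; b: (4,) biases.
    Returns (n_samples, 3) features from identity, sine, and
    multiplication units applied to the linear pre-activations.
    """
    z = x @ W + b                 # linear pre-activations
    ident = z[:, 0]               # identity unit
    sin_u = np.sin(z[:, 1])       # sine unit
    mult = z[:, 2] * z[:, 3]      # multiplication unit (takes two inputs)
    return np.stack([ident, sin_u, mult], axis=1)

def l1_penalty(W, lam=1e-2):
    """Sparsity regularizer that drives unused weights toward zero."""
    return lam * np.abs(W).sum()

# Tiny demo on random 2-D inputs.
x = rng.normal(size=(5, 2))
W = rng.normal(size=(2, 4))
b = np.zeros(4)
feats = eql_layer(x, W, b)
print(feats.shape)  # (5, 3)
```

Stacking such layers and training the linear weights by gradient descent, with the L1 term added to the regression loss, yields the kind of differentiable, sparsity-regularized architecture the abstract describes.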

