Kalman filters as the steady-state solution of gradient descent on variational free energy
The Kalman filter is an algorithm for estimating hidden variables in dynamical systems under linear Gauss-Markov assumptions, with widespread applications across different fields. Recently, its Bayesian interpretation has received growing attention, especially in neuroscience, robotics and machine learning. In neuroscience in particular, models of perception and control under the banners of predictive coding, optimal feedback control, active inference and, more generally, the so-called Bayesian brain hypothesis have all relied heavily on ideas behind the Kalman filter. Active inference, an algorithmic theory based on the free energy principle, specifically builds on approximate Bayesian inference methods, proposing a variational account of neural computation and behaviour in terms of gradients of variational free energy. Within this ambitious framework, several works have discussed possible relations between free energy minimisation and standard Kalman filters. With a few exceptions, however, such relations point to a merely qualitative resemblance, or rest on a diverse set of comparisons based on purported differences between free energy minimisation and Kalman filtering. In this work, we present a straightforward derivation of Kalman filters consistent with active inference, via a variational treatment of free energy minimisation in terms of gradient descent. The approach considered here offers a more direct link between models of neural dynamics as gradient descent and standard accounts of perception and decision making based on probabilistic inference, further bridging the gap between hypotheses about neural implementation and computational principles in brain and behavioural sciences.
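To make the steady-state claim concrete, below is a minimal sketch (our illustration, not code from the paper) for a single filtering step of a linear Gaussian model y = C x + noise, noise ~ N(0, R), with Gaussian prediction x ~ N(mu_prior, P); all names (C, R, P, mu_prior, the step size lr) are assumptions for the example. Gradient descent on the quadratic variational free energy F(mu) = 1/2 (y - C mu)' R^-1 (y - C mu) + 1/2 (mu - mu_prior)' P^-1 (mu - mu_prior) should settle, at its fixed point, on the standard Kalman posterior mean mu_prior + K (y - C mu_prior), with Kalman gain K = P C' (C P C' + R)^-1.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative linear Gaussian model (all quantities are made-up examples):
# observation y = C x + noise, noise ~ N(0, R); prediction x ~ N(mu_prior, P).
C = rng.standard_normal((3, 2))       # observation matrix
R = 0.5 * np.eye(3)                   # observation noise covariance
P = np.eye(2)                         # predictive (prior) covariance
mu_prior = rng.standard_normal(2)     # predicted mean
y = rng.standard_normal(3)            # observed data

R_inv, P_inv = np.linalg.inv(R), np.linalg.inv(P)

def grad_F(mu):
    """Gradient of the variational free energy
    F(mu) = 1/2 (y - C mu)' R^-1 (y - C mu)
          + 1/2 (mu - mu_prior)' P^-1 (mu - mu_prior)
    with respect to the posterior mean mu."""
    return -C.T @ R_inv @ (y - C @ mu) + P_inv @ (mu - mu_prior)

# Gradient descent on F; the step size is set from the Hessian so the
# iteration contracts to the unique minimum of the quadratic objective.
H = C.T @ R_inv @ C + P_inv
lr = 1.0 / np.linalg.eigvalsh(H).max()
mu = mu_prior.copy()
for _ in range(10_000):
    mu = mu - lr * grad_F(mu)

# Closed-form Kalman update for the same step.
K = P @ C.T @ np.linalg.inv(C @ P @ C.T + R)   # Kalman gain
mu_kalman = mu_prior + K @ (y - C @ mu_prior)

print("gradient-descent steady state:", mu)
print("Kalman posterior mean:        ", mu_kalman)
assert np.allclose(mu, mu_kalman, atol=1e-8)
```

The agreement at the fixed point follows from the standard matrix identity (C' R^-1 C + P^-1)^-1 C' R^-1 = P C' (C P C' + R)^-1; the dynamical, gradient-flow route to the same answer is what links the filter to models of neural dynamics in the sense discussed above.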