On and beyond Total Variation regularisation in imaging: the role of space variance
Over the last 30 years, a plethora of variational regularisation models for image reconstruction has been proposed and thoroughly investigated by the applied mathematics community. Among them, the pioneering prototype often taught in introductory courses on mathematical image processing is the celebrated Rudin-Osher-Fatemi (ROF) model <cit.>, which relies on the minimisation of the edge-preserving Total Variation (TV) semi-norm as regularisation term. Despite its (often limiting) simplicity, this model is still widely employed in many applications and used as a benchmark for assessing the performance of modern learning-based image reconstruction approaches, thanks to how well it is understood both analytically and numerically. Among the many extensions of TV proposed over the years, a large class is based on the concept of space variance. Space-variant models can indeed overcome the intrinsic inability of TV to describe local features (strength, sharpness, directionality) by means of adaptive mathematical modelling which accommodates local regularisation weighting, variable smoothness and anisotropy. These ideas can further be cast in the flexible Bayesian framework of generalised Gaussian distributions and combined with maximum-likelihood and hierarchical optimisation approaches for efficient hyper-parameter estimation. In this work, we review and connect the major contributions in the field of space-variant TV-type image reconstruction models, focusing in particular on their Bayesian interpretation, which paves the way to exciting new and unexplored research directions.
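For readers unfamiliar with the model, the ROF problem mentioned above is commonly stated as the variational minimisation sketched below; the space-variant weighted variant that follows it is one illustrative instance of the adaptive models surveyed in the work. These are standard textbook forms, not quoted from this paper; f denotes the observed image, u the reconstruction, Ω the image domain and λ a positive regularisation parameter.

% Classical ROF model: TV regularisation with a global (constant) weight lambda > 0
\min_{u}\;\; \mathrm{TV}(u) \;+\; \frac{\lambda}{2}\,\|u - f\|_{L^{2}(\Omega)}^{2},
\qquad \mathrm{TV}(u) \;=\; \int_{\Omega} |Du|.

% Space-variant weighted TV (illustrative sketch): the constant weight is replaced by
% a spatially varying lambda(x) > 0, allowing locally adapted regularisation strength
\min_{u}\;\; \int_{\Omega} \lambda(x)\,|Du| \;+\; \frac{1}{2}\,\|u - f\|_{L^{2}(\Omega)}^{2}.

Further space-variant generalisations of the kind discussed in the abstract additionally vary the smoothness exponent of the regulariser and introduce directional (anisotropic) weighting in the TV term.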