Faster Uncertainty Quantification for Inverse Problems with Conditional Normalizing Flows

07/15/2020
by Ali Siahkoohi, et al.

In inverse problems, we often have access to data consisting of paired samples (x, y) ∼ p_X,Y(x, y), where y are partial observations of a physical system and x represents the unknowns of the problem. Under these circumstances, we can employ supervised training to learn a solution x and its uncertainty from the observations y. We refer to this as the "supervised" case. However, the data y ∼ p_Y(y) collected at one point could be distributed differently from observations y' ∼ p_Y'(y') relevant to a current set of problems. In the context of Bayesian inference, we propose a two-step scheme that makes use of normalizing flows and joint data to train a conditional generator q_θ(x|y) approximating the target posterior density p_X|Y(x|y). Additionally, this preliminary phase provides a density function q_θ(x|y) that can be recast as a prior for the "unsupervised" problem, e.g., when only the observations y' ∼ p_Y'(y'), a likelihood model y'|x, and a prior on x are known. We then train another invertible generator with output density q'_ϕ(x|y') specifically for y', allowing us to sample from the posterior p_X|Y'(x|y'). We present synthetic results demonstrating a considerable training speedup when the pretrained network q_θ(x|y') is reused as a warm start, or preconditioning, for approximating p_X|Y'(x|y'), instead of learning from scratch. This training modality can be interpreted as an instance of transfer learning. The result is particularly relevant for large-scale inverse problems that rely on expensive numerical simulations.
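The two-step scheme can be illustrated with a deliberately tiny stand-in for a conditional normalizing flow. The sketch below is not the paper's network: it uses a one-dimensional conditional affine flow x = a·y + b + exp(s)·z with z ∼ N(0, 1), trained by gradient descent on the negative log-likelihood. A "supervised" phase fits parameters θ = (a, b, s) on paired samples (x, y); then, for observations y' drawn from a shifted distribution p_Y', fine-tuning from θ (warm start) is compared against training from scratch under the same small iteration budget. The toy forward model and all names here are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(n, y_mean):
    # Toy "physics": the unknown x relates to the observation y via
    # x = 2y + 1 + noise. Shifting y_mean mimics a distribution shift p_Y -> p_Y'.
    y = rng.normal(y_mean, 1.0, n)
    x = 2.0 * y + 1.0 + 0.5 * rng.normal(size=n)
    return x, y

def nll(params, x, y):
    # Negative log-likelihood of the affine flow x = a*y + b + exp(s)*z,
    # up to an additive constant.
    a, b, s = params
    z = (x - a * y - b) * np.exp(-s)
    return np.mean(0.5 * z**2 + s)

def grad(params, x, y):
    # Analytic gradient of nll with respect to (a, b, s).
    a, b, s = params
    r = x - a * y - b
    inv_var = np.exp(-2.0 * s)
    da = np.mean(-r * y) * inv_var
    db = np.mean(-r) * inv_var
    ds = np.mean(1.0 - r**2 * inv_var)
    return np.array([da, db, ds])

def train(params, x, y, steps, lr):
    params = params.copy()
    for _ in range(steps):
        params -= lr * grad(params, x, y)
    return params

# Supervised phase: learn q_theta(x|y) from paired samples (x, y).
x, y = make_data(5000, y_mean=0.0)
theta = train(np.zeros(3), x, y, steps=2000, lr=0.05)

# New observations y' drawn from a shifted distribution p_Y'.
xp, yp = make_data(5000, y_mean=3.0)

# Same small budget: warm start from theta vs. training from scratch.
warm = train(theta, xp, yp, steps=50, lr=0.01)
cold = train(np.zeros(3), xp, yp, steps=50, lr=0.01)
print("warm-start NLL:", nll(warm, xp, yp))
print("from-scratch NLL:", nll(cold, xp, yp))
```

Since the underlying forward relation is unchanged, the pretrained θ already sits near the optimum for the shifted observations, so the warm start reaches a low negative log-likelihood within a few steps, while the cold start is still far from convergence at the same budget — a one-dimensional analogue of the training speedup reported in the abstract.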


