Taylor approximation for chance constrained optimization problems governed by partial differential equations with high-dimensional random parameters

by Peng Chen et al.

We propose a fast and scalable optimization method to solve chance or probabilistic constrained optimization problems governed by partial differential equations (PDEs) with high-dimensional random parameters. To address the critical computational challenges of expensive PDE solves and high-dimensional uncertainty, we construct surrogates of the constraint function by Taylor approximation, which relies on efficient computation of the derivatives, low-rank approximation of the Hessian, and a randomized algorithm for eigenvalue decomposition. To tackle the non-differentiability of the inequality chance constraint, we use a smooth approximation of the discontinuous indicator function involved in the chance constraint, and apply a penalty method to transform the inequality-constrained optimization problem into an unconstrained one. Moreover, we design a gradient-based optimization scheme that gradually increases the smoothing and penalty parameters to achieve convergence, for which we present an efficient computation of the gradient of the approximate cost functional by the Taylor approximation. Based on numerical experiments for a problem in optimal groundwater management, we demonstrate the accuracy of the Taylor approximation, its ability to greatly accelerate constraint evaluations, the convergence of the continuation optimization scheme, and the scalability of the proposed method in terms of the number of PDE solves as the random parameter dimension increases from one thousand to hundreds of thousands.
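The smoothing–penalty continuation described in the abstract can be sketched on a toy chance-constrained problem. Everything below is an illustrative assumption, not the paper's PDE setting: the cost `J(z) = z^2`, the constraint `xi - z > 0` with a standard Gaussian `xi`, the sample size, and the doubling schedule for the smoothing parameter `beta` and penalty weight `gamma`.

```python
import numpy as np
from scipy.special import expit  # numerically stable logistic sigmoid

# Toy problem (illustrative, not the paper's setup):
#   minimize J(z) = z^2  subject to  P[xi - z > 0] <= alpha,  xi ~ N(0, 1).
# The exact solution is the (1 - alpha)-quantile of N(0, 1), about 1.645 for alpha = 0.05.
rng = np.random.default_rng(0)
xi = rng.standard_normal(10_000)   # Monte Carlo samples of the random parameter
alpha = 0.05                       # allowed probability of constraint violation

def grad_penalized(z, beta, gamma):
    """Gradient of J(z) + gamma * max(P_beta(z) - alpha, 0)^2, where P_beta
    replaces the indicator 1{xi - z > 0} by the smooth logistic expit(beta * s)."""
    s = xi - z
    sig = expit(beta * s)
    p_hat = sig.mean()                           # smoothed violation probability
    viol = max(p_hat - alpha, 0.0)
    dp_dz = (-beta * sig * (1.0 - sig)).mean()   # derivative of p_hat w.r.t. z
    return 2.0 * z + 2.0 * gamma * viol * dp_dz

z, beta, gamma = 0.0, 4.0, 10.0
for stage in range(8):             # continuation: solve, then tighten parameters
    for _ in range(2000):          # inner gradient descent at fixed (beta, gamma)
        z -= 1e-2 * grad_penalized(z, beta, gamma)
    beta *= 2.0                    # sharpen the indicator approximation
    gamma *= 2.0                   # strengthen the penalty

print(f"z = {z:.2f}, empirical P[xi > z] = {(xi > z).mean():.3f}")
```

Under this schedule, `z` climbs toward the 95% Gaussian quantile while the empirical violation probability settles near `alpha`. In the paper's setting, each evaluation of the constraint would require a PDE solve; the Taylor surrogate of the constraint function is what makes the many gradient steps of such a continuation scheme affordable.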




State-constrained Optimization Problems under Uncertainty: A Tensor Train Approach

We propose an algorithm to solve optimization problems constrained by pa...

Efficient PDE-Constrained optimization under high-dimensional uncertainty using derivative-informed neural operators

We propose a novel machine learning framework for solving optimization p...

TTRISK: Tensor Train Decomposition Algorithm for Risk Averse Optimization

This article develops a new algorithm named TTRISK to solve high-dimensi...

A fast and scalable computational framework for large-scale and high-dimensional Bayesian optimal experimental design

We develop a fast and scalable computational framework to solve large-sc...

Discretization and Machine Learning Approximation of BSDEs with a Constraint on the Gains-Process

We study the approximation of backward stochastic differential equations...

Implementing a smooth exact penalty function for general constrained nonlinear optimization

We build upon Estrin et al. (2019) to develop a general constrained nonl...

A globally convergent method to accelerate large-scale optimization using on-the-fly model hyperreduction: application to shape optimization

We present a numerical method to efficiently solve optimization problems...
