Topology Optimization under Uncertainty using a Stochastic Gradient-based Approach

02/11/2019
by Subhayan De et al.

Topology optimization under uncertainty (TOuU) often defines objectives and constraints in terms of statistical moments of geometric and physical quantities of interest. Most traditional TOuU methods use gradient-based optimization algorithms and rely on accurate estimates of the statistical moments and their gradients, e.g., via adjoint calculations. When the number of uncertain inputs is large or the quantities of interest exhibit large variability, many adjoint (and/or forward) solves may be required to ensure the accuracy of these gradients. The optimization procedure itself often requires a large number of iterations, which may render the TOuU problem computationally expensive, if not infeasible, owing to the overall number of adjoint (and/or forward) solves required. To tackle this difficulty, we propose an optimization approach that generates a stochastic approximation of the objective, constraints, and their gradients via a small number of adjoint (and/or forward) solves per optimization iteration. A statistically independent (stochastic) approximation of these quantities is generated at each optimization iteration. This approach results in a total cost that is only a small factor larger than that of the corresponding deterministic TO problem. First, we investigate the stochastic gradient descent (SGD) method and a number of its variants, which have been successfully applied to large-scale optimization problems in machine learning. Second, we study the use of this stochastic approximation within conventional nonlinear programming methods, focusing on the Globally Convergent Method of Moving Asymptotes (GCMMA), a scheme widely used in deterministic TO. The performance of these algorithms is investigated on structural design problems using Solid Isotropic Material with Penalization (SIMP) based optimization, as well as an explicit level set method.
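To make the idea concrete, the sketch below shows one way such a stochastic-gradient loop can look; it is not the authors' implementation. The objective J(rho) = E_theta[f(rho, theta)] over SIMP-style densities rho in [0, 1] is approximated at each iteration from a few fresh, independent samples of the uncertain input theta. The function grad_f is a hypothetical placeholder for the per-sample gradient that a real topology optimization code would obtain from one forward plus one adjoint solve.

```python
import numpy as np

rng = np.random.default_rng(0)

def grad_f(rho, theta):
    """Hypothetical per-sample gradient of f(rho, theta) w.r.t. rho.
    In a real TO code this would come from one forward + adjoint solve;
    here a simple quadratic model stands in for it."""
    return 2.0 * (rho - theta)

def sgd_touu(rho0, n_iters=200, batch=4, step=0.05):
    rho = rho0.copy()
    for _ in range(n_iters):
        # Draw a small batch of fresh, statistically independent samples
        # of the uncertain input at every iteration, so the per-iteration
        # cost is a small multiple of one deterministic solve.
        thetas = rng.normal(loc=0.5, scale=0.1, size=(batch, rho.size))
        g = np.mean([grad_f(rho, th) for th in thetas], axis=0)
        rho -= step * g                # stochastic gradient step
        rho = np.clip(rho, 0.0, 1.0)   # keep densities in the SIMP box
    return rho

rho_opt = sgd_touu(np.full(10, 0.5))
```

Because each batch is sampled independently, the gradient estimate is unbiased but noisy; the paper's point is that SGD-type methods (and, with care, GCMMA) tolerate this noise, so a few solves per iteration suffice instead of a full moment estimate.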
