Accelerating numerical methods by gradient-based meta-solving

06/17/2022
by Sohei Arisaka, et al.

In science and engineering applications, it is often required to solve similar computational problems repeatedly. In such cases, we can utilize the data from previously solved problem instances to improve the efficiency of finding subsequent solutions. This offers a unique opportunity to combine machine learning (in particular, meta-learning) and scientific computing. To date, a variety of such domain-specific methods have been proposed in the literature, but a generic approach for designing these methods remains under-explored. In this paper, we tackle this issue by formulating a general framework to describe these problems, and propose a gradient-based algorithm to solve them in a unified way. As an illustration of this approach, we study the adaptive generation of parameters for iterative solvers to accelerate the solution of differential equations. We demonstrate the performance and versatility of our method through theoretical analysis and numerical experiments, including applications to incompressible flow simulations and an inverse problem of parameter estimation.
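The abstract's concrete instantiation is learning solver parameters by gradient descent across a distribution of problem instances. A minimal sketch of that idea, not the authors' implementation: meta-learn the relaxation weight of a weighted-Jacobi iteration over random linear systems, obtaining the gradient by forward-mode differentiation through the unrolled solver steps. The problem distribution, step counts, and learning rate below are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_problem(n=20):
    """Random strictly diagonally dominant system Ax = b, a stand-in
    for the discretized differential equations in the paper."""
    A = rng.normal(size=(n, n)) + n * np.eye(n)
    b = rng.normal(size=n)
    return A, b

def loss_and_grad(omega, A, b, steps=5):
    """Run `steps` weighted-Jacobi iterations,
        x_{k+1} = x_k + omega * D^{-1} (b - A x_k),
    and return the final residual loss plus its derivative w.r.t. omega,
    computed by forward-mode differentiation through the unrolled loop."""
    d_inv = 1.0 / np.diag(A)
    x = np.zeros_like(b)
    dx = np.zeros_like(b)  # dx/d(omega), carried through the iterations
    for _ in range(steps):
        r = b - A @ x
        # Update the derivative first: it uses the residual at the current x.
        dx = dx + d_inv * r - omega * d_inv * (A @ dx)
        x = x + omega * d_inv * r
    r = b - A @ x
    loss = 0.5 * float(r @ r)
    grad = float(-(A @ dx) @ r)  # chain rule: dL/d(omega) = r . dr/d(omega)
    return loss, grad

# "Meta-train" the solver parameter over a distribution of instances.
problems = [sample_problem() for _ in range(32)]
omega = 0.3  # deliberately poor initial relaxation weight
initial_loss = np.mean([loss_and_grad(omega, A, b)[0] for A, b in problems])
for _ in range(400):  # plain gradient descent on the meta-objective
    g = np.mean([loss_and_grad(omega, A, b)[1] for A, b in problems])
    omega -= 0.05 * np.clip(g, -1.0, 1.0)  # clip to keep steps stable
final_loss = np.mean([loss_and_grad(omega, A, b)[0] for A, b in problems])
print(f"learned omega = {omega:.3f}")
```

The same pattern generalizes to the paper's setting by replacing the scalar relaxation weight with the output of a neural network conditioned on the problem instance, and the hand-derived forward-mode gradient with automatic differentiation.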


Related research

- Personalized Algorithm Generation: A Case Study in Meta-Learning ODE Integrators (05/04/2021): "We study the meta-learning of numerical algorithms for scientific comput..."
- Multi-Objective Meta Learning (02/14/2021): "Meta learning with multiple objectives can be formulated as a Multi-Obje..."
- Meta-Learning for Airflow Simulations with Graph Neural Networks (06/18/2023): "The field of numerical simulation is of significant importance for the d..."
- Provable Guarantees for Gradient-Based Meta-Learning (02/27/2019): "We study the problem of meta-learning through the lens of online convex..."
- Learning to Select Pivotal Samples for Meta Re-weighting (02/09/2023): "Sample re-weighting strategies provide a promising mechanism to deal wit..."
- META-SMGO-Δ: similarity as a prior in black-box optimization (04/30/2023): "When solving global optimization problems in practice, one often ends up..."
- A Recursively Recurrent Neural Network (R2N2) Architecture for Learning Iterative Algorithms (11/22/2022): "Meta-learning of numerical algorithms for a given task consists of the da..."
