Meta-SAGE: Scale Meta-Learning Scheduled Adaptation with Guided Exploration for Mitigating Scale Shift on Combinatorial Optimization

06/05/2023
by Jiwoo Son, et al.

This paper proposes Meta-SAGE, a novel approach for improving the scalability of deep reinforcement learning models on combinatorial optimization (CO) tasks. Our method adapts pre-trained models to larger-scale problems at test time through two components: a scale meta-learner (SML) and scheduled adaptation with guided exploration (SAGE). First, SML transforms the context embedding based on scale information to prepare it for the subsequent SAGE adaptation. Then, SAGE adjusts the model parameters dedicated to the context embedding for a specific instance. SAGE introduces a locality bias, which encourages the model to select nearby locations when determining the next location. The locality bias gradually decays as the model adapts to the target instance. Results show that Meta-SAGE outperforms previous adaptation methods and significantly improves scalability on representative CO tasks. Our source code is available at https://github.com/kaist-silab/meta-sage
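
To make the locality-bias idea concrete, below is a minimal NumPy sketch of guided exploration with a scheduled bias: nearby locations receive higher scores when choosing the next location, and the bias strength decays over adaptation steps. The helper names (locality_biased_logits, decayed_alpha) and the linear decay schedule are illustrative assumptions, not the paper's exact formulation; the actual implementation is in the repository linked above.

```python
import numpy as np

def locality_biased_logits(logits, coords, current_idx, alpha):
    """Add a locality bias so nearer locations get higher scores.
    Hypothetical helper; the paper's bias may take a different form."""
    dists = np.linalg.norm(coords - coords[current_idx], axis=1)
    return logits - alpha * dists

def decayed_alpha(step, total_steps, alpha0=1.0):
    """Linearly decay the bias strength over adaptation steps.
    The paper only states the bias gradually decays; the linear
    schedule here is an assumption."""
    return alpha0 * (1.0 - step / total_steps)

# Toy usage: 50 random locations, 10 adaptation steps.
rng = np.random.default_rng(0)
coords = rng.random((50, 2))
logits = rng.normal(size=50)      # stand-in for the model's raw scores
current_idx = 0

for step in range(10):
    alpha = decayed_alpha(step, total_steps=10)
    biased = locality_biased_logits(logits, coords, current_idx, alpha)
    probs = np.exp(biased - biased.max())
    probs /= probs.sum()
    next_idx = int(rng.choice(len(probs), p=probs))
    # In Meta-SAGE, the instance-specific parameters tied to the
    # context embedding would also be updated here (omitted).
```

Early in adaptation the bias strongly steers the policy toward local moves; as alpha approaches zero, the adapted model's own scores take over.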

Related research

10/08/2022
DIMES: A Differentiable Meta Solver for Combinatorial Optimization Problems
Recently, deep reinforcement learning (DRL) models have shown promising ...

10/08/2018
CAML: Fast Context Adaptation via Meta-Learning
We propose CAML, a meta-learning method for fast adaptation that partiti...

10/16/2018
ProMP: Proximal Meta-Policy Search
Credit assignment in Meta-reinforcement learning (Meta-RL) is still poor...

05/06/2021
Meta-Learning-based Deep Reinforcement Learning for Multiobjective Optimization Problems
Deep reinforcement learning (DRL) has recently shown its success in tack...

08/24/2021
Adaptation-Agnostic Meta-Training
Many meta-learning algorithms can be formulated into an interleaved proc...

02/01/2023
Efficient Meta-Learning via Error-based Context Pruning for Implicit Neural Representations
We introduce an efficient optimization-based meta-learning technique for...

03/26/2020
On-the-Fly Adaptation of Source Code Models using Meta-Learning
The ability to adapt to unseen, local contexts is an important challenge...
