One-Shot Decision-Making with and without Surrogates

by Jakob Bossek, et al.

One-shot decision making is required in situations in which we can evaluate a fixed number of solution candidates but have no possibility for further, adaptive sampling. Such settings are frequently encountered in neural network design, hyper-parameter optimization, and many simulation-based real-world optimization tasks, in which evaluations are costly and time is scarce. It seems intuitive that well-distributed samples should be more meaningful in one-shot decision-making settings than uniform or grid-based samples, since they provide better coverage of the decision space. In practice, quasi-random designs such as Latin Hypercube Samples and low-discrepancy point sets do indeed form the state of the art, as confirmed by a number of recent studies and competitions. In this work we take a closer look at the correlation between the distribution of quasi-random designs and their performance in one-shot decision-making tasks, with the goal of investigating whether the assumed correlation between uniformity of distribution and performance can be confirmed. We study three different decision tasks: classic one-shot optimization (only the best sample matters), one-shot optimization with surrogates (allowing the use of surrogate models to select a design that need not be one of the evaluated samples), and one-shot regression (i.e., function approximation, with minimization of mean squared error as objective). Our results confirm an advantage of low-discrepancy designs in all three settings. The overall correlation, however, is rather weak. We complement our study by evolving problem-specific samples that show significantly better performance on the regression task than the standard approaches based on low-discrepancy sequences, giving strong indication that significant performance gains over state-of-the-art one-shot sampling techniques are possible.
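The first setting described above, classic one-shot optimization, can be sketched in a few lines: draw a fixed budget of samples, evaluate each exactly once, and keep the best. The sketch below is a minimal illustration, not the paper's experimental code; it implements Latin Hypercube sampling by hand in numpy and compares it against plain uniform sampling on a hypothetical shifted-sphere test function (an assumption for illustration, not a benchmark from the study).

```python
import numpy as np

def latin_hypercube(n, d, rng):
    """Latin Hypercube Sample in [0, 1)^d: each dimension is split into
    n equal strata, and exactly one point lands in each stratum."""
    jitter = rng.random((n, d))  # random offset within each stratum
    perms = np.column_stack([rng.permutation(n) for _ in range(d)])
    return (perms + jitter) / n

def one_shot_minimize(f, samples):
    """Classic one-shot optimization: evaluate every sample once,
    return the best one -- no adaptive refinement is allowed."""
    values = np.array([f(x) for x in samples])
    best = int(np.argmin(values))
    return samples[best], values[best]

# Hypothetical test function: a sphere shifted away from the grid center.
def sphere(x):
    return float(np.sum((x - 0.35) ** 2))

rng = np.random.default_rng(0)
n, d = 64, 3
for name, design in [("uniform", rng.random((n, d))),
                     ("LHS", latin_hypercube(n, d, rng))]:
    x_best, f_best = one_shot_minimize(sphere, design)
    print(f"{name:8s} best f = {f_best:.4f}")
```

The surrogate variant studied in the paper differs only in the last step: instead of returning the best evaluated sample, one would fit a model to the (sample, value) pairs and return the model's minimizer, which need not coincide with any evaluated point.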


SF-SFD: Stochastic Optimization of Fourier Coefficients to Generate Space-Filling Designs

Due to the curse of dimensionality, it is often prohibitively expensive ...

Controlled Random Search Improves Sample Mining and Hyper-Parameter Optimization

A common challenge in machine learning and related fields is the need to...

Effective Reinforcement Learning through Evolutionary Surrogate-Assisted Prescription

There is now significant historical data available on decision making in...

Discrepancy-based Inference for Intractable Generative Models using Quasi-Monte Carlo

Intractable generative models are models for which the likelihood is una...

Is a Transformed Low Discrepancy Design Also Low Discrepancy?

Experimental designs intended to match arbitrary target distributions ar...

Convergence Analysis of Stochastic Kriging-Assisted Simulation with Random Covariates

We consider performing simulation experiments in the presence of covaria...
