Is a Transformed Low Discrepancy Design Also Low Discrepancy?

by Yiou Li et al.

Experimental designs intended to match arbitrary target distributions are typically constructed via a variable transformation of a uniform experimental design; the inverse distribution function is one such transformation. The discrepancy measures how well the empirical distribution of a design matches its target distribution. This chapter addresses the question of whether a variable transformation of a low discrepancy uniform design yields a low discrepancy design for the desired target distribution. The answer depends on the two kernel functions used to define the respective discrepancies. If these kernels satisfy certain conditions, then the answer is yes. However, these conditions may be undesirable for practical reasons, and in that case the transformation of a low discrepancy uniform design may yield a design with a large discrepancy. We illustrate how this may occur and suggest some remedies. One remedy is to ensure that the original uniform design has optimal one-dimensional projections, but this works best if the design is dense, that is, if the ratio of the sample size to the dimension of the random variable is relatively large. Another remedy is to use the transformed design as the input to a coordinate-exchange algorithm that optimizes the desired discrepancy; this works for both dense and sparse designs. The effectiveness of these two remedies is illustrated via simulation.
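The transformation described above can be sketched in a few lines. The following is an illustrative example, not the authors' code: it generates a low discrepancy uniform design (scrambled Sobol' points), applies the inverse distribution function of a standard normal target, and reports the centered L2 discrepancy of the uniform design. The dimension, sample size, and normal target are assumptions chosen for illustration.

```python
# Illustrative sketch (assumed parameters, not the paper's code):
# transform a low discrepancy uniform design to a target distribution
# via the inverse distribution function.
import numpy as np
from scipy.stats import norm, qmc

d = 4    # dimension of the random variable (assumed for illustration)
n = 256  # sample size (a power of 2, as Sobol' sequences prefer)

# Low discrepancy uniform design on [0, 1)^d: scrambled Sobol' points
sobol = qmc.Sobol(d=d, scramble=True, seed=7)
u = sobol.random(n)

# Inverse distribution function transformation to a standard normal target
x = norm.ppf(u)

# The uniform design has a small centered L2 discrepancy by construction...
cd_uniform = qmc.discrepancy(u, method="CD")
print(f"centered L2 discrepancy of uniform design: {cd_uniform:.5f}")
# ...but, as the chapter shows, the transformed design x need not have a
# small discrepancy with respect to the kernel defining the target measure.
```

Whether `x` is itself low discrepancy depends on the kernels defining the two discrepancies, which is precisely the question the chapter studies.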




