
Zonotope hit-and-run for efficient sampling from projection DPPs

05/30/2017
by Guillaume Gautier, et al.

Determinantal point processes (DPPs) are distributions over sets of items that model diversity using kernels. Their applications in machine learning include summary extraction and recommendation systems. Yet, the cost of sampling from a DPP is prohibitive in large-scale applications, which has triggered an effort towards efficient approximate samplers. We build a novel MCMC sampler that combines ideas from combinatorial geometry, linear programming, and Monte Carlo methods to sample from DPPs with a fixed sample cardinality, also called projection DPPs. Our sampler leverages the ability of the hit-and-run MCMC kernel to efficiently move across convex bodies. Previous theoretical results yield a fast mixing time of our chain when targeting a distribution that is close to a projection DPP, but not a DPP in general. Our empirical results demonstrate that this extends to sampling projection DPPs: our sampler is more sample-efficient than previous approaches, which in turn translates to faster convergence when dealing with costly-to-evaluate functions, such as summary extraction in our experiments.
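To make the hit-and-run idea concrete, below is a minimal sketch of one hit-and-run move inside a zonotope Z(A) = {Ax : x in [0,1]^n}, with the chord endpoints found by linear programs. This is only an illustration of the base kernel moving across a convex body, not the authors' full sampler: the paper targets a non-uniform distribution close to a projection DPP, which would add an acceptance step on top of the uniform chord move shown here. Function names (chord_extent, hit_and_run_step) and the toy zonotope are illustrative assumptions.

import numpy as np
from scipy.optimize import linprog

def chord_extent(A, z, d, sense):
    """Solve max/min t s.t. A x = z + t d, 0 <= x <= 1, as an LP over (x, t)."""
    n = A.shape[1]
    c = np.zeros(n + 1)
    c[-1] = -1.0 if sense == "max" else 1.0     # objective acts on t only
    A_eq = np.hstack([A, -d[:, None]])          # A x - t d = z
    bounds = [(0.0, 1.0)] * n + [(None, None)]  # x in the unit cube, t free
    res = linprog(c, A_eq=A_eq, b_eq=z, bounds=bounds, method="highs")
    return res.x[-1]

def hit_and_run_step(A, z, rng):
    """One hit-and-run move: random direction, then a uniform point on the chord."""
    d = rng.standard_normal(A.shape[0])
    d /= np.linalg.norm(d)
    t_min = chord_extent(A, z, d, "min")
    t_max = chord_extent(A, z, d, "max")
    return z + rng.uniform(t_min, t_max) * d

# Toy example: a random 2x5 zonotope, starting from its centre A @ (1/2, ..., 1/2).
rng = np.random.default_rng(0)
A = rng.uniform(size=(2, 5))
z = A @ np.full(5, 0.5)
for _ in range(100):
    z = hit_and_run_step(A, z, rng)

Because each move only requires solving small LPs for the chord endpoints, the per-step cost stays modest even when the zonotope lives in a moderately high-dimensional ambient space.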


09/10/2018

The Coordinate Sampler: A Non-Reversible Gibbs-like MCMC Sampler

In this article, we derive a novel non-reversible, continuous-time Marko...
07/09/2023

SpreadNUTS – Moderate Dynamic Extension of Paths for No-U-Turn Sampling Partitioning Visited Regions

Markov chain Monte Carlo (MCMC) methods have existed for a long time and...
09/08/2021

LSB: Local Self-Balancing MCMC in Discrete Spaces

Markov Chain Monte Carlo (MCMC) methods are promising solutions to sampl...
12/23/2022

Design of Hamiltonian Monte Carlo for perfect simulation of general continuous distributions

Hamiltonian Monte Carlo (HMC) is an efficient method of simulating smoot...
07/08/2020

Learning from DPPs via Sampling: Beyond HKPV and symmetry

Determinantal point processes (DPPs) have become a significant tool for ...
06/08/2023

Entropy-based Training Methods for Scalable Neural Implicit Sampler

Efficiently sampling from un-normalized target distributions is a fundam...
10/24/2020

On Testing of Samplers

Given a set of items ℱ and a weight function 𝚠𝚝: ℱ↦ (0,1), the problem o...