
Learning the Parameters of Determinantal Point Process Kernels

by Raja Hafiz Affandi, et al.

Determinantal point processes (DPPs) are well-suited for modeling repulsion and have proven useful in many applications where diversity is desired. While DPPs have many appealing properties, such as efficient sampling, learning the parameters of a DPP is still considered a difficult problem due to the non-convex nature of the likelihood function. In this paper, we propose using Bayesian methods to learn the DPP kernel parameters. These methods are applicable in large-scale and continuous DPP settings even when the exact form of the eigendecomposition is unknown. We demonstrate the utility of our DPP learning methods in studying the progression of diabetic neuropathy based on the spatial distribution of nerve fibers, and in studying human perception of diversity in images.
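To make the likelihood the abstract refers to concrete: for a discrete L-ensemble DPP with kernel matrix L, the probability of observing a subset S is det(L_S) / det(L + I), where L_S is the submatrix indexed by S. A minimal sketch in NumPy (the kernel, its length-scale parameter, and the toy points are illustrative assumptions, not the paper's setup):

```python
import numpy as np

def dpp_log_likelihood(L, subset):
    """Log-likelihood of a subset under a discrete L-ensemble DPP:
    log P(Y = S) = log det(L_S) - log det(L + I)."""
    L_S = L[np.ix_(subset, subset)]
    _, logdet_S = np.linalg.slogdet(L_S)
    _, logdet_Z = np.linalg.slogdet(L + np.eye(L.shape[0]))
    return logdet_S - logdet_Z

# Toy squared-exponential kernel over 1-D points; the length-scale is
# a hypothetical parameter of the kind the paper's Bayesian methods learn.
pts = np.linspace(0, 1, 5)
length_scale = 0.3
L = np.exp(-(pts[:, None] - pts[None, :]) ** 2 / (2 * length_scale ** 2))

# Repulsion: a spread-out subset is more likely than a clustered one.
spread = dpp_log_likelihood(L, [0, 2, 4])
clustered = dpp_log_likelihood(L, [0, 1, 2])
```

The non-convexity mentioned above arises because this likelihood, viewed as a function of kernel parameters such as the length-scale, generally has multiple local optima; the paper's Bayesian approach samples from the posterior over those parameters instead of maximizing.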


Inference for determinantal point processes without spectral knowledge

Determinantal point processes (DPPs) are point process models that natur...

Adaptive Sampling for Stochastic Risk-Averse Learning

We consider the problem of training machine learning models in a risk-av...

Approximate Inference in Continuous Determinantal Point Processes

Determinantal point processes (DPPs) are random point processes well-sui...

Diversity sampling is an implicit regularization for kernel methods

Kernel methods have achieved very good performance on large scale regres...

Expectation-Maximization for Learning Determinantal Point Processes

A determinantal point process (DPP) is a probabilistic model of set dive...

Determinantal Point Processes for Coresets

When one is faced with a dataset too large to be used all at once, an ob...