Learning Log-Determinant Divergences for Positive Definite Matrices

04/13/2021
by Anoop Cherian, et al.

Representations in the form of Symmetric Positive Definite (SPD) matrices have been popularized in a variety of visual learning applications due to their demonstrated ability to capture rich second-order statistics of visual data. There exist several similarity measures for comparing SPD matrices, each with documented benefits. However, selecting an appropriate measure for a given problem remains a challenge and, in most cases, is the result of a trial-and-error process. In this paper, we propose to learn similarity measures in a data-driven manner. To this end, we capitalize on the αβ-log-det divergence, a meta-divergence parametrized by scalars α and β that subsumes a wide family of popular information divergences on SPD matrices for distinct and discrete values of these parameters. Our key idea is to cast these parameters in a continuum and learn them from data. We systematically extend this idea to learn vector-valued parameters, thereby increasing the expressiveness of the underlying non-linear measure. We conjoin the divergence learning problem with several standard tasks in machine learning, including supervised discriminative dictionary learning and unsupervised SPD matrix clustering. We present Riemannian gradient descent schemes for optimizing our formulations efficiently, and show the usefulness of our method on eight standard computer vision tasks.
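For concreteness, the scalar-parametrized αβ-log-det divergence admits a closed form in terms of the eigenvalues λ_i of Q^{-1}P: D^{(α,β)}(P‖Q) = (1/(αβ)) Σ_i log[(α λ_i^β + β λ_i^{-α}) / (α + β)], valid for α, β ≠ 0 and α + β ≠ 0; particular parameter choices recover standard divergences such as the S-divergence (up to scale). The sketch below is a minimal, illustrative implementation of that closed form in Python, assuming this standard formula from the αβ-log-det literature; the function name, interface, and usage check are hypothetical and are not the authors' code.

```python
import numpy as np
from scipy.linalg import eigh

def ab_logdet_divergence(P, Q, alpha, beta):
    """Illustrative alpha-beta log-det divergence D^(alpha,beta)(P || Q) for SPD P, Q.

    Assumes the closed form
        (1 / (alpha * beta)) * sum_i log((alpha * l_i**beta + beta * l_i**(-alpha)) / (alpha + beta)),
    where l_i are the eigenvalues of Q^{-1} P (requires alpha, beta != 0 and alpha + beta != 0).
    """
    # Generalized eigenvalues of the pencil (P, Q) coincide with the eigenvalues of Q^{-1} P.
    lam = eigh(P, Q, eigvals_only=True)
    terms = np.log((alpha * lam**beta + beta * lam**(-alpha)) / (alpha + beta))
    return terms.sum() / (alpha * beta)

# Hypothetical usage check on random SPD matrices: the divergence of a matrix with itself is 0.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5)); P = A @ A.T + 5 * np.eye(5)
B = rng.standard_normal((5, 5)); Q = B @ B.T + 5 * np.eye(5)
print(ab_logdet_divergence(P, Q, alpha=0.7, beta=0.3))  # non-negative
print(ab_logdet_divergence(P, P, alpha=0.7, beta=0.3))  # ~0.0
```

In the paper's setting, α and β (or their vector-valued generalizations) would be treated as learnable quantities and optimized with Riemannian gradient descent rather than fixed by hand as in this sketch.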

Related research

08/05/2017
Learning Discriminative Alpha-Beta-divergence for Positive Definite Matrices (Extended Version)
Symmetric positive definite (SPD) matrices are useful for capturing ...

07/10/2015
Riemannian Dictionary Learning and Sparse Coding for Positive Definite Matrices
Data encoded as symmetric positive definite (SPD) matrices frequently ...

10/08/2011
Positive definite matrices and the S-divergence
Positive definite matrices abound in a dazzling variety of applications ...

08/15/2016
A Riemannian Network for SPD Matrix Learning
Symmetric Positive Definite (SPD) matrix learning methods have become ...

09/09/2015
Dictionary Learning and Sparse Coding for Third-order Super-symmetric Tensors
Super-symmetric tensors - a higher-order extension of scatter matrices - ...

10/13/2016
Infinite-dimensional Log-Determinant divergences II: Alpha-Beta divergences
This work presents a parametrized family of divergences, namely Alpha-Beta ...

04/24/2022
Unsupervised Learning Discriminative MIG Detectors in Nonhomogeneous Clutter
Principal component analysis (PCA) is a commonly used pattern analysis ...
