Fast and Provably Convergent Algorithms for Gromov-Wasserstein in Graph Learning

05/17/2022
by   Jiajin Li, et al.

In this paper, we study the design and analysis of a class of efficient algorithms for computing the Gromov-Wasserstein (GW) distance, tailored to large-scale graph learning tasks. Armed with the Luo-Tseng error bound condition <cit.>, we prove that the two proposed algorithms, Bregman Alternating Projected Gradient (BAPG) and hybrid Bregman Proximal Gradient (hBPG), are (linearly) convergent. Building on task-specific properties, our analysis further provides novel theoretical insights into how to select the best-fitting method. As a result, we are able to conduct comprehensive experiments validating the effectiveness of our methods on a host of tasks, including graph alignment, graph partitioning, and shape matching. In terms of both wall-clock time and modeling performance, the proposed methods achieve state-of-the-art results.
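
To make the flavor of a BAPG-style method concrete, below is a minimal NumPy sketch for the squared-loss GW problem: an entropic (Bregman) mirror-descent step on the coupling, followed by a single alternating Bregman projection onto one of the two marginal constraints per iteration. This is an illustration under stated assumptions, not the paper's implementation; the function name, step size, iteration count, and stopping rule are all assumptions.

```python
import numpy as np

def bapg_gw_sketch(C1, C2, p, q, step=0.01, n_iter=500):
    """Illustrative BAPG-style sketch for the squared-loss GW problem
        min_T  -2 * <C1 @ T @ C2, T>   s.t.  T @ 1 = p,  T.T @ 1 = q,
    with constant terms in T dropped and C1, C2 assumed symmetric.
    All hyperparameters here are illustrative, not tuned settings."""
    T = np.outer(p, q)  # feasible product-coupling initialization
    for it in range(n_iter):
        grad = -4.0 * C1 @ T @ C2        # gradient of the quadratic GW term
        T = T * np.exp(-step * grad)     # entropic (Bregman) mirror-descent step
        # Alternate a single Bregman (KL) projection per iteration,
        # i.e., rescale rows or columns to match one marginal at a time:
        if it % 2 == 0:
            T *= (p / T.sum(axis=1))[:, None]   # project onto {T @ 1 = p}
        else:
            T *= (q / T.sum(axis=0))[None, :]   # project onto {T.T @ 1 = q}
    return T
```

Performing only one cheap marginal rescaling per gradient step (rather than a full Sinkhorn-style projection) is what makes this style of update attractive at scale; hBPG, as described in the abstract, additionally hybridizes Bregman proximal-gradient steps, for which we refer to the full paper.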

Related research

03/12/2023 · A Convergent Single-Loop Algorithm for Relaxation of Gromov-Wasserstein in Graph Data
In this work, we present the Bregman Alternating Projected Gradient (BAP...

02/09/2023 · Outlier-Robust Gromov Wasserstein for Graph Data
Gromov Wasserstein (GW) distance is a powerful tool for comparing and al...

02/12/2021 · Two-sample Test with Kernel Projected Wasserstein Distance
We develop a kernel projected Wasserstein distance for the two-sample te...

03/12/2020 · Wasserstein-based Graph Alignment
We propose a novel method for comparing non-aligned graphs of different ...

01/25/2022 · On the Feasible Region of Efficient Algorithms for Attributed Graph Alignment
Graph alignment aims at finding the vertex correspondence between two co...

12/18/2021 · FlowPool: Pooling Graph Representations with Wasserstein Gradient Flows
In several machine learning tasks for graph structured data, the graphs ...
