Persistence-based operators in machine learning

12/28/2022
by Mattia G. Bergomi et al.

Artificial neural networks can learn complex, salient data features to achieve a given task. On the opposite end of the spectrum, mathematically grounded methods such as topological data analysis allow users to design analysis pipelines that are fully aware of data constraints and symmetries. We introduce a class of persistence-based neural network layers. Persistence-based layers allow users to easily inject knowledge about symmetries (equivariance) respected by the data, are equipped with learnable weights, and can be composed with state-of-the-art neural architectures.
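The abstract does not include code, so the sketch below is only a rough illustration of the general idea of a persistence-based layer with learnable weights, not the authors' construction. It is written in PyTorch; the names PersistenceLayer and sublevel_persistence_0d are hypothetical, and 0-dimensional sublevel-set persistence of a 1D signal stands in for the more general persistence-based operators described above. A learnable filtration reparametrises the input, a union-find sweep extracts persistence lifetimes, and a fixed-size summary is passed to downstream layers.

import torch
import torch.nn as nn


def sublevel_persistence_0d(values):
    # 0-dimensional persistence lifetimes of the sublevel-set filtration of a
    # 1D sequence, computed with a union-find over vertices sorted by value.
    # For a fixed pairing, lifetimes are differentiable in the input values.
    n = values.shape[0]
    order = torch.argsort(values)          # sweep vertices from low to high
    parent = list(range(n))
    birth = [None] * n                     # index of each component's minimum
    alive = [False] * n
    lifetimes = []

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for idx in order.tolist():
        alive[idx] = True
        birth[idx] = idx
        for nb in (idx - 1, idx + 1):
            if 0 <= nb < n and alive[nb]:
                ra, rb = find(idx), find(nb)
                if ra != rb:
                    # elder rule: the component with the larger birth value dies
                    if values[birth[ra]] < values[birth[rb]]:
                        ra, rb = rb, ra
                    lifetime = values[idx] - values[birth[ra]]
                    if lifetime > 0:       # drop diagonal (zero-lifetime) pairs
                        lifetimes.append(lifetime)
                    parent[ra] = rb
    return torch.stack(lifetimes) if lifetimes else values.new_zeros(0)


class PersistenceLayer(nn.Module):
    # Learnable filtration (here a 1x1 convolution) followed by a fixed-size
    # persistence summary: the k largest lifetimes, zero-padded.

    def __init__(self, k=8):
        super().__init__()
        self.filtration = nn.Conv1d(1, 1, kernel_size=1)
        self.k = k

    def forward(self, x):                  # x: (batch, length)
        f = self.filtration(x.unsqueeze(1)).squeeze(1)
        feats = []
        for signal in f:
            lt = sublevel_persistence_0d(signal)
            lt, _ = torch.sort(lt, descending=True)
            padded = torch.zeros(self.k, device=x.device)
            m = min(self.k, lt.numel())
            padded[:m] = lt[:m]
            feats.append(padded)
        return torch.stack(feats)          # (batch, k), consumable by later layers


if __name__ == "__main__":
    layer = PersistenceLayer(k=4)
    x = torch.randn(2, 32)
    out = layer(x)
    out.sum().backward()                   # gradients reach the filtration weights
    print(out.shape)                       # torch.Size([2, 4])

Because each lifetime is a difference of filtration values, gradients flow back to the learnable filtration for a fixed pairing, which is what allows a layer of this kind to be trained end to end alongside standard architectures.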


