Topos and Stacks of Deep Neural Networks

06/28/2021
by Jean-Claude Belfiore, et al.

Every known artificial deep neural network (DNN) corresponds to an object in a canonical Grothendieck topos; its learning dynamic corresponds to a flow of morphisms in this topos. Invariance structures in the layers (as in CNNs or LSTMs) correspond to Giraud's stacks. This invariance is conjectured to be responsible for the generalization property, that is, extrapolation from learning data under constraints. The fibers represent pre-semantic categories (Culioli, Thom), over which artificial languages are defined, with internal logics: intuitionistic, classical, or linear (Girard). The semantic functioning of a network is its ability to express theories in such a language for answering questions in output about input data. Quantities and spaces of semantic information are defined by analogy with the homological interpretation of Shannon's entropy (P. Baudot and D. Bennequin, 2015); they generalize the measures found by Carnap and Bar-Hillel (1952). Remarkably, these semantic structures are classified by geometric fibrant objects in a Quillen closed model category, and they thus give rise to homotopical invariants of DNNs and of their semantic functioning. Intensional type theories (Martin-Löf) organize these objects and the fibrations between them. Information contents and exchanges are analyzed by Grothendieck's derivators.
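The Carnap–Bar-Hillel measures mentioned in the abstract can be made concrete. The sketch below (an illustration only, assuming the simplest setting of their 1952 paper: a uniform "logical probability" m over state descriptions, i.e., truth assignments to a finite set of atoms) computes the content measure cont(s) = 1 − m(s) and the information measure inf(s) = −log₂ m(s) for a propositional sentence; the helper name `semantic_measures` is of course not from the paper:

```python
from itertools import product
from math import log2

def semantic_measures(sentence, atoms):
    """Carnap-Bar-Hillel measures under a uniform logical probability m
    over state descriptions (truth assignments to the given atoms).

    sentence: a predicate mapping an assignment dict {atom: bool} -> bool.
    Returns (cont, inf) where cont(s) = 1 - m(s) and inf(s) = -log2 m(s).
    """
    # Enumerate all 2^n state descriptions.
    worlds = [dict(zip(atoms, vals))
              for vals in product([False, True], repeat=len(atoms))]
    # m(s): fraction of state descriptions in which the sentence holds.
    m = sum(sentence(w) for w in worlds) / len(worlds)
    cont = 1 - m                                # content measure cont(s) = m(not s)
    inf = -log2(m) if m > 0 else float("inf")   # information measure in bits
    return cont, inf

# Example: the sentence "A and B" over two atoms holds in 1 of 4 worlds,
# so m = 1/4, cont = 0.75, inf = 2 bits.
cont, inf_ = semantic_measures(lambda w: w["A"] and w["B"], ["A", "B"])
```

A tautology gets m = 1, hence cont = 0 and inf = 0: a sentence true in every world carries no semantic information, which is the intuition the paper's homotopical invariants are said to generalize.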


