Algebraic Learning: Towards Interpretable Information Modeling

03/13/2022
by Tong Owen Yang, et al.

With the proliferation of digital data collected by sensor technologies and a surge in computing power, Deep Learning (DL) approaches have drawn enormous attention over the past decade for their impressive performance in extracting complex relations from raw data and representing valuable information. Yet the value of DL remains debated because of its notorious black-box nature and the resulting lack of interpretability. On the one hand, DL exploits only the statistical features of raw data while ignoring human knowledge of the underlying system, which leads to both data inefficiency and trust issues; on the other hand, a trained DL model offers researchers no insight into the underlying system beyond its outputs, even though such insight is the essence of most fields of science, e.g. physics and economics. This thesis addresses the issue of interpretability in general information modeling and eases the problem from two directions. First, a problem-oriented perspective is applied to incorporate knowledge into modeling practice, where mathematical properties emerge naturally and cast constraints on the model. Second, given a trained model, various methods can be applied to extract further insights about the underlying system. These two pathways are termed guided model design and secondary measurements. Remarkably, a novel scheme emerges for modeling practice in statistical learning: Algebraic Learning (AgLr). Instead of being restricted to any specific model, AgLr starts from the idiosyncrasies of a learning task itself and studies the structure of a legitimate model class. This scheme demonstrates the noteworthy value of abstract algebra for general AI, which has been overlooked in recent progress, and could shed further light on interpretable information modeling.
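The guided-model-design pathway can be illustrated with a minimal, hypothetical sketch (not code from the thesis): when a learning task is known to be invariant under a symmetry group, the legitimate model class can be restricted to functions that respect that symmetry by construction. The sketch below assumes permutation invariance over set-valued inputs and enforces it with sum pooling in PyTorch; the names SetRegressor, phi, and rho are illustrative.

# A sketch of guided model design: restrict the hypothesis class to
# functions invariant under an assumed symmetry group of the task.
# Here the assumed symmetry is permutation of set-valued inputs,
# enforced by construction with Deep Sets style sum pooling.
import torch
import torch.nn as nn

class SetRegressor(nn.Module):
    """f(x_1, ..., x_n) = rho(sum_i phi(x_i)) is unchanged under any
    permutation of the inputs, so the symmetry holds by construction
    rather than being learned from data."""

    def __init__(self, in_dim: int, hidden: int = 64):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden))
        self.rho = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, set_size, in_dim); summing over the set axis is
        # the step that makes the model permutation-invariant
        return self.rho(self.phi(x).sum(dim=1))

model = SetRegressor(in_dim=3)
x = torch.randn(2, 5, 3)
perm = torch.randperm(5)
# Reordering the set elements leaves the output unchanged (up to
# floating-point error).
assert torch.allclose(model(x), model(x[:, perm]), atol=1e-5)

Building the invariance into the architecture, instead of hoping it is learned from data, is one concrete instance of the algebraic constraints on a model class that the abstract describes; the secondary-measurements pathway would then correspond to probing the trained phi and rho for further insight into the underlying system.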


Related research

02/05/2023  Explainable Machine Learning: The Importance of a System-Centric Perspective
The landscape in the context of several signal processing applications a...

10/16/2020  Semantics of the Black-Box: Can knowledge graphs help make deep learning systems more interpretable and explainable?
The recent series of innovations in deep learning (DL) have shown enormo...

12/05/2021  Explainable Deep Learning in Healthcare: A Methodological Survey from an Attribution View
The increasing availability of large collections of electronic health re...

07/14/2023  Looking deeper into interpretable deep learning in neuroimaging: a comprehensive survey
Deep learning (DL) models have been popular due to their ability to lear...

03/14/2021  A new interpretable unsupervised anomaly detection method based on residual explanation
Despite the superior performance in modeling complex patterns to address...

07/05/2023  Transgressing the boundaries: towards a rigorous understanding of deep learning and its (non-)robustness
The recent advances in machine learning in various fields of application...

04/25/2021  Deep Probabilistic Graphical Modeling
Probabilistic graphical modeling (PGM) provides a framework for formulat...
