Adaptive n-ary Activation Functions for Probabilistic Boolean Logic

03/16/2022
by Jed A. Duersch, et al.

Balancing model complexity against the information contained in observed data is the central challenge of learning. In order for complexity-efficient models to exist and be discoverable in high dimensions, we require a computational framework that relates a credible notion of complexity to simple parameter representations. Further, this framework must allow excess complexity to be gradually removed via gradient-based optimization. Our n-ary, or n-argument, activation functions fill this gap by approximating belief functions (probabilistic Boolean logic) using logit representations of probability. Just as Boolean logic determines the truth of a consequent claim from relationships among a set of antecedent propositions, probabilistic formulations generalize predictions when antecedents, truth tables, and consequents all retain uncertainty. Our activation functions demonstrate the ability to learn arbitrary logic, such as the binary exclusive disjunction (p xor q) and the ternary conditioned disjunction (c ? p : q), in a single layer using an activation function of matching or greater arity. Further, we represent belief tables using a basis that directly associates the number of nonzero parameters with the effective arity of the belief function, thus capturing a concrete relationship between logical complexity and efficient parameter representations. This opens optimization approaches that reduce logical complexity by inducing parameter sparsity.
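The paper's specific activation functions and sparse parameter basis are defined in the full text; as a minimal sketch of the underlying idea only, the following Python (function names, table indexing, and the deterministic example tables are illustrative assumptions, not the authors' implementation) evaluates a belief table over uncertain antecedents: each of the 2^n truth assignments is weighted by its probability under the argument logits, and the consequent is returned in logit space. With a deterministic table this recovers ordinary Boolean logic, including xor and the ternary conditioned disjunction mentioned above.

```python
import math

def sigmoid(x):
    """Map a logit to a probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def logit(p):
    """Inverse of sigmoid: map a probability back to a logit."""
    return math.log(p) - math.log(1.0 - p)

def nary_activation(arg_logits, belief_table):
    """Probabilistic n-ary gate (illustrative sketch, not the paper's exact form).

    arg_logits   : n logits, one per antecedent proposition.
    belief_table : 2^n probabilities P(consequent true | assignment), indexed
                   by the assignment read as a binary number, argument 0 as
                   the most significant bit.
    Returns the consequent's probability as a logit, marginalizing the table
    over all truth assignments weighted by their probability.
    """
    n = len(arg_logits)
    probs = [sigmoid(x) for x in arg_logits]
    p_true = 0.0
    for idx in range(2 ** n):
        weight = 1.0
        for k in range(n):
            bit = (idx >> (n - 1 - k)) & 1
            weight *= probs[k] if bit else (1.0 - probs[k])
        p_true += weight * belief_table[idx]
    return logit(p_true)

# Deterministic binary xor over (p, q): true iff exactly one argument is true.
XOR_TABLE = [0.0, 1.0, 1.0, 0.0]          # assignments 00, 01, 10, 11

# Ternary conditioned disjunction (c ? p : q) over (c, p, q).
MUX_TABLE = [0.0, 1.0, 0.0, 1.0,          # c = 0 -> output copies q
             0.0, 0.0, 1.0, 1.0]          # c = 1 -> output copies p
```

Because arguments, tables, and outputs are all probabilistic, the same machinery degrades gracefully under uncertainty: near-zero argument logits yield a near-zero output logit rather than a hard truth value, which is what makes the gate trainable by gradient descent.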
