The Bias-Expressivity Trade-off

11/09/2019
by Julius Lauw, et al.

Learning algorithms need bias to generalize and to perform better than random guessing. We examine the flexibility (expressivity) of biased algorithms: an expressive algorithm can adapt to changing training data, altering its outcome in response to changes in its input. We measure expressivity using an information-theoretic notion of entropy on algorithm outcome distributions, and demonstrate a trade-off between bias and expressivity. The degree to which an algorithm is biased is the degree to which it can outperform uniform random sampling, but it is also the degree to which the algorithm becomes inflexible. We derive bounds relating bias to expressivity, establishing the trade-offs inherent in trying to create strongly performing yet flexible algorithms.
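The entropy measure of expressivity can be illustrated with a short sketch. The outcome distributions below are hypothetical (not taken from the paper): a uniform distribution over outcomes attains maximal entropy, while a distribution biased toward one outcome has lower entropy, i.e., less expressivity.

```python
import math

def entropy(p):
    """Shannon entropy (in bits) of a discrete outcome distribution."""
    return -sum(q * math.log2(q) for q in p if q > 0)

# Hypothetical distributions over an outcome space of 4 possible outcomes.
uniform = [0.25, 0.25, 0.25, 0.25]  # unbiased: maximal expressivity
biased = [0.85, 0.05, 0.05, 0.05]   # strongly biased toward one outcome

print(entropy(uniform))  # 2.0 bits, the maximum for 4 outcomes
print(entropy(biased))   # strictly lower: the biased algorithm is less flexible
```

The gap between the maximum entropy log2(4) = 2 bits and the entropy of the biased distribution reflects the trade-off the paper bounds: concentrating probability mass on favored outcomes is exactly what sacrifices flexibility.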


Related research

10/16/2019 - Adaptive Trade-Offs in Off-Policy Learning
A great variety of off-policy learning algorithms exist in the literatur...

07/13/2019 - The Futility of Bias-Free Learning and Search
Building on the view of machine learning as search, we demonstrate the n...

10/21/2021 - Statistical discrimination in learning agents
Undesired bias afflicts both human and algorithmic decision making, and ...

01/08/2020 - Nullstellensatz Size-Degree Trade-offs from Reversible Pebbling
We establish an exactly tight relation between reversible pebblings of g...

02/16/2023 - Preventing Discriminatory Decision-making in Evolving Data Streams
Bias in machine learning has rightly received significant attention over...

07/11/2023 - Cognitive Bias and Belief Revision
In this paper we formalise three types of cognitive bias within the fram...

10/04/2021 - An Empirical Investigation of Learning from Biased Toxicity Labels
Collecting annotations from human raters often results in a trade-off be...
