An Analysis of Ability in Deep Neural Networks

02/15/2017
by John P. Lalor, et al.

Deep neural networks (DNNs) have made significant progress in a number of machine learning applications. However, without a consistent set of evaluation tasks, interpreting performance across test datasets is impossible. In most previous work, characteristics of individual data points are not considered during evaluation, and each data point is treated equally. Using Item Response Theory (IRT) from psychometrics, it is possible to model characteristics of specific data points that then inform an estimate of model ability as compared to a population of humans. We report the results of several experiments to determine how different DNN models perform, with respect to ability, under different training circumstances. As DNNs train on larger datasets, their performance begins to look like human performance under the assumptions of IRT models: easy questions come to have a higher probability of being answered correctly than harder ones. We also report the results of additional analyses of model robustness to noise and of performance as a function of training set size that further inform our main conclusion.
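For concreteness, the sketch below illustrates the IRT behavior the abstract describes: under an item response function, an item with lower difficulty has a higher probability of being answered correctly by a subject of fixed ability. The three-parameter logistic (3PL) parameterization used here is an assumption for illustration; the abstract does not state which IRT variant the paper fits.

```python
import numpy as np

def p_correct(theta, a, b, c):
    """3PL item response function (assumed parameterization):
    theta = subject ability, a = item discrimination,
    b = item difficulty, c = guessing floor."""
    return c + (1.0 - c) / (1.0 + np.exp(-a * (theta - b)))

# For a fixed ability, an easy item (b = -1) is more likely to be
# answered correctly than a hard item (b = +1):
theta = 0.0
print(p_correct(theta, a=1.0, b=-1.0, c=0.25))  # ~0.80
print(p_correct(theta, a=1.0, b=+1.0, c=0.25))  # ~0.45
```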

research 02/27/2017
CIFT: Crowd-Informed Fine-Tuning to Improve Machine Learning Ability
Item Response Theory (IRT) allows for measuring ability of Machine Learn...

research 08/29/2019
Learning Latent Parameters without Human Response Patterns: Item Response Theory with Artificial Crowds
Incorporating Item Response Theory (IRT) into NLP tasks can provide valu...

research 01/05/2021
Understanding the Ability of Deep Neural Networks to Count Connected Components in Images
Humans can count very fast by subitizing, but slow substantially as the ...

research 11/23/2020
Peeking inside the Black Box: Interpreting Deep Learning Models for Exoplanet Atmospheric Retrievals
Deep learning algorithms are growing in popularity in the field of exopl...

research 07/04/2018
SGAD: Soft-Guided Adaptively-Dropped Neural Network
Deep neural networks (DNNs) have been proven to have many redundancies. ...

research 08/04/2020
A Case For Adaptive Deep Neural Networks in Edge Computing
Edge computing offers an additional layer of compute infrastructure clos...

research 01/08/2019
Comments on "Deep Neural Networks with Random Gaussian Weights: A Universal Classification Strategy?"
In a recently published paper [1], it is shown that deep neural networks...
