An Analysis of Ability in Deep Neural Networks

by John P. Lalor, et al.
University of Massachusetts Amherst

Deep neural networks (DNNs) have made significant progress in a number of machine learning applications. However, without a consistent set of evaluation tasks, interpreting performance across test datasets is impossible. In most previous work, characteristics of individual data points are not considered during evaluation; each data point is treated equally. Using Item Response Theory (IRT) from psychometrics, it is possible to model characteristics of specific data points that then inform an estimate of model ability relative to a population of humans. We report the results of several experiments examining how different DNN models perform, with respect to ability, under different training circumstances. As DNNs train on larger datasets, their performance begins to resemble human performance under the assumptions of IRT models: easy questions have a higher probability of being answered correctly than harder ones. We also report additional analyses of model robustness to noise and of performance as a function of training set size that further support our main conclusion.
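The IRT relationship the abstract appeals to can be sketched with the simplest IRT formulation, the one-parameter (Rasch) model, in which the probability of a correct response depends only on the gap between subject ability and item difficulty. This is an illustrative sketch of the general IRT framing, not necessarily the exact model used in the paper:

```python
import math

def rasch_p_correct(ability: float, difficulty: float) -> float:
    """One-parameter (Rasch) IRT model: probability that a subject
    with the given ability answers an item of the given difficulty
    correctly, p = sigmoid(ability - difficulty)."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

# Under this model, an easy item (low difficulty) has a higher
# probability of a correct response than a hard item, for the same
# subject ability -- the pattern the paper observes emerging in DNNs.
easy = rasch_p_correct(ability=0.0, difficulty=-2.0)
hard = rasch_p_correct(ability=0.0, difficulty=2.0)
```

When ability equals difficulty, the probability of a correct response is exactly 0.5; richer IRT models add discrimination and guessing parameters on top of this form.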



Related research

CIFT: Crowd-Informed Fine-Tuning to Improve Machine Learning Ability

Item Response Theory (IRT) allows for measuring ability of Machine Learn...

Learning Latent Parameters without Human Response Patterns: Item Response Theory with Artificial Crowds

Incorporating Item Response Theory (IRT) into NLP tasks can provide valu...

Understanding the Ability of Deep Neural Networks to Count Connected Components in Images

Humans can count very fast by subitizing, but slow substantially as the ...

Peeking inside the Black Box: Interpreting Deep Learning Models for Exoplanet Atmospheric Retrievals

Deep learning algorithms are growing in popularity in the field of exopl...

SGAD: Soft-Guided Adaptively-Dropped Neural Network

Deep neural networks (DNNs) have been proven to have many redundancies. ...

A Case For Adaptive Deep Neural Networks in Edge Computing

Edge computing offers an additional layer of compute infrastructure clos...

Comments on "Deep Neural Networks with Random Gaussian Weights: A Universal Classification Strategy?"

In a recently published paper [1], it is shown that deep neural networks...
