25 years of CNNs: Can we compare to human abstraction capabilities?

07/28/2016
by Sebastian Stabinger, et al.

We try to determine the progress made by convolutional neural networks over the past 25 years in classifying images into abstract classes. For this purpose we compare the performance of LeNet to that of GoogLeNet at classifying randomly generated images which are differentiated by an abstract property (e.g., one class contains two objects of the same size, the other class two objects of different sizes). Our results show that considerable work remains before these networks can solve vision problems that humans handle without much difficulty.
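The stimuli described above can be sketched as follows. This is a minimal, hypothetical generator for the "same size vs. different size" task, not the authors' actual code: the image size, the range of square side lengths, and the choice of axis-aligned squares are illustrative assumptions, and overlapping objects are not prevented for simplicity.

```python
import numpy as np

def make_sample(same_size, img_size=96, rng=None):
    """Generate one synthetic image containing two white squares on black.

    same_size=True  -> both squares share one side length ("same" class).
    same_size=False -> the squares have unequal side lengths ("different" class).

    Note: image size and side-length range are illustrative choices,
    not taken from the paper.
    """
    rng = rng or np.random.default_rng()
    img = np.zeros((img_size, img_size), dtype=np.uint8)

    if same_size:
        sides = [int(rng.integers(8, 24))] * 2
    else:
        a = int(rng.integers(8, 24))
        b = int(rng.integers(8, 24))
        while b == a:  # ensure the two sides actually differ
            b = int(rng.integers(8, 24))
        sides = [a, b]

    # Place each square at a random position fully inside the image.
    for side in sides:
        x = int(rng.integers(0, img_size - side))
        y = int(rng.integers(0, img_size - side))
        img[y:y + side, x:x + side] = 255
    return img
```

A classifier such as LeNet or GoogLeNet would then be trained on batches of such images labeled by the abstract property alone, so that no low-level cue (position, texture) distinguishes the classes.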
