RankNEAT: Outperforming Stochastic Gradient Search in Preference Learning Tasks

04/14/2022
by Kosmas Pinitas et al.

Stochastic gradient descent (SGD) is a premium optimization method for training neural networks, especially for learning objectively defined labels such as image objects and events. When a neural network is instead faced with subjectively defined labels, such as human demonstrations or annotations, SGD may struggle to explore the deceptive and noisy loss landscapes caused by the inherent bias and subjectivity of humans. While neural networks are often trained via preference learning algorithms in an effort to eliminate such data noise, the de facto training methods still rely on gradient descent. Motivated by the lack of empirical studies on the impact of evolutionary search on the training of preference learners, we introduce the RankNEAT algorithm, which learns to rank through neuroevolution of augmenting topologies. We test the hypothesis that RankNEAT outperforms traditional gradient-based preference learning within the affective computing domain, in particular for predicting annotated player arousal from the game footage of three dissimilar games. RankNEAT yields superior performance compared to the gradient-based preference learner (RankNet) in the majority of experiments, since its architecture optimization capacity acts as an efficient feature selection mechanism, thereby eliminating overfitting. Results suggest that RankNEAT is a viable and highly efficient evolutionary alternative for preference learning.
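The abstract contrasts two ways of fitting a preference learner to pairwise human annotations: a gradient-free fitness that an evolutionary search such as NEAT can maximize directly, and a differentiable surrogate (the standard RankNet pairwise logistic loss) required by SGD. The sketch below is illustrative only and not taken from the paper; the function names, the random stand-in scores, and the toy preference pairs are assumptions made for the example.

```python
# Minimal sketch (assumed, not the authors' code) of two pairwise-ranking
# objectives over annotated preference pairs (i, j), meaning item i is
# preferred over item j.
import numpy as np


def pairwise_accuracy(scores, preference_pairs):
    """Fraction of preference pairs ordered correctly by the model's scores.
    Non-differentiable, so it suits a gradient-free (e.g. NEAT-style)
    evolutionary search as a fitness function."""
    correct = sum(scores[i] > scores[j] for i, j in preference_pairs)
    return correct / len(preference_pairs)


def ranknet_loss(scores, preference_pairs):
    """Differentiable surrogate minimized by gradient-based preference
    learners: the standard RankNet pairwise logistic loss."""
    diffs = np.array([scores[i] - scores[j] for i, j in preference_pairs])
    return float(np.mean(np.log1p(np.exp(-diffs))))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    scores = rng.normal(size=10)              # stand-in model outputs
    pairs = [(0, 1), (2, 3), (5, 4), (7, 9)]  # stand-in human annotations
    print("fitness (pairwise accuracy):", pairwise_accuracy(scores, pairs))
    print("RankNet surrogate loss:", ranknet_loss(scores, pairs))
```

In this toy setup, an evolutionary preference learner would evolve network weights and topology to maximize `pairwise_accuracy`, while a RankNet-style learner would backpropagate through `ranknet_loss`.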

