An Empirical Evaluation Study on the Training of SDC Features for Dense Pixel Matching

04/12/2019
by René Schuster, et al.

Training a deep neural network is a non-trivial task. Not only the tuning of hyperparameters, but also the gathering and selection of training data, the design of the loss function, and the construction of training schedules are important to get the most out of a model. In this study, we perform a set of experiments related to all of these issues. The model for which different training strategies are investigated is the recently presented SDC descriptor network (stacked dilated convolution). It is used to describe images at pixel level for dense matching tasks. Our work analyzes SDC in more detail, validates some best practices for training deep neural networks, and provides insights into training with data from multiple domains.
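
To illustrate the architectural idea behind the descriptor network studied here, the following is a minimal PyTorch sketch of stacked dilated convolutions: several parallel convolutions with different dilation rates whose outputs are concatenated, stacked a few times to produce a dense per-pixel descriptor map. Channel sizes, dilation rates, activation function, and depth are illustrative assumptions and do not reproduce the exact configuration from the SDC paper.

```python
# Minimal sketch of a stacked-dilated-convolution (SDC-style) descriptor in PyTorch.
# All layer sizes below are assumptions for illustration, not the published setup.
import torch
import torch.nn as nn


class SDCBlock(nn.Module):
    """Applies parallel 3x3 convolutions with different dilation rates and
    concatenates their outputs, preserving the spatial resolution."""

    def __init__(self, in_channels, channels_per_branch=16, dilations=(1, 2, 3, 4)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_channels, channels_per_branch, kernel_size=3,
                          padding=d, dilation=d),
                nn.ELU(inplace=True),
            )
            for d in dilations
        ])

    def forward(self, x):
        # Concatenate the parallel branches along the channel dimension.
        return torch.cat([branch(x) for branch in self.branches], dim=1)


class SDCDescriptor(nn.Module):
    """Stacks several SDC blocks to produce a dense, per-pixel descriptor map."""

    def __init__(self, in_channels=3, num_blocks=4, channels_per_branch=16):
        super().__init__()
        blocks = []
        channels = in_channels
        for _ in range(num_blocks):
            blocks.append(SDCBlock(channels, channels_per_branch))
            channels = channels_per_branch * 4  # four parallel branches per block
        self.blocks = nn.Sequential(*blocks)

    def forward(self, image):
        # Output keeps the input's spatial size: one descriptor per pixel.
        return self.blocks(image)


if __name__ == "__main__":
    net = SDCDescriptor()
    descriptors = net(torch.randn(1, 3, 128, 256))
    print(descriptors.shape)  # torch.Size([1, 64, 128, 256])
```

Because every branch uses padding equal to its dilation rate, the descriptor map keeps the input resolution, which is what makes such a network usable for dense matching.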

Related research

04/05/2019
SDC - Stacked Dilated Convolution: A Unified Descriptor Network for Dense Matching Tasks
Dense pixel matching is important for many computer vision tasks such as...

06/24/2020
Retrospective Loss: Looking Back to Improve Training of Deep Neural Networks
Deep neural networks (DNNs) are powerful learning machines that have ena...

08/02/2017
On the Importance of Consistency in Training Deep Neural Networks
We explain that the difficulties of training deep neural networks come f...

10/19/2020
How much progress have we made in neural network training? A New Evaluation Protocol for Benchmarking Optimizers
Many optimizers have been proposed for training deep neural networks, an...

07/10/2018
Deep Learning on Low-Resource Datasets
In training a deep learning system to perform audio transcription, two p...

04/26/2015
Comparison of Training Methods for Deep Neural Networks
This report describes the difficulties of training neural networks and i...
