One weird trick for parallelizing convolutional neural networks

04/23/2014
by Alex Krizhevsky

I present a new way to parallelize the training of convolutional neural networks across multiple GPUs. The method scales significantly better than all alternatives when applied to modern convolutional neural networks.
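
The "weird trick" itself, as the full text lays it out, is a hybrid scheme: the convolutional layers (most of the compute, few parameters) are trained data-parallel, while the fully connected layers (most of the parameters, little compute) are trained model-parallel. Below is a minimal NumPy sketch of the forward pass under that split, simulating two workers on one machine; the array names, sizes, and the ReLU matrix product standing in for the convolutional stack are illustrative assumptions, not code from the paper.

```python
import numpy as np

# Illustrative sketch of the hybrid scheme, with two simulated workers.
# Shapes and names are hypothetical stand-ins, not taken from the paper.
N_WORKERS = 2
BATCH = 8      # global batch size
FEATS = 16     # flattened "conv" output features
HIDDEN = 32    # fully connected layer width

rng = np.random.default_rng(0)
x = rng.standard_normal((BATCH, FEATS))

# Data-parallel stage (stand-in for the conv layers): every worker applies
# the same replicated weights to its own shard of the batch.
conv_w = rng.standard_normal((FEATS, FEATS))
shards = np.split(x, N_WORKERS, axis=0)
conv_out = [np.maximum(s @ conv_w, 0.0) for s in shards]   # ReLU(conv-like op)

# Transition: gather the activations so every worker sees the whole batch
# (in the paper, this is where the GPUs exchange activations).
full_acts = np.concatenate(conv_out, axis=0)               # (BATCH, FEATS)

# Model-parallel stage (stand-in for the FC layers): the weight matrix is
# split by output columns; each worker computes its slice for the full batch.
fc_w = rng.standard_normal((FEATS, HIDDEN))
fc_slices = np.split(fc_w, N_WORKERS, axis=1)
y = np.concatenate([full_acts @ w for w in fc_slices], axis=1)  # (BATCH, HIDDEN)

# The partitioned computation must match the single-worker result.
assert np.allclose(y, np.maximum(x @ conv_w, 0.0) @ fc_w)
print(y.shape)
```

The final assertion checks the invariant any such partitioning has to preserve: sharding the batch for the convolutional stage and sharding the weights for the fully connected stage changes where the work happens, not what is computed.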

Related research

12/20/2013: Multi-GPU Training of ConvNets
In this work we evaluate different approaches to parallelize computation...

12/19/2022: VC dimensions of group convolutional neural networks
We study the generalization capacity of group convolutional neural netwo...

04/16/2018: Training convolutional neural networks with megapixel images
To train deep convolutional neural networks, the input data and the inte...

06/21/2018: Lamarckian Evolution of Convolutional Neural Networks
Convolutional neural networks belong to the most successful image classif...

05/23/2019: Learning the Representations of Moist Convection with Convolutional Neural Networks
The representations of atmospheric moist convection in general circulati...

03/19/2019: Kernel-based Translations of Convolutional Networks
Convolutional Neural Networks, as most artificial neural networks, are c...

08/18/2021: Combining Neuro-Evolution of Augmenting Topologies with Convolutional Neural Networks
Current deep convolutional networks are fixed in their topology. We exp...
