Object Detection with Spiking Neural Networks on Automotive Event Data
Automotive embedded algorithms must satisfy stringent constraints on latency, accuracy, and power consumption. In this work, we propose to train spiking neural networks (SNNs) directly on event-camera data to design fast and efficient embedded automotive applications. SNNs are biologically inspired networks whose neurons communicate through discrete, asynchronous spikes, an operating mode that is naturally energy-efficient and hardware-friendly. Event data, which are binary and sparse in both space and time, are therefore an ideal input for spiking neural networks. Until now, however, SNN performance has been insufficient for real-world automotive problems such as detecting complex objects in uncontrolled environments. To address this issue, we leveraged recent advances in spike-based backpropagation (surrogate gradient learning, parametric LIF neurons, the SpikingJelly framework) together with our new voxel cube event encoding to train four SNNs based on popular deep learning architectures: SqueezeNet, VGG, MobileNet, and DenseNet. As a result, we were able to scale SNNs well beyond the size and complexity usually considered in the literature. We conducted experiments on two automotive event datasets, establishing new state-of-the-art classification results for spiking neural networks. Building on these results, we combined our SNNs with SSD to propose the first spiking neural networks capable of performing object detection on the complex GEN1 Automotive Detection event dataset.
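To make the training ingredients named above concrete, the sketch below shows a minimal SpikingJelly pipeline combining parametric LIF neurons, a surrogate gradient, and a voxel-cube-style event encoding. It is an illustrative assumption, not the authors' implementation: the `events_to_voxel_cube` helper, the tensor shapes, the tiny network, and the class count are all placeholders chosen for readability.

```python
# Minimal sketch (assumed, not the paper's code) of surrogate-gradient training
# building blocks with SpikingJelly's parametric LIF (PLIF) neurons.
import torch
import torch.nn as nn
from spikingjelly.activation_based import neuron, surrogate, functional

def events_to_voxel_cube(events, T=5, C=2, H=240, W=304):
    """Hypothetical encoder: bin (t, x, y, polarity) events into T time steps,
    each split into C micro time bins per polarity -> tensor [T, 2*C, H, W]."""
    vox = torch.zeros(T, 2 * C, H, W)
    t = events[:, 0]
    t_norm = (t - t.min()) / (t.max() - t.min() + 1e-9)   # normalize to [0, 1]
    step = (t_norm * T * C).long().clamp(max=T * C - 1)   # global micro bin index
    macro, micro = step // C, step % C                    # time step / micro bin
    chan = events[:, 3].long() * C + micro                # polarity-major channel
    vox[macro, chan, events[:, 2].long(), events[:, 1].long()] = 1.0
    return vox

# Tiny spiking CNN: PLIF neurons learn their own membrane time constant, and
# ATan is a common surrogate gradient for the non-differentiable spike function.
net = nn.Sequential(
    nn.Conv2d(4, 32, 3, padding=1, bias=False),
    neuron.ParametricLIFNode(init_tau=2.0, surrogate_function=surrogate.ATan()),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 120 * 152, 2),
    neuron.ParametricLIFNode(init_tau=2.0, surrogate_function=surrogate.ATan()),
)

# Toy forward pass: feed each time step, average output spikes over time.
events = torch.tensor([[10.0, 5, 7, 1], [20.0, 8, 3, 0]])  # (t, x, y, polarity)
x = events_to_voxel_cube(events)
rates = torch.stack([net(x[t].unsqueeze(0)) for t in range(x.shape[0])]).mean(0)
functional.reset_net(net)  # reset membrane potentials between samples
```

Because the surrogate function replaces the spike's derivative only in the backward pass, standard PyTorch optimizers and losses can be applied to `rates` unchanged, which is what makes deep architectures such as VGG or DenseNet trainable as SNNs.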