Modeling Spatial and Temporal Cues for Multi-label Facial Action Unit Detection

08/02/2016
by Wen-Sheng Chu, et al.

Facial action units (AUs) are essential for decoding human facial expressions. Researchers have trained AU detectors with a variety of features and classifiers, yet several issues remain: spatial representation, temporal modeling, and AU correlation. Unlike most studies that tackle these issues separately, we propose a hybrid network architecture that addresses them jointly. Specifically, spatial representations are extracted by a Convolutional Neural Network (CNN), which, as analyzed in this paper, is able to reduce person-specific biases caused by hand-crafted features (e.g., SIFT and Gabor). To model temporal dependencies, Long Short-Term Memory (LSTM) networks are stacked on top of these representations, regardless of the length of the input videos. The outputs of the CNN and LSTMs are further aggregated in a fusion network to produce per-frame predictions of 12 AUs. Our network naturally addresses the three issues together and leads to superior performance compared to existing methods that consider them independently. Extensive experiments were conducted on two large spontaneous datasets, GFT and BP4D, containing more than 400,000 frames coded with 12 AUs. On both datasets, we report significant improvements over a standard multi-label CNN and feature-based state-of-the-art methods. Finally, we provide visualizations of the learned AU models, which, to the best of our knowledge, reveal for the first time how machines see facial AUs.
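The hybrid architecture described above can be sketched compactly: a CNN produces per-frame spatial features, stacked LSTMs model temporal dependencies over those features, and a fusion layer combines both streams into per-frame multi-label predictions for 12 AUs. The following is a minimal PyTorch sketch of that idea; the backbone, layer sizes, fusion-by-concatenation scheme, and the `HybridAUDetector` name are illustrative assumptions, not the authors' exact design.

```python
# Minimal sketch of a hybrid CNN-LSTM multi-label AU detector.
# All layer sizes, the CNN backbone, and the fusion scheme are assumptions
# chosen for illustration; they do not reproduce the paper's architecture.
import torch
import torch.nn as nn

class HybridAUDetector(nn.Module):
    def __init__(self, num_aus=12, feat_dim=256, lstm_hidden=128):
        super().__init__()
        # Spatial stream: a small CNN producing a per-frame feature vector.
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim), nn.ReLU(),
        )
        # Temporal stream: stacked LSTMs over the sequence of CNN features.
        self.lstm = nn.LSTM(feat_dim, lstm_hidden, num_layers=2, batch_first=True)
        # Fusion: concatenate spatial and temporal features, predict 12 AUs per frame.
        self.fusion = nn.Linear(feat_dim + lstm_hidden, num_aus)

    def forward(self, frames):
        # frames: (batch, time, 3, H, W)
        b, t = frames.shape[:2]
        feats = self.cnn(frames.flatten(0, 1)).view(b, t, -1)   # (b, t, feat_dim)
        temporal, _ = self.lstm(feats)                           # (b, t, lstm_hidden)
        logits = self.fusion(torch.cat([feats, temporal], dim=-1))
        return logits  # per-frame, per-AU scores; apply sigmoid for probabilities

# Usage: multi-label loss over per-frame AU labels in {0, 1}.
model = HybridAUDetector()
clip = torch.randn(2, 16, 3, 96, 96)                  # 2 clips of 16 frames each
labels = torch.randint(0, 2, (2, 16, 12)).float()     # binary AU codes per frame
loss = nn.BCEWithLogitsLoss()(model(clip), labels)
```

Because AU detection is multi-label, each of the 12 outputs is scored independently with a per-label sigmoid and binary cross-entropy, rather than a softmax over mutually exclusive classes.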
