LSTM-Based Facial Performance Capture Using Embedding Between Expressions
We present a novel end-to-end framework for facial performance capture from a monocular video of an actor's face. Our framework comprises two parts. First, to extract information from the frames, we optimize a triplet loss to learn an embedding space in which semantically similar facial expressions lie closer together, so the learned model transfers to distinguishing expressions not present in the training dataset. Second, the embeddings are fed into an LSTM network to learn the deformation between frames. In our experiments, we demonstrate that, compared to other methods, our approach captures the subtle motion around the lips and significantly reduces jitter between the tracked meshes.
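As a rough illustration of the two-stage pipeline described above, the sketch below shows a triplet-loss expression embedder followed by an LSTM that regresses per-frame mesh vertices. All layer sizes, module names, and the use of PyTorch are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of the abstract's two stages: (1) a CNN embedding trained
# with a triplet loss, (2) an LSTM over per-frame embeddings predicting
# mesh vertices. Architecture details here are assumptions for illustration.
import torch
import torch.nn as nn

class ExpressionEmbedder(nn.Module):
    """Maps a face crop to an expression embedding (stage 1)."""
    def __init__(self, embed_dim=128):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, embed_dim),
        )

    def forward(self, x):
        # L2-normalize so the triplet margin is measured on the unit sphere.
        return nn.functional.normalize(self.backbone(x), dim=-1)

class MeshRegressor(nn.Module):
    """LSTM over per-frame embeddings predicting mesh vertex positions (stage 2)."""
    def __init__(self, embed_dim=128, hidden=256, num_vertices=5000):
        super().__init__()
        self.lstm = nn.LSTM(embed_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_vertices * 3)

    def forward(self, embeddings):             # (batch, frames, embed_dim)
        h, _ = self.lstm(embeddings)
        return self.head(h)                    # (batch, frames, num_vertices * 3)

# Stage 1: pull semantically similar expressions together with a triplet loss.
embedder = ExpressionEmbedder()
triplet = nn.TripletMarginLoss(margin=0.2)
anchor, positive, negative = (torch.randn(8, 3, 128, 128) for _ in range(3))
loss_embed = triplet(embedder(anchor), embedder(positive), embedder(negative))

# Stage 2: feed the embeddings of a video clip into the LSTM regressor.
clip = torch.randn(2, 30, 3, 128, 128)         # 2 clips of 30 frames each
with torch.no_grad():
    emb = embedder(clip.flatten(0, 1)).view(2, 30, -1)
mesh_seq = MeshRegressor()(emb)                # per-frame tracked mesh vertices
```

Because the LSTM sees the whole embedding sequence, its hidden state can smooth the frame-to-frame predictions, which is one plausible reading of the reduced jitter reported in the abstract.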