MoDi: Unconditional Motion Synthesis from Diverse Data

06/16/2022
by Sigal Raab, et al.

The emergence of neural networks has revolutionized the field of motion synthesis. Yet, learning to unconditionally synthesize motions from a given distribution remains a challenging task, especially when the motions are highly diverse. In this work, we present MoDi - a generative model trained in a completely unsupervised setting on an extremely diverse, unstructured and unlabeled motion dataset. During inference, MoDi can synthesize high-quality, diverse motions that lie in a well-behaved and highly semantic latent space. We show that despite the lack of any structure in the dataset, the latent space can be semantically clustered, facilitating various applications including semantic editing, crowd simulation and motion interpolation. Our qualitative and quantitative experiments show that our framework achieves state-of-the-art synthesis quality and can follow the distribution of highly diverse motion datasets. Code and trained models are available at https://sigal-raab.github.io/MoDi.
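As one illustration of what a well-behaved latent space enables, the sketch below interpolates between two latent codes with spherical linear interpolation and would decode each intermediate code into a motion. The `generator` call, the latent dimensionality, and the variable names are assumptions made for illustration only; the paper's actual model and API are in the linked repository.

```python
# Minimal sketch of latent-space motion interpolation, one of the applications
# mentioned in the abstract. `generator` is a hypothetical stand-in for a trained
# generative model such as MoDi's; it is not the official API.
import numpy as np

def slerp(z0: np.ndarray, z1: np.ndarray, t: float) -> np.ndarray:
    """Spherical linear interpolation between two latent codes."""
    z0_n = z0 / np.linalg.norm(z0)
    z1_n = z1 / np.linalg.norm(z1)
    omega = np.arccos(np.clip(np.dot(z0_n, z1_n), -1.0, 1.0))
    if np.isclose(omega, 0.0):
        return (1.0 - t) * z0 + t * z1  # vectors nearly parallel: fall back to lerp
    return (np.sin((1.0 - t) * omega) * z0 + np.sin(t * omega) * z1) / np.sin(omega)

# Hypothetical usage: blend between two motions in latent space.
latent_dim = 512                              # assumed latent dimensionality
z_a = np.random.randn(latent_dim)             # latent code of motion A
z_b = np.random.randn(latent_dim)             # latent code of motion B
codes = [slerp(z_a, z_b, t) for t in np.linspace(0.0, 1.0, 10)]
# motions = [generator(z) for z in codes]     # decode with the trained generator
```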
