Actions Generation from Captions

02/14/2019
by Xuan Liang, et al.

Sequence transduction models have been widely explored in many natural language processing tasks. However, the target sequence usually consists of discrete tokens that represent word indices in a given vocabulary. The case where the target sequence is composed of continuous vectors, each an element of a time series taken successively in a temporal domain, has rarely been studied. In this work, we introduce a new data set, named the Action Generation Data Set (AGDS), which is specifically designed for the task of caption-to-action generation. This data set contains caption-action pairs: each caption is a sequence of words describing an interactive movement between two people, and the corresponding action is a captured sequence of poses representing that movement. The data set is introduced to study the ability of sequence transduction models to generate continuous sequences. We also propose a model that combines Multi-Head Attention (MHA) and a Generative Adversarial Network (GAN) in a novel way. The model has one generator, which generates actions from captions, and three discriminators, each with a distinct role: a caption-action consistency discriminator, a pose discriminator and a pose transition discriminator. This design allows us to achieve plausible generation performance, as demonstrated in the experiments.
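For concreteness, a minimal sketch of the generator-plus-three-discriminators layout described above is given below. It is not the authors' implementation: the module sizes, the Transformer/GRU building blocks, and all hyper-parameters (VOCAB, POSE_DIM, D_MODEL, T_POSES) are illustrative assumptions.

# Sketch (PyTorch): one MHA-based generator mapping caption tokens to a sequence of
# continuous pose vectors, plus three discriminators scoring caption-action
# consistency, individual poses, and pose transitions. All sizes are assumptions.
import torch
import torch.nn as nn

VOCAB, POSE_DIM, D_MODEL, T_POSES = 1000, 54, 128, 32  # assumed dimensions

class CaptionToActionGenerator(nn.Module):
    """Encodes a caption with multi-head self-attention and decodes a pose sequence."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, D_MODEL)
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(D_MODEL, nhead=8, batch_first=True), num_layers=2)
        self.query = nn.Parameter(torch.randn(T_POSES, D_MODEL))  # learned pose queries
        self.cross_attn = nn.MultiheadAttention(D_MODEL, num_heads=8, batch_first=True)
        self.to_pose = nn.Linear(D_MODEL, POSE_DIM)

    def forward(self, caption_tokens):                       # (B, L) word indices
        memory = self.encoder(self.embed(caption_tokens))    # (B, L, D)
        q = self.query.unsqueeze(0).expand(caption_tokens.size(0), -1, -1)
        out, _ = self.cross_attn(q, memory, memory)           # pose queries attend to caption
        return self.to_pose(out)                              # (B, T, POSE_DIM) continuous poses

class ConsistencyDiscriminator(nn.Module):
    """Scores whether an action sequence matches its caption."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, D_MODEL)
        self.cap_rnn = nn.GRU(D_MODEL, D_MODEL, batch_first=True)
        self.act_rnn = nn.GRU(POSE_DIM, D_MODEL, batch_first=True)
        self.score = nn.Linear(2 * D_MODEL, 1)

    def forward(self, caption_tokens, poses):
        _, h_cap = self.cap_rnn(self.embed(caption_tokens))
        _, h_act = self.act_rnn(poses)
        return self.score(torch.cat([h_cap[-1], h_act[-1]], dim=-1))

class PoseDiscriminator(nn.Module):
    """Scores the realism of each individual pose vector."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(POSE_DIM, D_MODEL), nn.ReLU(), nn.Linear(D_MODEL, 1))

    def forward(self, poses):                 # (B, T, POSE_DIM)
        return self.net(poses)                # one score per pose

class TransitionDiscriminator(nn.Module):
    """Scores the realism of consecutive pose transitions."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2 * POSE_DIM, D_MODEL), nn.ReLU(), nn.Linear(D_MODEL, 1))

    def forward(self, poses):
        pairs = torch.cat([poses[:, :-1], poses[:, 1:]], dim=-1)  # (B, T-1, 2*POSE_DIM)
        return self.net(pairs)

# Example forward pass with random data.
captions = torch.randint(0, VOCAB, (4, 12))
fake_actions = CaptionToActionGenerator()(captions)
d_scores = (ConsistencyDiscriminator()(captions, fake_actions),
            PoseDiscriminator()(fake_actions),
            TransitionDiscriminator()(fake_actions))
print(fake_actions.shape, [s.shape for s in d_scores])

In a GAN setup of this kind, each discriminator's score would typically contribute its own adversarial loss term to the generator objective; the exact loss formulation used in the paper is not reproduced here.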

