Towards Better Adversarial Synthesis of Human Images from Text

07/05/2021
by   Rania Briq, et al.

This paper proposes an approach that generates multiple 3D human meshes from text. The human bodies are represented by 3D meshes based on the SMPL model. The model's performance is evaluated on the COCO dataset, which contains challenging human shapes and intricate interactions between individuals. The model is able to capture the dynamics of the scene and the interactions between individuals described in the text. We further show how using such a shape as input to image synthesis frameworks helps constrain the network to synthesize humans with realistic body shapes.
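To make the mesh representation concrete: SMPL parameterizes body shape as a mean template mesh plus a linear combination of learned shape blend shapes weighted by shape coefficients (betas). Below is a minimal numpy sketch of that idea only; the tiny four-vertex mesh and random bases are placeholders (the real SMPL model has 6890 vertices and learned bases loaded from the model files), and `shaped_vertices` is a hypothetical helper, not an API from the paper.

```python
import numpy as np

# Toy illustration of SMPL-style shape parameterization.
# Placeholder sizes: real SMPL uses 6890 vertices and learned blend shapes.
num_vertices = 4
num_betas = 10  # number of shape coefficients in SMPL

rng = np.random.default_rng(0)
template = rng.normal(size=(num_vertices, 3))               # mean-shape mesh
shape_dirs = rng.normal(size=(num_vertices, 3, num_betas))  # shape blend basis

def shaped_vertices(betas):
    """Unposed shaped mesh: template + shape_dirs contracted with betas."""
    return template + shape_dirs @ betas

# With all betas zero, the shaped mesh is exactly the mean template.
assert np.allclose(shaped_vertices(np.zeros(num_betas)), template)
```

Pose-dependent deformation and linear blend skinning are applied on top of this shaped mesh in the full SMPL pipeline; they are omitted here for brevity.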

Related research:

- Implicit Neural Representations for Generative Modeling of Living Cell Shapes (07/13/2022): Methods allowing the synthesis of realistic cell shapes could help gener...
- Adversarial Synthesis of Human Pose from Text (05/01/2020): This work introduces the novel task of human pose synthesis from text. I...
- Polarization Human Shape and Pose Dataset (04/30/2020): Polarization images are known to be able to capture polarized reflected ...
- ShapeCrafter: A Recursive Text-Conditioned 3D Shape Generation Model (07/19/2022): We present ShapeCrafter, a neural network for recursive text-conditioned...
- CLOTH3D: Clothed 3D Humans (12/05/2019): This work presents CLOTH3D, the first big scale synthetic dataset of 3D ...
- Semantic Image Synthesis via Adversarial Learning (07/21/2017): In this paper, we propose a way of synthesizing realistic images directl...
- Shapes and Context: In-the-Wild Image Synthesis & Manipulation (06/11/2019): We introduce a data-driven approach for interactively synthesizing in-th...
