Parser-Free Virtual Try-on via Distilling Appearance Flows

by   Yuying Ge, et al.

Image virtual try-on aims to fit a garment image (target clothes) to a person image. Prior methods rely heavily on human parsing, and even slightly inaccurate segmentation misleads parser-based methods into producing unrealistic try-on images with large artifacts. A recent pioneering work employed knowledge distillation to reduce the dependency on human parsing: the try-on images produced by a parser-based method are used as supervision to train a "student" network that requires no segmentation, making the student mimic the try-on ability of the parser-based model. However, the image quality of the student is bounded by the parser-based model. To address this problem, we propose a novel "teacher-tutor-student" knowledge distillation approach that produces highly photo-realistic images without human parsing and offers several appealing advantages over prior arts. (1) Unlike existing work, our approach treats the fake images produced by the parser-based method as "tutor knowledge", whose artifacts can be corrected by real "teacher knowledge" extracted from real person images in a self-supervised way. (2) Beyond using real images as supervision, we formulate knowledge distillation for the try-on problem as distilling the appearance flows between the person image and the garment image, enabling us to find accurate dense correspondences between them and produce high-quality results. (3) Extensive evaluations demonstrate the clear superiority of our method (see Fig. 1).
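To make the distillation objective concrete, the sketch below illustrates the general idea in NumPy: a dense appearance flow warps the garment toward the person, the warped result is supervised by the real person image (teacher knowledge), and the parser-based model's flow serves only as a soft regularizer (tutor knowledge). All function names, the nearest-neighbour warping, and the loss weighting are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def warp(image, flow):
    """Warp image (H, W, C) by a dense appearance flow (H, W, 2) using
    nearest-neighbour sampling; flow holds per-pixel source offsets.
    (The paper uses differentiable bilinear sampling; this is a sketch.)"""
    H, W, _ = image.shape
    ys, xs = np.mgrid[0:H, 0:W]
    src_y = np.clip(np.round(ys + flow[..., 0]).astype(int), 0, H - 1)
    src_x = np.clip(np.round(xs + flow[..., 1]).astype(int), 0, W - 1)
    return image[src_y, src_x]

def distillation_loss(student_flow, tutor_flow, garment, real_person,
                      w_tutor=0.1):
    """Hypothetical combined objective: the student's warped garment is
    supervised directly by the real person image (teacher knowledge),
    while the parser-based tutor's flow acts only as a weak guide, so
    tutor artifacts cannot bound the student's quality."""
    recon = np.mean((warp(garment, student_flow) - real_person) ** 2)
    guide = np.mean((student_flow - tutor_flow) ** 2)
    return recon + w_tutor * guide
```

Supervising the warped garment against real images is what lets the student surpass the tutor: the tutor term only regularizes the flow, while the reconstruction term carries the teacher knowledge.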




