3DV-TON: Textured 3D-Guided Consistent Video Try-on via Diffusion Models

1Alibaba DAMO Academy, 2Hupan Lab, 3Zhejiang University

Try-on videos generated by 3DV-TON. Our method handles various clothing types and body poses while accurately restoring clothing details and maintaining consistent texture motion.

Abstract

Video try-on replaces clothing in videos with target garments. Existing methods struggle to generate high-quality, temporally consistent results when handling complex clothing patterns and diverse body poses. We present 3DV-TON, a novel diffusion-based framework for generating high-fidelity and temporally consistent video try-on results. Our approach employs generated animatable textured 3D meshes as explicit frame-level guidance, alleviating the issue of models over-focusing on appearance fidelity at the expense of motion coherence. This is achieved by enabling direct reference to consistent garment texture movements throughout video sequences. The proposed method features an adaptive pipeline for generating dynamic 3D guidance: (1) selecting a keyframe for initial 2D image try-on, followed by (2) reconstructing and animating a textured 3D mesh synchronized with the original video poses. We further introduce a robust rectangular masking strategy that successfully mitigates artifact propagation caused by clothing information leaking during dynamic human and garment movements. To advance video try-on research, we introduce HR-VVT, a high-resolution benchmark dataset containing 130 videos with diverse clothing types and scenarios. Quantitative and qualitative results demonstrate our superior performance over existing methods.
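To make the masking step concrete, below is a minimal sketch of how a rectangular masking strategy of this kind could be implemented: each frame's tight garment segmentation mask is replaced by its padded bounding rectangle, so the original clothing's silhouette and texture cannot leak into the inpainting region as the person moves. The function name, the padding parameter, and the per-frame bounding-box choice are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def rectangular_mask(garment_masks: np.ndarray, pad: int = 16) -> np.ndarray:
    """Build per-frame rectangular inpainting masks from binary
    garment segmentation masks of shape (T, H, W).

    Using the bounding rectangle instead of the tight silhouette
    prevents the original clothing's shape and texture from leaking
    into the try-on result. (Hypothetical sketch, not the authors'
    exact implementation.)
    """
    T, H, W = garment_masks.shape
    rect_masks = np.zeros_like(garment_masks)
    for t in range(T):
        ys, xs = np.nonzero(garment_masks[t])
        if len(ys) == 0:  # no garment detected in this frame
            continue
        y0, y1 = max(ys.min() - pad, 0), min(ys.max() + pad, H - 1)
        x0, x1 = max(xs.min() - pad, 0), min(xs.max() + pad, W - 1)
        rect_masks[t, y0:y1 + 1, x0:x1 + 1] = 1
    return rect_masks
```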

Overview


Overall pipeline of 3DV-TON. Given a video V, we first use our 3D guidance pipeline to adaptively select a frame I, then reconstruct a textured 3D guidance and animate it to align with the original video. We employ a guidance feature extractor for the clothing image C and the try-on images Ct, and perform feature fusion using the self-attention layers in the denoising UNet.
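As a hedged illustration of the feature-fusion step described above, the sketch below shows a common ReferenceNet-style pattern: tokens from the guidance feature extractor are concatenated with the denoising UNet's own tokens along the sequence axis before the key/value projections of a self-attention layer, so video tokens can attend directly to clothing and try-on appearance features. The shapes, names, and use of PyTorch's scaled_dot_product_attention are assumptions; the paper's exact fusion mechanism may differ.

```python
import torch
import torch.nn.functional as F

def fused_self_attention(x, ref_feats, to_q, to_k, to_v, num_heads=8):
    """Self-attention that also attends to reference features.

    x:         (B, N, C) denoising UNet tokens for a video frame
    ref_feats: (B, M, C) guidance tokens extracted from the clothing
               image C and the try-on image at the same resolution
    to_q/to_k/to_v: the attention block's existing linear projections
    (Hypothetical sketch of key/value concatenation.)
    """
    q = to_q(x)
    kv_in = torch.cat([x, ref_feats], dim=1)  # fuse along the token axis
    k, v = to_k(kv_in), to_v(kv_in)

    B, N, C = q.shape
    def split(t):  # (B, L, C) -> (B, heads, L, C // heads)
        return t.view(B, t.shape[1], num_heads, C // num_heads).transpose(1, 2)

    out = F.scaled_dot_product_attention(split(q), split(k), split(v))
    return out.transpose(1, 2).reshape(B, N, C)
```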

Video Try-on Results Comparison

From left to right: input video, clothes, ViViD, CatV^2TON, ours.

More video try-on results of our method

Same person, different clothes

Different person, different clothes

Dress:

Bottoms:

Tops: