Poster · NeurIPS 2023 Workshop on Diffusion Models
Ground-A-Video: Zero-shot Grounded Video Editing using Text-to-image Diffusion Models
Hyeonho Jeong · Jong Chul Ye
Recent endeavors in video editing have shown promising results in single-attribute editing or style-transfer tasks. However, when confronted with the complexities of multi-attribute editing scenarios, they exhibit shortcomings such as omitting intended attribute changes, modifying the wrong elements of the input video, and failing to preserve regions of the input video that should remain intact. To address this, we present Ground-A-Video, a novel grounding-guided video-to-video translation framework for multi-attribute video editing. Ground-A-Video attains temporally consistent multi-attribute editing of input videos in a training-free manner, without the aforementioned shortcomings. Central to our method is the introduction of Cross-Frame Gated Attention, which incorporates grounding information into the latent representations in a temporally consistent fashion, along with Modulated Cross-Attention and optical-flow-guided smoothing of inverted latents. Extensive experiments and applications demonstrate that Ground-A-Video's zero-shot capacity outperforms other baseline methods in terms of edit accuracy and frame consistency. Further results are provided at our project page (http://ground-a-video.github.io).
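To make the gated-attention idea concrete, below is a minimal, hypothetical sketch of a cross-frame gated attention block: per-frame latent tokens attend to the grounding tokens of all frames, and the result is injected through a zero-initialized, learnable gate so the pretrained text-to-image features are preserved when the gate is closed. The module and parameter names are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch only: a simplified cross-frame gated attention block.
# All names (CrossFrameGatedAttention, grounding_tokens, gate) are assumptions.
import torch
import torch.nn as nn


class CrossFrameGatedAttention(nn.Module):
    def __init__(self, dim: int, n_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)
        # Learnable gate, initialized to zero so the block starts as an identity mapping.
        self.gate = nn.Parameter(torch.zeros(1))

    def forward(self, latents: torch.Tensor, grounding_tokens: torch.Tensor) -> torch.Tensor:
        """
        latents:          (frames, patches, dim)  per-frame visual tokens
        grounding_tokens: (frames, n_boxes, dim)  per-frame grounding embeddings
        """
        f, p, d = latents.shape
        # Pool the visual and grounding tokens of *all* frames into one shared context,
        # so every frame attends to every other frame's groundings (the cross-frame part).
        ctx = torch.cat([latents, grounding_tokens], dim=1).reshape(1, -1, d).expand(f, -1, -1)
        q = self.norm(latents)
        out, _ = self.attn(q, ctx, ctx)
        # Gated residual: grounding information is injected softly, keeping the
        # pretrained T2I latent features intact when the gate is near zero.
        return latents + torch.tanh(self.gate) * out
```

Under these assumptions, the zero-initialized gate lets the block be dropped into a pretrained text-to-image U-Net without disturbing its behavior, while the shared cross-frame context is one plausible way to keep the injected grounding signal temporally consistent across frames.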