
Semantically Consistent Video Inpainting with Conditional Diffusion Models

Bibliographic Details
Published in: arXiv.org 2024-10
Main Authors: Green, Dylan, Harvey, William, Naderiparizi, Saeid, Niedoba, Matthew, Liu, Yunpeng, Liang, Xiaoxuan, Lavington, Jonathan, Zhang, Ke, Lioutas, Vasileios, Dabiri, Setareh, Scibior, Adam, Zwartsenberg, Berend, Wood, Frank
Format: Article
Language: English
Description
Summary: Current state-of-the-art methods for video inpainting typically rely on optical flow or attention-based approaches to inpaint masked regions by propagating visual information across frames. While such approaches have led to significant progress on standard benchmarks, they struggle with tasks that require the synthesis of novel content that is not present in other frames. In this paper, we reframe video inpainting as a conditional generative modeling problem and present a framework for solving such problems with conditional video diffusion models. We introduce inpainting-specific sampling schemes which capture crucial long-range dependencies in the context, and devise a novel method for conditioning on the known pixels in incomplete frames. We highlight the advantages of using a generative approach for this task, showing that our method is capable of generating diverse, high-quality inpaintings and synthesizing new content that is spatially, temporally, and semantically consistent with the provided context.
ISSN:2331-8422
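
Note: the abstract above only summarizes the approach; the paper's specific sampling schemes and conditioning method are not reproduced here. As a rough, hypothetical illustration of the general idea of conditioning a diffusion sampler on known pixels, the NumPy sketch below replaces the known region with a suitably noised copy of the observed context at every reverse-diffusion step (a generic RePaint-style scheme, assumed for illustration only). All names and values (toy_denoiser, T, the noise schedule, the toy video shape) are invented placeholders, not part of the paper.

import numpy as np

T = 50                                   # number of diffusion steps (toy value)
betas = np.linspace(1e-4, 0.02, T)       # simple linear noise schedule (assumed)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def toy_denoiser(x_t, t):
    """Stand-in for a learned conditional video denoiser that predicts noise."""
    return np.zeros_like(x_t)            # placeholder: a real model predicts eps

def inpaint(known_video, mask, rng=np.random.default_rng(0)):
    """known_video: (frames, H, W) array; mask: 1.0 where pixels are known."""
    x = rng.standard_normal(known_video.shape)          # start from pure noise
    for t in reversed(range(T)):
        eps = toy_denoiser(x, t)
        # standard DDPM mean update for the unknown region
        coef = betas[t] / np.sqrt(1.0 - alpha_bars[t])
        mean = (x - coef * eps) / np.sqrt(alphas[t])
        noise = rng.standard_normal(x.shape) if t > 0 else 0.0
        x = mean + np.sqrt(betas[t]) * noise
        # condition on known pixels: overwrite them with a noised copy of the
        # observed context at the matching noise level
        if t > 0:
            known_noised = (np.sqrt(alpha_bars[t]) * known_video
                            + np.sqrt(1.0 - alpha_bars[t]) * rng.standard_normal(x.shape))
        else:
            known_noised = known_video
        x = mask * known_noised + (1.0 - mask) * x
    return x

video = np.zeros((4, 8, 8))                           # toy "video": 4 frames of 8x8 pixels
mask = np.ones_like(video); mask[:, 2:6, 2:6] = 0.0   # masked hole in the middle
result = inpaint(video, mask)
print(result.shape)                                   # (4, 8, 8)

Because the sampler is generative, re-running it with different random seeds yields different plausible completions of the masked region, which is the diversity property the abstract emphasizes.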