TDM: Temporally-Consistent Diffusion Model for All-in-One Real-World Video Restoration - 2025

Research Paper on TDM: Temporally-Consistent Diffusion Model for All-in-One Real-World Video Restoration

Research Area:  Machine Learning

Abstract:

In this paper, we propose the first diffusion-based all-in-one video restoration method that utilizes the power of a pre-trained Stable Diffusion and a fine-tuned ControlNet. Our method can restore various types of video degradation with a single unified model, overcoming the limitation of standard methods that require specific models for each restoration task. Our contributions include an efficient training strategy with Task Prompt Guidance (TPG) for diverse restoration tasks, an inference strategy that combines Denoising Diffusion Implicit Models (DDIM) inversion with a novel Sliding Window Cross-Frame Attention (SW-CFA) mechanism for enhanced content preservation and temporal consistency, and a scalable pipeline that makes our method all-in-one to adapt to different video restoration tasks. Through extensive experiments on five video restoration tasks, we demonstrate the superiority of our method in generalization capability to real-world videos and temporal consistency preservation over existing state-of-the-art methods. Our method advances the video restoration task by providing a unified solution that enhances video quality across multiple applications.
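
The abstract names a Sliding Window Cross-Frame Attention (SW-CFA) mechanism used at inference to keep restored frames temporally consistent. As a rough, hypothetical illustration of the general idea (not the authors' implementation), the sketch below shows how each frame's queries could attend to keys and values gathered from a sliding temporal window of neighboring frames; the function name, tensor shapes, and window size are illustrative assumptions.

import torch

def sliding_window_cross_frame_attention(q, k, v, window=2):
    # Hypothetical sketch of sliding-window cross-frame attention.
    # q, k, v: (num_frames, num_tokens, dim) per-frame query/key/value tokens.
    # window:  number of neighboring frames on each side whose keys/values
    #          are shared with the current frame.
    num_frames, num_tokens, dim = q.shape
    outputs = []
    for t in range(num_frames):
        lo, hi = max(0, t - window), min(num_frames, t + window + 1)
        # Keys/values are concatenated across the temporal window so each
        # frame attends to its neighbors, which encourages consistency
        # between adjacent frames.
        k_win = k[lo:hi].reshape(-1, dim)
        v_win = v[lo:hi].reshape(-1, dim)
        attn = torch.softmax(q[t] @ k_win.T / dim ** 0.5, dim=-1)
        outputs.append(attn @ v_win)
    return torch.stack(outputs)  # (num_frames, num_tokens, dim)

# Toy usage with random per-frame tokens (shapes are arbitrary examples).
frames, tokens, dim = 8, 64, 320
q = torch.randn(frames, tokens, dim)
k = torch.randn(frames, tokens, dim)
v = torch.randn(frames, tokens, dim)
out = sliding_window_cross_frame_attention(q, k, v, window=2)
print(out.shape)  # torch.Size([8, 64, 320])

In such a scheme the window size trades temporal smoothness against per-frame fidelity: a larger window shares more context across frames but increases the attention cost per frame.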

Keywords:  

Author(s) Name:  Yizhou Li, Zihua Liu, Yusuke Monno, Masatoshi Okutomi

Journal name:  Computer Vision and Pattern Recognition

Conference name:  

Publisher name:  arXiv

DOI:  10.48550/arXiv.2501.02269

Volume Information:  Volume 4, 2025