Research Area:  Machine Learning
Abstract:  Space-time video super-resolution (STVSR) is the task of interpolating videos that are both Low Frame Rate (LFR) and Low Resolution (LR) to produce High-Frame-Rate (HFR) and High-Resolution (HR) counterparts. Existing methods based on Convolutional Neural Networks (CNNs) achieve visually satisfying results but suffer from slow inference speed due to their heavy architectures. We propose to resolve this issue with a spatial-temporal transformer that naturally incorporates the spatial and temporal super-resolution modules into a single model. Unlike CNN-based methods, we do not use separate building blocks for temporal interpolation and spatial super-resolution; instead, we use a single end-to-end transformer architecture. Specifically, the encoders build a reusable dictionary from the input LFR and LR frames, which the decoders then query to synthesize the HFR and HR frames. Compared with the state-of-the-art TMNet \cite{xu2021temporal}, our network is 60% smaller (4.5M vs 12.3M parameters) and 80% faster (26.2fps vs 14.3fps on 720×576 frames) without sacrificing much performance.
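The abstract describes an encoder-decoder transformer in which the encoders build a reusable feature dictionary from the LFR/LR inputs and the decoders query it to synthesize the HFR/HR frames. Below is a minimal PyTorch sketch of that idea, not the authors' implementation: the module name `STVSRTransformerSketch`, the learned per-frame `query_embed`, and all hyper-parameters (embedding size, head count, upsampling factor) are illustrative assumptions, and the published model's internal design is not reproduced here.

```python
import torch
import torch.nn as nn


class STVSRTransformerSketch(nn.Module):
    """Sketch: encode LFR/LR frames into a reusable feature 'dictionary',
    then let a decoder query it to synthesize every target HFR/HR frame."""

    def __init__(self, embed_dim=96, num_heads=4, num_layers=4, scale=4):
        super().__init__()
        self.scale = scale
        # Per-frame pixel embedding of the low-resolution inputs.
        self.embed = nn.Conv2d(3, embed_dim, kernel_size=3, padding=1)
        enc_layer = nn.TransformerEncoderLayer(embed_dim, num_heads, batch_first=True)
        dec_layer = nn.TransformerDecoderLayer(embed_dim, num_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers)
        self.decoder = nn.TransformerDecoder(dec_layer, num_layers)
        # Pixel-shuffle upsampling from decoded features to RGB output frames.
        self.upsample = nn.Sequential(
            nn.Conv2d(embed_dim, 3 * scale * scale, kernel_size=3, padding=1),
            nn.PixelShuffle(scale),
        )

    def forward(self, lr_frames, query_embed):
        # lr_frames: (B, T_in, 3, H, W) low-frame-rate, low-resolution clip.
        # query_embed: (T_out, embed_dim), one query embedding per output frame.
        b, t, c, h, w = lr_frames.shape
        feats = self.embed(lr_frames.flatten(0, 1))          # (B*T_in, C, H, W)
        tokens = feats.flatten(2).transpose(1, 2)            # (B*T_in, H*W, C)
        tokens = tokens.reshape(b, t * h * w, -1)            # joint space-time tokens
        memory = self.encoder(tokens)                        # reusable dictionary

        t_out = query_embed.shape[0]
        # Broadcast each per-frame query over all spatial positions.
        queries = query_embed[:, None, :].expand(t_out, h * w, -1)
        queries = queries.reshape(1, t_out * h * w, -1).expand(b, -1, -1)
        decoded = self.decoder(queries, memory)              # (B, T_out*H*W, C)

        decoded = decoded.reshape(b * t_out, h, w, -1).permute(0, 3, 1, 2)
        hr = self.upsample(decoded)                          # (B*T_out, 3, sH, sW)
        return hr.reshape(b, t_out, 3, h * self.scale, w * self.scale)


if __name__ == "__main__":
    model = STVSRTransformerSketch()
    lr_clip = torch.randn(1, 4, 3, 32, 32)     # 4 LFR/LR input frames
    frame_queries = torch.randn(7, 96)         # request 7 output frames
    out = model(lr_clip, frame_queries)
    print(out.shape)                           # torch.Size([1, 7, 3, 128, 128])
```

In a trainable version the per-frame queries would be an `nn.Parameter` learned jointly with the network; random tensors are used above only to show the input and output shapes.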
Keywords:  
Real-time Spatial Temporal Transformer
Space-Time Video Super-Resolution
Convolutional Neural Network
Machine Learning
Author(s) Name:  Zhicheng Geng, Luming Liang, Tianyu Ding, Ilya Zharkov
Journal name:  
Conference name:  Computer Vision and Pattern Recognition (CVPR)
Publisher name:  arXiv
DOI:  10.48550/arXiv.2203.14186
Volume Information:  
Paper Link:   https://arxiv.org/abs/2203.14186