13‐3: Invited Paper: Video Frame Interpolation via Structure Motion based Iterative Feature Fusion

Bibliographic Details
Published in: SID International Symposium Digest of Technical Papers, May 2021, Vol. 52 (1), pp. 157-160
Main Authors: Li, Xi; Cao, Meng; Tang, Yingying; Johnston, Scott; Hong, Zhendong; Ma, Huimin; Shan, Jiulong
Format: Article
Language: English
Description
Summary: Video Frame Interpolation synthesizes non-existent images between adjacent frames, with the aim of providing a smooth and consistent visual experience. Two approaches to this challenging task are optical-flow-based and kernel-based methods. In existing works, optical-flow-based methods provide accurate point-to-point motion descriptions; however, they lack constraints on object structure. Kernel-based methods, by contrast, focus on structural alignment, which relies on semantic and appearance features but tends to produce blurry results. Based on these observations, we propose a structure-motion based iterative fusion method. The framework is end-to-end learnable and consists of two stages: first, interpolated frames are synthesized by structure-based and motion-based learning branches, respectively; then, an iterative refinement module fuses them via spatial and temporal feature integration. Inspired by the observation that audiences have different visual preferences for foreground and background objects, we propose, for the first time, to use saliency masks in the evaluation of video frame interpolation. Experimental results on three typical benchmarks show that the proposed method outperforms state-of-the-art methods on all evaluation metrics, even when our models are trained with only one-tenth of the data used by other methods.
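
To make the two-stage design described in the abstract concrete, the following is a minimal, hypothetical PyTorch sketch of the data flow only: a structure-based branch and a motion-based branch each produce a candidate intermediate frame plus features, and an iterative refinement module fuses them over a few steps. The paper's actual branch architectures, channel counts, and number of refinement iterations are not given in this record, so all module names, tensor shapes, and the choice of 3 iterations below are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn


class CandidateBranch(nn.Module):
    """Placeholder for either the structure-based or the motion-based branch (hypothetical)."""

    def __init__(self, channels: int = 32):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv2d(6, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.to_frame = nn.Conv2d(channels, 3, 3, padding=1)

    def forward(self, frame0, frame1):
        # Encode the two input frames jointly, then predict a candidate middle frame.
        feat = self.encode(torch.cat([frame0, frame1], dim=1))
        return self.to_frame(feat), feat  # candidate frame + its feature map


class IterativeFusion(nn.Module):
    """Iteratively refines a fused estimate from the two candidate frames and their features."""

    def __init__(self, channels: int = 32, iterations: int = 3):
        super().__init__()
        self.iterations = iterations
        self.refine = nn.Sequential(
            nn.Conv2d(3 + 2 * channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 3, 3, padding=1),
        )

    def forward(self, frame_s, feat_s, frame_m, feat_m):
        estimate = 0.5 * (frame_s + frame_m)  # simple initial blend of the two candidates
        for _ in range(self.iterations):
            # Predict a residual correction from the current estimate and both feature maps.
            residual = self.refine(torch.cat([estimate, feat_s, feat_m], dim=1))
            estimate = estimate + residual
        return estimate


if __name__ == "__main__":
    f0 = torch.rand(1, 3, 64, 64)   # previous frame
    f1 = torch.rand(1, 3, 64, 64)   # next frame
    structure_branch, motion_branch = CandidateBranch(), CandidateBranch()
    fusion = IterativeFusion()
    mid = fusion(*structure_branch(f0, f1), *motion_branch(f0, f1))
    print(mid.shape)  # torch.Size([1, 3, 64, 64]) -- interpolated frame

The residual-refinement loop is only one plausible reading of "iterative refinement via spatial and temporal feature integration"; the published method should be consulted for the actual fusion operator.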
ISSN: 0097-966X
eISSN: 2168-0159
DOI: 10.1002/sdtp.14635