KalmanFlow 2.0: Efficient Video Optical Flow Estimation via Context-Aware Kalman Filtering

Bibliographic Details
Published in: IEEE Transactions on Image Processing, 2019-09, Vol. 28 (9), p. 4233-4246
Main Authors: Bao, Wenbo; Zhang, Xiaoyun; Chen, Li; Gao, Zhiyong
Format: Article
Language: English
Description
Summary: Recent studies on optical flow typically focus on estimating a single flow field between a pair of images but pay little attention to the multiple consecutive flow fields in a longer video sequence. In this paper, we propose an efficient video optical flow estimation method that exploits temporal coherence and context dynamics within a Kalman filtering system. In this system, each pixel's motion flow is first formulated as a second-order time-variant state vector and then optimally estimated, according to the measurement and system noise levels within the system, by a maximum a posteriori criterion. Specifically, we evaluate the measurement noise according to the flow's temporal derivative, spatial gradient, and warping error. We determine the system noise based on the similarity of contextual information, which is represented by compact features learned by pre-trained convolutional neural networks. The context-aware Kalman filtering improves the robustness of our method against abrupt lighting changes and occlusion/dis-occlusion in complicated scenes. The experimental results and analyses on the MPI Sintel, Monkaa, and Driving video datasets demonstrate that the proposed method performs favorably against state-of-the-art approaches.
ISSN: 1057-7149, 1941-0042
DOI: 10.1109/TIP.2019.2903656
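
The summary describes a per-pixel Kalman filter whose state stacks the flow vector and its temporal derivative (a second-order model), with the measurement noise driven by the flow's temporal derivative, spatial gradient, and warping error, and the system noise driven by the similarity of CNN context features. The sketch below illustrates one generic predict/update cycle of such a filter; the function name kalman_flow_step, the transition and observation matrices, and the way R and Q are supplied are illustrative assumptions, not the paper's actual formulation.

# Minimal per-pixel Kalman filter sketch in the spirit of the abstract.
# The state is [u, v, du/dt, dv/dt]; R and Q are assumed to be computed
# elsewhere from the cues named in the abstract and passed in per frame.
import numpy as np

def kalman_flow_step(x_prev, P_prev, z, R, Q, dt=1.0):
    """One predict/update cycle for a single pixel.

    x_prev : (4,) prior state [u, v, du/dt, dv/dt]
    P_prev : (4, 4) prior state covariance
    z      : (2,) measured flow for this pixel (e.g. from a two-frame estimator)
    R      : (2, 2) measurement noise covariance (larger where temporal
             derivative, spatial gradient, or warping error suggest an
             unreliable measurement)
    Q      : (4, 4) system noise covariance (larger where contextual CNN
             features indicate the scene content has changed)
    """
    # Constant-velocity transition for the second-order state.
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1,  0],
                  [0, 0, 0,  1]], dtype=float)
    # Only the flow itself is observed, not its temporal derivative.
    H = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0]], dtype=float)

    # Predict step.
    x_pred = F @ x_prev
    P_pred = F @ P_prev @ F.T + Q

    # Update step: the MAP estimate under Gaussian measurement/system noise.
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(4) - K @ H) @ P_pred
    return x_new, P_new

Scaling R up where the warping error or gradients are large makes the filter trust the temporal prediction, while scaling Q up where the context features differ makes it trust the new measurement; this mirrors, in simplified form, the noise-adaptation idea stated in the summary.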