
LO2net: Global–Local Semantics Coupled Network for scene-specific video foreground extraction with less supervision


Bibliographic Details
Published in: Pattern Analysis and Applications (PAA), 2023, Vol. 26 (4), p. 1671–1683
Main Authors: Ruan, Tao, Wei, Shikui, Zhao, Yao, Guo, Baoqing, Yu, Zujun
Format: Article
Language:English
Summary: Video foreground extraction has been widely applied in many quantitative fields and attracts great attention worldwide. Nevertheless, the performance of such a method can be easily degraded by cluttered environments. To tackle this problem, global semantics (e.g., background statistics) and local semantics (e.g., boundary areas) can be utilized to better distinguish foreground objects from complex backgrounds. In this paper, we investigate how to effectively leverage these two kinds of semantics. For global semantics, two convolutional modules are designed to take advantage of data-level background priors and feature-level multi-scale characteristics, respectively; for local semantics, a third module is put forward that is aware of the semantic edges between foreground and background. The three modules are intertwined with each other, yielding a simple yet effective deep framework named the gLObal–LOcal Semantics Coupled Network (LO2Net), which is end-to-end trainable in a scene-specific manner. Benefiting from the LO2Net, we achieve superior performance on multiple public datasets while requiring less supervision than several state-of-the-art methods.
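The abstract's core idea of coupling a global background prior with local boundary cues can be illustrated, at a very high level, by a classical (non-deep) analogue. The sketch below is an assumption-laden toy, not the paper's LO2Net: a temporal-median background model stands in for the data-level background prior, and a gradient-based edge map stands in for the edge-aware local module. All function names and thresholds are hypothetical.

```python
import numpy as np

def extract_foreground(frames, current, diff_thresh=0.2, edge_thresh=0.1):
    """Toy analogue of coupling global and local semantics.

    Global semantics: a per-pixel background model (temporal median of
    `frames`) serves as a data-level background prior.
    Local semantics: a gradient-magnitude edge map of the current frame
    refines the mask near foreground/background boundaries.
    """
    bg = np.median(frames, axis=0)                 # global background statistics
    mask = np.abs(current - bg) > diff_thresh      # coarse foreground mask

    # Local cue: gradient magnitude of the current frame.
    gy, gx = np.gradient(current)
    edges = np.hypot(gx, gy) > edge_thresh

    # Couple the two cues: keep coarse-foreground pixels, plus pixels that
    # are edge-like AND adjacent to the coarse mask (boundary refinement).
    near = np.zeros_like(mask)
    near[1:, :] |= mask[:-1, :]
    near[:-1, :] |= mask[1:, :]
    near[:, 1:] |= mask[:, :-1]
    near[:, :-1] |= mask[:, 1:]
    return mask | (edges & near)
```

In LO2Net itself these roles are played by learned convolutional modules trained end-to-end per scene; the sketch only shows why combining the two kinds of semantics helps: the global prior finds foreground regions, while the local cue sharpens their boundaries.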
ISSN:1433-7541
1433-755X
DOI:10.1007/s10044-023-01193-5