Collaborative Local-Global Learning for Temporal Action Proposal

Bibliographic Details
Published in: ACM Transactions on Intelligent Systems and Technology, 2021-12, Vol. 12 (5), p. 1-14
Main Authors: Zhu, Yisheng, Han, Hu, Liu, Guangcan, Liu, Qingshan
Format: Article
Language:English
Description
Summary: Temporal action proposal generation is an essential and challenging task in video understanding, which aims to locate the temporal intervals that are likely to contain actions of interest. Although great progress has been made, the problem is still far from solved. In particular, prevalent methods handle well only the local (i.e., short-term) dependencies among adjacent frames but are generally powerless against the global (i.e., long-term) dependencies between distant frames. To tackle this issue, we propose CLGNet, a novel Collaborative Local-Global Learning Network for temporal action proposal. The main body of CLGNet is an integration of a Temporal Convolution Network and a Bidirectional Long Short-Term Memory, in which the Temporal Convolution Network is responsible for the local dependencies while the Bidirectional Long Short-Term Memory handles the global dependencies. Furthermore, an attention mechanism called the background suppression module is designed to guide the model to focus more on the actions. Extensive experiments on two benchmark datasets, THUMOS'14 and ActivityNet-1.3, show that the proposed method outperforms state-of-the-art methods, demonstrating its strong capability to model actions of varying temporal durations.
ISSN: 2157-6904; 2157-6912
DOI: 10.1145/3466181
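
Illustrative sketch (not part of the record): the abstract describes a two-branch design in which a Temporal Convolution Network captures local dependencies among adjacent frames, a Bidirectional LSTM captures global dependencies between distant frames, and an attention-style background suppression module emphasizes action frames. The PyTorch sketch below shows one minimal way such a block could be wired up; the class name, layer sizes, concatenation-based fusion, sigmoid gate, and three-way per-frame output head are assumptions made for illustration, not the authors' released implementation.

# Minimal sketch of a local-global block in the spirit of the abstract:
# a TCN branch (local), a BiLSTM branch (global), a background-suppression
# gate, and per-frame proposal scores. Dimensions and fusion are assumed.
import torch
import torch.nn as nn

class LocalGlobalSketch(nn.Module):
    def __init__(self, in_dim=400, hidden=256):
        super().__init__()
        # Local branch: 1-D temporal convolutions over adjacent snippets.
        self.tcn = nn.Sequential(
            nn.Conv1d(in_dim, hidden, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        # Global branch: bidirectional LSTM over the whole sequence.
        self.bilstm = nn.LSTM(in_dim, hidden // 2, batch_first=True,
                              bidirectional=True)
        # Background-suppression gate: per-frame weight in [0, 1].
        self.gate = nn.Sequential(nn.Linear(2 * hidden, 1), nn.Sigmoid())
        # Per-frame heads (e.g., start / end / actionness probabilities).
        self.head = nn.Linear(2 * hidden, 3)

    def forward(self, x):                                   # x: (B, T, in_dim)
        local = self.tcn(x.transpose(1, 2)).transpose(1, 2)  # (B, T, hidden)
        global_feats, _ = self.bilstm(x)                      # (B, T, hidden)
        fused = torch.cat([local, global_feats], dim=-1)      # (B, T, 2*hidden)
        fused = fused * self.gate(fused)      # down-weight likely background
        return torch.sigmoid(self.head(fused))  # (B, T, 3) per-frame scores

if __name__ == "__main__":
    feats = torch.randn(2, 100, 400)          # 2 clips, 100 snippets, 400-d
    print(LocalGlobalSketch()(feats).shape)   # torch.Size([2, 100, 3])

In this sketch the two branches see the same snippet features and are fused by concatenation; the gated, fused features would then feed whatever boundary-matching or proposal-scoring stage a full pipeline uses, which is outside the scope of the abstract.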