
Content Temporal Relation Network for temporal action proposal generation

Bibliographic Details
Published in: Pattern Recognition, 2024-05, Vol. 149, Article 110245
Main Authors: Gan, Ming-Gang; Zhang, Yan
Format: Article
Language:English
Description
Summary: Temporal action proposal generation is an essential step in untrimmed video analysis and has gained much attention from academia. However, most prior works predict the confidence score of each proposal separately and neglect the relations between proposals, limiting their performance. In this work, we design a novel Content Temporal Relation Network (CTRNet) to generate temporal action proposals by exploring the content and temporal semantic relations between proposals simultaneously. Specifically, we design a proposal feature map generation layer to convert the temporal semantic relations of proposals into spatial relations. Based on the proposal feature map, we propose a content-temporal relation module, which applies a novel adaptive-dilated convolution to model the temporal semantic relations between proposals and a content-adaptive convolution operation to explore the content semantic relations between proposals. By considering both the temporal and content semantic relations between proposals, CTRNet learns discriminative proposal features that improve performance. Extensive experiments are performed on two mainstream temporal action detection datasets, and CTRNet significantly outperforms the previous state-of-the-art methods. The code is available at https://github.com/YanZhang-bit/CTRNet.

• Our method is the first framework to exploit the content and temporal semantic relations between proposals to generate temporal action proposals.
• We propose a novel adaptive-dilated convolution, whose dilation rate adapts to the spatial position, to model the temporal semantic relations.
• We adopt an attention mechanism to design a content-adaptive convolution operation that models the content semantic relations between proposals.
• Our method outperforms the state-of-the-art methods on the THUMOS14 dataset.
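The key idea behind the adaptive-dilated convolution described above is that the dilation rate is not fixed but varies with spatial position on the proposal feature map. The paper's actual operator and dilation rule are not reproduced here; the following is only a minimal 1-D sketch with a hypothetical position-dependent dilation function, to illustrate the mechanism:

```python
import numpy as np

def adaptive_dilated_conv1d(x, kernel, dilation_fn):
    """1-D convolution whose dilation rate varies per output position.

    x           : (T,) input sequence (e.g. one row of a proposal feature map)
    kernel      : (K,) filter weights, K odd
    dilation_fn : maps output position i -> dilation rate (hypothetical rule;
                  the actual rule in CTRNet is defined in the paper)
    """
    T, K = len(x), len(kernel)
    half = K // 2
    y = np.zeros(T)
    for i in range(T):
        d = dilation_fn(i)  # position-dependent dilation rate
        for k in range(-half, half + 1):
            j = i + k * d   # sample neighbors spaced d apart
            if 0 <= j < T:  # zero padding outside the sequence
                y[i] += kernel[k + half] * x[j]
    return y
```

For example, with `x = np.arange(8.0)`, a uniform kernel `np.ones(3)`, and the hypothetical rule `lambda i: 1 + i // 4`, early positions use dilation 1 while later positions use dilation 2, so each output aggregates a receptive field whose span grows with position.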
ISSN: 0031-3203, 1873-5142
DOI:10.1016/j.patcog.2023.110245