Weakly supervised temporal action localization with actionness-guided false positive suppression
Published in: Neural Networks, 2024-07, Vol. 175, Article 106307
Main Authors: , ,
Format: Article
Language: English
Summary: Weakly supervised temporal action localization aims to locate the temporal boundaries of action instances in untrimmed videos using only video-level labels and to assign each instance its action category. It is generally solved by a pipeline called "localization-by-classification", which finds action instances by classifying video snippets. However, because this approach optimizes a video-level classification objective, the generated activation sequences often suffer interference from class-related scenes, producing a large number of false positives in the predictions. Many existing works treat the background as an independent category, forcing models to learn to distinguish background snippets; yet under weakly supervised conditions the background information is fuzzy and uncertain, which makes this approach extremely difficult. To alleviate the impact of false positives, we propose a new actionness-guided false positive suppression framework that suppresses false positive backgrounds without introducing a background category. First, we propose a self-training actionness branch to learn class-agnostic actionness, which minimizes the interference of class-related scene information by ignoring the video labels. Second, we propose a false positive suppression module to mine false positive snippets and suppress them. Finally, we introduce a foreground enhancement module, which guides the model to learn the foreground with the help of an attention mechanism and the class-agnostic actionness. We conduct extensive experiments on three benchmarks (THUMOS14, ActivityNet1.2, and ActivityNet1.3). The results demonstrate the effectiveness of our method in suppressing false positives, and it achieves state-of-the-art performance. Code: https://github.com/lizhilin-ustc/AFPS
ISSN: 0893-6080, 1879-2782
DOI: 10.1016/j.neunet.2024.106307
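
The summary above describes a "localization-by-classification" pipeline in which per-snippet class activations are pooled into video-level scores, with a class-agnostic actionness signal used to suppress background false positives. The sketch below is a minimal PyTorch illustration of that general pipeline, not the authors' implementation (the official code is at the repository linked in the summary); the layer widths, the sigmoid actionness head, the multiplicative suppression of the class activation sequence, and the top-k temporal pooling are all illustrative assumptions.

```python
import torch
import torch.nn as nn


class WTALSketch(nn.Module):
    """Illustrative weakly supervised temporal action localization backbone.

    Produces a class activation sequence (CAS) and a class-agnostic
    actionness score per snippet, then pools to video-level class scores.
    """

    def __init__(self, feat_dim=2048, num_classes=20):
        super().__init__()
        # Temporal embedding over pre-extracted snippet features (e.g. I3D RGB+flow).
        self.embed = nn.Sequential(
            nn.Conv1d(feat_dim, 512, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        # Per-snippet class logits (the CAS).
        self.classifier = nn.Conv1d(512, num_classes, kernel_size=1)
        # Class-agnostic actionness: one score per snippet, no class labels involved.
        self.actionness = nn.Conv1d(512, 1, kernel_size=1)

    def forward(self, x, k=8):
        # x: (batch, T, feat_dim) snippet features.
        x = x.transpose(1, 2)                                        # (batch, feat_dim, T)
        emb = self.embed(x)                                          # (batch, 512, T)
        cas = self.classifier(emb).transpose(1, 2)                   # (batch, T, num_classes)
        act = torch.sigmoid(self.actionness(emb)).transpose(1, 2)    # (batch, T, 1)
        # Suppress likely background false positives by modulating the CAS
        # with the class-agnostic actionness (a simple stand-in for the
        # paper's suppression and foreground-enhancement modules).
        cas_fg = cas * act
        # Top-k temporal pooling yields video-level class scores, which are
        # what the video-level classification loss supervises.
        topk = torch.topk(cas_fg, k=min(k, cas_fg.shape[1]), dim=1).values
        video_scores = topk.mean(dim=1)                              # (batch, num_classes)
        return video_scores, cas_fg, act


# Example: one video of 100 snippets with 2048-d features, 20 classes (THUMOS14 setting).
model = WTALSketch(feat_dim=2048, num_classes=20)
scores, cas, actionness = model(torch.randn(1, 100, 2048))
```

At inference time, such pipelines typically threshold the actionness-modulated CAS along the temporal axis and group consecutive above-threshold snippets into candidate action proposals; the specific thresholding and scoring rules here would again be assumptions rather than the method of the paper.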