Learning deep convolutional descriptor aggregation for efficient visual tracking
Published in: Neural Computing and Applications, 2022-03, Vol. 34 (5), pp. 3745–3765
Format: Article
Language: English
Summary: Visual trackers have achieved high performance by using deep features, but many limitations remain. Online trackers run slowly when deep features are used for parameter updating, and deep trackers trained offline are data-hungry. To meet these challenges, our work mines the target representation capability of a pre-trained model and presents deep convolutional descriptor aggregation (DCDA) for visual tracking. Based on spatial and semantic priors, we propose an edge-aware selection (EAS) method and a central-aware selection (CAS) method to aggregate accuracy-aware and robustness-aware features. To make full use of scene context, our method is derived from one-shot learning by designing a dedicated regression process that can predict a discriminative model in a few iterations. By exploiting robustness-aware feature aggregation, accuracy-aware feature aggregation, and discriminative regression, our DCDA with a Siamese tracking architecture not only enhances target prediction capacity but also achieves low-cost reuse of the pre-trained model. Comprehensive experiments on OTB-100, VOT2016, VOT2017, VOT2020, NFS30, and NFS240 show that our DCDA tracker achieves state-of-the-art performance at a high running speed of 65 FPS. The source code and all experimental results of this work will be made public at https://github.com/Gitlyz007/DCDA_Tracker.
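
The core idea the abstract describes — reusing a frozen pre-trained backbone, aggregating shallow (accuracy-aware) and deep (robustness-aware) descriptors under edge/central spatial priors, and localizing the target by Siamese cross-correlation — can be sketched as below. This is a minimal illustrative sketch, not the authors' method or released code (see the repository linked above); the VGG-16 backbone, layer indices, Gaussian spatial masks, and all names here are assumptions.

```python
# Illustrative sketch: aggregate shallow (accuracy-oriented) and deep
# (robustness-oriented) descriptors from a frozen pre-trained backbone,
# then localize the target by cross-correlation, Siamese-style.
# Backbone choice, layer splits, and mask shapes are assumptions.
import torch
import torch.nn.functional as F
import torchvision

def spatial_mask(h, w, center=True, device="cpu"):
    """Soft spatial prior: weight the center (CAS-like) or the borders
    (EAS-like). The Gaussian form is an assumption for illustration."""
    ys = torch.linspace(-1, 1, h, device=device)
    xs = torch.linspace(-1, 1, w, device=device)
    yy, xx = torch.meshgrid(ys, xs, indexing="ij")
    g = torch.exp(-(xx**2 + yy**2) / 0.5)  # peaked at the center
    return g if center else (1.0 - g)      # inverted for edge emphasis

class DescriptorAggregator(torch.nn.Module):
    def __init__(self):
        super().__init__()
        vgg = torchvision.models.vgg16(weights="IMAGENET1K_V1").features.eval()
        for p in vgg.parameters():          # reuse the model, never fine-tune
            p.requires_grad_(False)
        self.shallow = vgg[:16]             # through conv3_3: fine spatial detail
        self.deep = vgg[16:23]              # through conv4_3: semantic invariance

    def forward(self, x):
        f_shallow = self.shallow(x)
        f_deep = self.deep(f_shallow)
        f_deep = F.interpolate(f_deep, size=f_shallow.shape[-2:],
                               mode="bilinear", align_corners=False)
        h, w = f_shallow.shape[-2:]
        edge = spatial_mask(h, w, center=False, device=x.device)  # EAS-like
        cent = spatial_mask(h, w, center=True, device=x.device)   # CAS-like
        # Weight accuracy-aware (shallow/edge) and robustness-aware
        # (deep/center) descriptors, then stack them into one representation.
        return torch.cat([f_shallow * edge, f_deep * cent], dim=1)

def siamese_response(template_feat, search_feat):
    """Cross-correlate the template descriptor with the search-region
    features; the response peak indicates the target location."""
    return F.conv2d(search_feat, template_feat)

if __name__ == "__main__":
    agg = DescriptorAggregator()
    z = torch.randn(1, 3, 127, 127)         # template patch
    x = torch.randn(1, 3, 255, 255)         # search region
    response = siamese_response(agg(z), agg(x))
    print(response.shape)                   # 1 x 1 x H' x W' score map
```

Note the design choice this mirrors: because the backbone stays frozen and only the aggregation and correlation run per frame, the tracker reuses the pre-trained model at low cost, which is consistent with the high running speed the abstract reports.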
ISSN: 0941-0643, 1433-3058
DOI: 10.1007/s00521-021-06638-8