A multi-embedding neural model for incident video retrieval

Bibliographic Details
Published in: Pattern Recognition, 2022-10, Vol. 130, p. 108807, Article 108807
Main Authors: Chiang, Ting-Hui, Tseng, Yi-Chun, Tseng, Yu-Chee
Format: Article
Language:English
Subjects:
Description
Summary:
• A state-of-the-art approach for video retrieval on several datasets.
• An encoder-decoder for learning the spatial and temporal characteristics of videos.
• Multi-resolution embedding of incident videos for similarity comparison.

Many internet search engines have been developed; however, the retrieval of video clips remains a challenge. This paper considers the retrieval of incident videos, which may contain richer spatial and temporal semantics. We propose an encoder-decoder ConvLSTM model that explores multiple embeddings of a video to facilitate similarity comparison between a pair of videos. The model encodes a video into an embedding that integrates both its spatial information and its temporal semantics. Multiple video embeddings are then generated from coarse- and fine-grained features of a video to capture high- and low-level meanings. Subsequently, a learning-based comparative model is proposed to compare the similarity of two videos based on their embeddings. Extensive evaluations show that our model outperforms state-of-the-art methods on several video retrieval tasks over the FIVR-200K, CC_WEB_VIDEO, and EVVE datasets.
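The abstract's pipeline — embed a video at multiple temporal resolutions, then score a pair of videos by comparing their embeddings — can be illustrated with a minimal sketch. This is not the authors' model: the paper uses an encoder-decoder ConvLSTM to produce the embeddings and a learned comparative model for scoring, whereas the hypothetical functions below stand in with simple mean-pooling over temporal segments and an averaged cosine similarity.

```python
import numpy as np

def multi_resolution_embeddings(frames, scales=(1, 4)):
    """Illustrative stand-in for the paper's encoder: pool per-frame
    feature vectors (shape (T, D)) at several temporal resolutions,
    yielding one coarse whole-video embedding (scale 1) and several
    fine-grained segment embeddings (scale 4)."""
    embeddings = []
    for s in scales:
        # Split the frame sequence into `s` temporal segments and
        # mean-pool each segment into a single D-dimensional vector.
        for segment in np.array_split(frames, s):
            embeddings.append(segment.mean(axis=0))
    return np.stack(embeddings)  # shape: (sum(scales), D)

def video_similarity(emb_a, emb_b):
    """Illustrative stand-in for the learned comparative model:
    cosine similarity between corresponding embeddings, averaged
    across resolutions."""
    a = emb_a / np.linalg.norm(emb_a, axis=1, keepdims=True)
    b = emb_b / np.linalg.norm(emb_b, axis=1, keepdims=True)
    return float((a * b).sum(axis=1).mean())
```

Under this sketch, an identical pair of videos scores 1.0, and retrieval would rank database videos by their similarity to the query's embedding set.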
ISSN:0031-3203
1873-5142
DOI:10.1016/j.patcog.2022.108807