
A long video caption generation algorithm for big video data retrieval

Bibliographic Details
Published in: Future Generation Computer Systems, 2019-04, Vol. 93, pp. 583-595
Main Authors: Ding, Songtao; Qu, Shiru; Xi, Yuling; Wan, Shaohua
Format: Article
Language: English
Description
Summary: Videos captured by people are often tied to important moments of their lives, but with the era of big data, the time required to retrieve and watch them can be daunting. In this paper, novel techniques are proposed for long video segmentation, which can effectively shorten the retrieval time. The motion extent of a long video is detected by an improved spatio-temporal interest point (STIP) detection algorithm. After that, superframe segmentation of the filtered long video is performed to obtain the interesting clips of the long video. For keyframe selection, regions of interest are constructed from the STIPs already obtained on the video clips, and saliency detection over these regions of interest is used to screen out video keyframes. Finally, we generate the video captions by adding attention vectors to a traditional LSTM. Our method is benchmarked on the VideoSet dataset and evaluated with BLEU, METEOR, and ROUGE.
•A long video segmentation algorithm is proposed based on the detection of STIPs.
•A dynamic clustering algorithm is adopted to construct the interesting segments.
•We detect keyframes by directly constructing the region of interest.
•Our LSTM model is also guided by an attention mechanism.
•We provide experimental results for the different stages and achieve good performance.
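
The final step described in the abstract, generating captions by adding attention vectors to a traditional LSTM, can be illustrated with the minimal PyTorch sketch below. It is not the authors' exact architecture: the module name AttentionLSTMCaptioner, the additive (Bahdanau-style) attention form, and all dimensions (feature size 2048, embedding 256, hidden 512) are assumptions chosen only to show how a per-step attention vector over keyframe features can be fed into an LSTM decoder.

import torch
import torch.nn as nn


class AttentionLSTMCaptioner(nn.Module):
    """Sketch of an LSTM caption decoder with an attention vector over keyframe features."""

    def __init__(self, vocab_size, feat_dim=2048, embed_dim=256, hidden_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # Additive attention over the selected keyframes (assumed form, not from the paper).
        self.attn_feat = nn.Linear(feat_dim, hidden_dim)
        self.attn_hidden = nn.Linear(hidden_dim, hidden_dim)
        self.attn_score = nn.Linear(hidden_dim, 1)
        # The LSTM consumes the word embedding concatenated with the attention vector.
        self.lstm = nn.LSTMCell(embed_dim + feat_dim, hidden_dim)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def attend(self, feats, h):
        # feats: (batch, n_keyframes, feat_dim); h: (batch, hidden_dim)
        scores = self.attn_score(
            torch.tanh(self.attn_feat(feats) + self.attn_hidden(h).unsqueeze(1))
        )
        weights = torch.softmax(scores, dim=1)      # attention weights per keyframe
        return (weights * feats).sum(dim=1)         # weighted attention vector

    def forward(self, feats, captions):
        # Teacher-forced decoding: one attention vector is recomputed at every time step.
        batch, seq_len = captions.shape
        h = feats.new_zeros(batch, self.lstm.hidden_size)
        c = feats.new_zeros(batch, self.lstm.hidden_size)
        logits = []
        for t in range(seq_len):
            ctx = self.attend(feats, h)
            x = torch.cat([self.embed(captions[:, t]), ctx], dim=1)
            h, c = self.lstm(x, (h, c))
            logits.append(self.out(h))
        return torch.stack(logits, dim=1)            # (batch, seq_len, vocab_size)


# Usage: assumed CNN features for 8 keyframes of 2 clips, with dummy caption tokens.
feats = torch.randn(2, 8, 2048)
caps = torch.randint(0, 1000, (2, 12))
model = AttentionLSTMCaptioner(vocab_size=1000)
print(model(feats, caps).shape)  # torch.Size([2, 12, 1000])
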
ISSN: 0167-739X, 1872-7115
DOI: 10.1016/j.future.2018.10.054