Predicting Content Similarity via Multimodal Modeling for Video-In-Video Advertising
Published in: IEEE Transactions on Circuits and Systems for Video Technology, 2021-02, Vol. 31, No. 2, pp. 569-581
Main Authors: , ,
Format: Article
Language: English
Summary: The rapid development of mobile devices has led to explosive growth in videos and online platforms, creating great demand for online advertising in videos. Existing advertising methods often select a random time point as the insertion position, so the video content at that point is likely unrelated to the ad content, resulting in an unsatisfactory user experience. Whereas previous works have neglected the rich semantics and multimodal information in video advertising, we present an innovative method for video-in-video advertising using multimodal modeling. First, different pre-trained models are used to extract multimodal representations. Then, through multimodal modeling, we learn the complementarity among the different representations and obtain a unified video-level description. Finally, the unified representations of ads and videos are used to find the best-matching result for each advertisement. Our method emphasizes the content similarity between ad and video, which makes the transition between video and ad more natural. Comprehensive experiments with both objective and subjective evaluations demonstrate the effectiveness and user-friendliness of the proposed video-in-video advertising framework. (See the code sketch after this record.)
ISSN: 1051-8215, 1558-2205
DOI: 10.1109/TCSVT.2020.2979928
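The summary above outlines a three-step pipeline: extract per-modality features with pre-trained models, fuse them into a unified video-level representation, and match each ad to the most content-similar video. The sketch below is a minimal illustration of that extract-fuse-match flow, not the paper's actual method: the abstract does not specify the fusion model or similarity measure, so the mean-pooling, concatenation, and cosine-similarity steps here are assumed stand-ins, and all function and variable names are hypothetical.

```python
import numpy as np

def pool_modality(features: np.ndarray) -> np.ndarray:
    """Mean-pool frame-level features (num_frames, dim) into one modality vector."""
    return features.mean(axis=0)

def unify(modalities: list) -> np.ndarray:
    """Fuse pooled modality vectors into a single unified, L2-normalized
    video-level representation. Plain concatenation stands in for the
    paper's learned multimodal fusion (an assumption, not the method)."""
    v = np.concatenate([pool_modality(m) for m in modalities])
    return v / (np.linalg.norm(v) + 1e-12)

def best_match(ad: np.ndarray, videos: dict) -> str:
    """Return the video ID most similar to the ad. Both representations
    are unit-norm, so a dot product equals cosine similarity."""
    return max(videos, key=lambda vid: float(ad @ videos[vid]))

# Toy usage with random stand-in features in place of real pre-trained
# extractor outputs (visual 512-d per frame, audio 128-d, text 300-d).
rng = np.random.default_rng(0)
def fake_features():
    return [rng.normal(size=(30, 512)),   # visual frames
            rng.normal(size=(30, 128)),   # audio frames
            rng.normal(size=(1, 300))]    # text embedding

videos = {f"video_{i}": unify(fake_features()) for i in range(5)}
ad = unify(fake_features())
print("best insertion target:", best_match(ad, videos))
```

Because the unified vectors are normalized, ranking by dot product is equivalent to ranking by cosine similarity; in practice the paper's learned fusion would replace the concatenation step, but the matching logic, selecting the video whose unified representation is closest to the ad's, stays the same.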