Visual question answering model based on visual relationship detection

Bibliographic Details
Published in: Signal Processing: Image Communication, 2020-02, Vol. 80, p. 115648, Article 115648
Main Authors: Xi, Yuling, Zhang, Yanning, Ding, Songtao, Wan, Shaohua
Format: Article
Language:English
Description
Summary: Visual question answering (VQA) is a learning task spanning two major fields, computer vision and natural language processing. The development of deep learning technology has driven progress in this research area. Although question answering models have advanced considerably, VQA accuracy remains low, mainly because current model structures are relatively simple, the models' attention mechanisms deviate from human attention, and the models lack higher-level logical reasoning ability. In response to these problems, we propose a VQA model based on multi-objective visual relationship detection. First, appearance features replace the image features of the original objects, and the appearance model is extended using the principle of word-vector similarity. The appearance features and relationship predicates are then embedded into the word-vector space and represented as fixed-length vectors. Finally, the element-wise concatenation of the image feature and the question vector is fed into a classifier to generate the output answer. Our method is benchmarked on the DAQUAR data set and evaluated with the Acc, WUPS@0.0, and WUPS@0.9 metrics.
•Judgment of the interrelations between objects is added to traditional VQA visual tasks.
•The principle of word-vector similarity is introduced for judging these interrelations.
•An attention mechanism guided by question words is added to direct attention to specific image regions.
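The fusion step the abstract describes (pooling word-vector representations of appearance features and relation predicates into a fixed-length image vector, concatenating it with the question vector, and feeding the result to an answer classifier) can be sketched as below. All dimensions, the mean-pooling choice, and the linear softmax classifier are assumptions for illustration; the paper's actual architecture is not detailed in this record.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes (not specified in the abstract).
D_WORD = 300     # word-vector size for appearance features / relation predicates
D_Q = 300        # question-vector size
N_ANSWERS = 10   # answer vocabulary size

def embed_fixed_length(word_vectors):
    """Pool a variable number of word vectors (appearance features and
    relationship predicates) into one fixed-length image representation.
    Mean pooling is an assumed, simple choice."""
    return np.mean(word_vectors, axis=0)

def classify(image_feat, question_vec, W, b):
    """Concatenate the image feature and the question vector along the
    feature axis, then score each candidate answer with a softmax."""
    fused = np.concatenate([image_feat, question_vec])
    logits = W @ fused + b
    exp = np.exp(logits - logits.max())   # numerically stable softmax
    return exp / exp.sum()

# Toy usage with random placeholder vectors.
words = rng.normal(size=(5, D_WORD))       # 5 detected objects/predicates
image_feat = embed_fixed_length(words)     # fixed-length image vector
question_vec = rng.normal(size=D_Q)
W = rng.normal(size=(N_ANSWERS, D_WORD + D_Q)) * 0.01
b = np.zeros(N_ANSWERS)
probs = classify(image_feat, question_vec, W, b)  # one probability per answer
```

In a trained model, `W` and `b` would be learned and the word vectors would come from a pretrained embedding; here they are random placeholders so the data flow can be run end to end.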
ISSN:0923-5965
1879-2677
DOI:10.1016/j.image.2019.115648