Visual question answering: A survey of methods and datasets

Bibliographic Details
Published in: Computer Vision and Image Understanding, 2017-10, Vol. 163, pp. 21-40
Main Authors: Wu, Qi, Teney, Damien, Wang, Peng, Shen, Chunhua, Dick, Anthony, van den Hengel, Anton
Format: Article
Language: English
Description
Summary:
• A comprehensive review of the state of the art on the emerging task of visual question answering
• A review of the growing number of datasets, highlighting their distinct characteristics
• An in-depth analysis of the questions/answers provided in the recently released Visual Genome dataset

Visual Question Answering (VQA) is a challenging task that has received increasing attention from both the computer vision and the natural language processing communities. Given an image and a question in natural language, it requires reasoning over visual elements of the image and general knowledge to infer the correct answer. In the first part of this survey, we examine the state of the art by comparing modern approaches to the problem. We classify methods by the mechanism they use to connect the visual and textual modalities. In particular, we examine the common approach of combining convolutional and recurrent neural networks to map images and questions to a common feature space. We also discuss memory-augmented and modular architectures that interface with structured knowledge bases. In the second part of this survey, we review the datasets available for training and evaluating VQA systems. The various datasets contain questions at different levels of complexity, which require different capabilities and types of reasoning. We examine in depth the question/answer pairs from the Visual Genome project, and evaluate the relevance of the structured annotations of images with scene graphs for VQA. Finally, we discuss promising future directions for the field, in particular the connection to structured knowledge bases and the use of natural language processing models.
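The "common approach" described above, mapping the image and the question into a common feature space before predicting an answer, can be illustrated with a minimal sketch. The PyTorch snippet below is an illustrative assumption, not any specific model surveyed in the paper: the class name JointEmbeddingVQA, all layer sizes, and the elementwise-product fusion are hypothetical choices made only to show the general CNN+RNN joint-embedding pattern.

```python
# Minimal sketch of the joint-embedding VQA pattern: an RNN (LSTM)
# encodes the question, precomputed CNN image features are projected
# into the same space, and the fused vector is classified over a
# fixed answer vocabulary. All names and sizes are illustrative.
import torch
import torch.nn as nn

class JointEmbeddingVQA(nn.Module):
    def __init__(self, vocab_size, num_answers, embed_dim=300,
                 hidden_dim=512, image_feat_dim=2048):
        super().__init__()
        # Question branch: word embeddings -> LSTM -> final hidden state.
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        # Image branch: project pooled CNN features (e.g. from a ResNet)
        # into the same hidden_dim space as the question encoding.
        self.image_proj = nn.Linear(image_feat_dim, hidden_dim)
        # Classifier over a fixed answer vocabulary.
        self.classifier = nn.Linear(hidden_dim, num_answers)

    def forward(self, image_feats, question_tokens):
        # Encode the question; keep the last LSTM hidden state.
        _, (h_n, _) = self.lstm(self.embed(question_tokens))
        q = h_n[-1]                                   # (batch, hidden_dim)
        v = torch.tanh(self.image_proj(image_feats))  # (batch, hidden_dim)
        # Fuse the modalities with an elementwise product, one common choice.
        fused = q * v
        return self.classifier(fused)                 # answer logits

# Usage with random stand-in data:
model = JointEmbeddingVQA(vocab_size=10000, num_answers=1000)
img = torch.randn(4, 2048)             # pooled CNN image features
qs = torch.randint(0, 10000, (4, 12))  # tokenized questions
logits = model(img, qs)                # shape (4, 1000)
```

Memory-augmented and modular architectures, also discussed in the survey, replace the single fusion step above with external memories or composable reasoning modules; the sketch only covers the basic joint-embedding family.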
ISSN: 1077-3142, 1090-235X
DOI: 10.1016/j.cviu.2017.05.001