Achieving Human Parity on Visual Question Answering

Bibliographic Details
Published in: arXiv.org, 2021-11
Main Authors: Yan, Ming, Xu, Haiyang, Li, Chenliang, Tian, Junfeng, Bi, Bin, Wang, Wei, Chen, Weihua, Xu, Xianzhe, Wang, Fan, Cao, Zheng, Zhang, Zhicheng, Zhang, Qiyu, Zhang, Ji, Huang, Songfang, Huang, Fei, Si, Luo, Jin, Rong
Format: Article
Language: English
Summary: The Visual Question Answering (VQA) task combines visual and language analysis to answer a textual question about an image. It has been a popular research topic with a growing number of real-world applications over the last decade. This paper describes our recent research on AliceMind-MMU (ALIbaba's Collection of Encoder-decoders from Machine IntelligeNce lab of Damo academy - MultiMedia Understanding), which obtains results similar to, or even slightly better than, those of humans on VQA. This is achieved by systematically improving the VQA pipeline, including: (1) pre-training with comprehensive visual and textual feature representations; (2) effective cross-modal interaction with learning to attend; and (3) a novel knowledge mining framework with specialized expert modules for the complex VQA task. Treating different types of visual questions with the corresponding expertise plays an important role in boosting the performance of our VQA architecture up to the human level. An extensive set of experiments and analyses demonstrates the effectiveness of this work.
ISSN: 2331-8422
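
For a concrete picture of the cross-modal "learning to attend" and per-question-type expert modules mentioned in the summary, the sketch below pairs text-to-image cross-attention with a small set of question-type-specific answer heads. This is a minimal illustration, not the AliceMind-MMU implementation: the module names, dimensions, question-type labels, and answer-vocabulary size are all assumptions made for the example.

```python
# Minimal, illustrative sketch of cross-modal attention plus per-question-type
# expert heads for VQA. Hypothetical names and sizes; not the AliceMind-MMU code.
import torch
import torch.nn as nn


class CrossModalExpertVQA(nn.Module):
    def __init__(self, dim=768, num_heads=12, num_answers=3129,
                 question_types=("yes_no", "number", "other")):
        super().__init__()
        # "Learning to attend": text tokens query the visual region features.
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)
        # One specialized expert (answer head) per coarse question type.
        self.experts = nn.ModuleDict({
            t: nn.Sequential(nn.Linear(dim, dim), nn.GELU(),
                             nn.Linear(dim, num_answers))
            for t in question_types
        })

    def forward(self, text_feats, image_feats, question_type):
        # text_feats:  (batch, n_tokens, dim)   from a text encoder
        # image_feats: (batch, n_regions, dim)  from a visual backbone
        attended, _ = self.cross_attn(text_feats, image_feats, image_feats)
        fused = self.norm(text_feats + attended)    # residual cross-modal fusion
        pooled = fused.mean(dim=1)                  # simple mean pooling
        return self.experts[question_type](pooled)  # answer logits from the expert


# Usage with random tensors standing in for real encoder outputs.
model = CrossModalExpertVQA()
text = torch.randn(2, 16, 768)    # 2 questions, 16 text tokens each
image = torch.randn(2, 36, 768)   # 36 detected regions per image
logits = model(text, image, question_type="yes_no")
print(logits.shape)               # torch.Size([2, 3129])
```

Routing each question to its own expert head mirrors the idea of treating different question types with the expertise they need, while the shared cross-attention layer handles the fusion of the two modalities.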