Generating Natural Questions About an Image
- Nasrin Mostafazadeh
- Ishan Misra
- Jacob Devlin
- Margaret Mitchell
- Xiaodong He
- Lucy Vanderwende
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics | Published by the Association for Computational Linguistics
There has been an explosion of work in the vision & language community during the past few years, from image captioning to video transcription and answering questions about images. These tasks have focused on literal descriptions of the image. To move beyond the literal, we choose to explore how questions about an image are often directed at commonsense inference and the abstract events evoked by objects in the image. In this paper, we introduce the novel task of Visual Question Generation (VQG), where the system is tasked with asking a natural and engaging question when shown an image. We provide three datasets which cover a variety of images from object-centric to event-centric, with considerably more abstract training data than provided to state-of-the-art captioning systems thus far. We train and test several generative and retrieval models to tackle the task of VQG. Evaluation results show that while such models ask reasonable questions for a variety of images, there is still a wide gap with human performance, which motivates further work on connecting images with commonsense knowledge and pragmatics. Our proposed task offers a new challenge to the community which we hope furthers interest in exploring deeper connections between vision & language.
Publication Downloads
Visual Question Generation dataset
October 31, 2016
We introduce this dataset in order to support the novel task of Visual Question Generation (VQG), where, given an image, the system should ‘ask a natural and engaging question’. This dataset can be used to support research on commonsense reasoning and human-computer conversational systems.
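As a rough illustration of how the released data might be consumed, the minimal sketch below pairs each image with its crowd-sourced questions. The file name (`vqg_train.csv`), the column names (`image_url`, `questions`), and the `---` separator are assumptions made for illustration only, not the dataset's documented schema; check the download package for the actual layout.

```python
# Minimal sketch: group the VQG questions by image so that each image URL
# maps to the list of natural questions crowd-sourced for it.
# All file/column names below are hypothetical placeholders.
import csv
from collections import defaultdict


def load_vqg(path="vqg_train.csv"):
    """Return a dict mapping each image URL to its list of questions."""
    questions_by_image = defaultdict(list)
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            # Assumed: the "questions" field holds several questions
            # separated by "---".
            for q in row["questions"].split("---"):
                q = q.strip()
                if q:
                    questions_by_image[row["image_url"]].append(q)
    return dict(questions_by_image)


if __name__ == "__main__":
    data = load_vqg()
    url, qs = next(iter(data.items()))
    print(url, qs[:3])
```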