From Captions to Visual Concepts and Back
- Hao Fang,
- Saurabh Gupta,
- Forrest Iandola,
- Rupesh Srivastava,
- Li Deng,
- Piotr Dollar,
- Jianfeng Gao,
- Xiaodong He,
- Margaret Mitchell,
- John Platt,
- Larry Zitnick,
- Geoffrey Zweig
CVPR 2015
Published by IEEE - Institute of Electrical and Electronics Engineers
Tied for 1st Prize
This paper presents a novel approach for automatically generating image descriptions: visual detectors, language models, and multimodal similarity models learnt directly from a dataset of image captions. We use multiple instance learning to train visual detectors for words that commonly occur in captions, including many different parts of speech such as nouns, verbs, and adjectives. The word detector outputs serve as conditional inputs to a maximum-entropy language model. The language model learns from a set of over 400,000 image descriptions to capture the statistics of word usage. We capture global semantics by re-ranking caption candidates using sentence-level features and a deep multimodal similarity model. Our system is state-of-the-art on the official Microsoft COCO benchmark, producing a BLEU-4 score of 29.1%. When human judges compare the system captions to ones written by other people on our held-out test set, the system captions have equal or better quality 34% of the time.
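The abstract describes a pipeline in which word detectors are trained with multiple instance learning over image regions and candidate captions are re-ranked with a multimodal similarity model. The following is a minimal Python sketch of those two ideas, assuming hypothetical per-region word probabilities and image/caption embedding vectors; it uses standard noisy-OR pooling and cosine similarity as stand-ins and is not the authors' implementation.

```python
import numpy as np

def word_probability_noisy_or(region_probs):
    """Noisy-OR pooling over image regions: the word is considered present
    in the image if any region fires. `region_probs` holds hypothetical
    per-region detector probabilities for a single word."""
    region_probs = np.asarray(region_probs, dtype=float)
    return 1.0 - np.prod(1.0 - region_probs)

def rerank_by_similarity(image_vec, caption_vecs):
    """Re-rank candidate captions by cosine similarity between an image
    embedding and caption embeddings (a simplified stand-in for the deep
    multimodal similarity model mentioned in the abstract)."""
    image_vec = image_vec / np.linalg.norm(image_vec)
    sims = [float(np.dot(image_vec, v / np.linalg.norm(v))) for v in caption_vecs]
    order = np.argsort(sims)[::-1]  # best-matching caption first
    return order, sims

if __name__ == "__main__":
    # Three regions weakly suggest a word: 1 - (0.9 * 0.4 * 0.7) ~= 0.748.
    print(word_probability_noisy_or([0.1, 0.6, 0.3]))
    # Toy re-ranking of three random caption embeddings against an image embedding.
    rng = np.random.default_rng(0)
    image_vec = rng.normal(size=8)
    caption_vecs = [rng.normal(size=8) for _ in range(3)]
    print(rerank_by_similarity(image_vec, caption_vecs))
```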
© IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other users, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works for resale or redistribution to servers or lists, or reuse of any copyrighted components of this work in other works.