Oscar: Object-Semantics Aligned Pre-training for Vision-Language Tasks
- Xiujun Li,
- Xi Yin,
- Chunyuan Li,
- Pengchuan Zhang,
- Xiaowei Hu,
- Lei Zhang,
- Lijuan Wang,
- Houdong Hu,
- Li Dong,
- Furu Wei,
- Yejin Choi,
- Jianfeng Gao
ECCV 2020
Large-scale pre-training methods that learn cross-modal representations on image-text pairs are becoming popular for vision-language tasks. Existing methods simply concatenate image region features and text features as input to the model to be pre-trained, and use self-attention to learn image-text semantic alignments in a brute-force manner. In this paper, we propose a new learning method, Oscar (Object-Semantics Aligned Pre-training), which uses object tags detected in images as anchor points to significantly ease the learning of alignments. Our method is motivated by the observation that the salient objects in an image can be accurately detected and are often mentioned in the paired text. We pre-train an Oscar model on a public corpus of 6.5 million text-image pairs and fine-tune it on downstream tasks, setting new state-of-the-art results on six well-established vision-language understanding and generation tasks.
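To make the input format concrete, here is a minimal sketch, assuming a BERT-style encoder and pre-extracted Faster R-CNN region features, of how a (word tokens, object tags, region features) triple could be assembled into a single input sequence. The helper `build_oscar_input` and the linear projection of region features are illustrative assumptions, not the paper's released code.

```python
# Sketch of an Oscar-style input triple: words + object tags + region features.
# Assumes HuggingFace transformers; not the authors' implementation.
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
encoder = BertModel.from_pretrained("bert-base-uncased")

def build_oscar_input(caption, object_tags, region_feats, text_encoder):
    """Concatenate word tokens, detected object tags, and projected region
    features into one sequence for a multi-layer self-attention Transformer."""
    # Words and tags share the text embedding space, so tags can act as
    # anchor points linking the caption to the image regions.
    text = caption + " [SEP] " + " ".join(object_tags)
    tokens = tokenizer(text, return_tensors="pt")
    token_embeds = text_encoder.embeddings.word_embeddings(tokens["input_ids"])
    # Region features (e.g., 2048-d detector outputs) are projected to the
    # encoder's hidden size; this projection layer is an assumption here.
    proj = torch.nn.Linear(region_feats.size(-1), token_embeds.size(-1))
    region_embeds = proj(region_feats).unsqueeze(0)
    return torch.cat([token_embeds, region_embeds], dim=1)

feats = torch.randn(10, 2048)  # 10 detected regions, dummy features
inputs = build_oscar_input("a dog catches a frisbee", ["dog", "frisbee"], feats, encoder)
outputs = encoder(inputs_embeds=inputs)  # run the fused sequence through BERT
```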
Downloads
OSCAR
May 15, 2020
This repository contains the source code necessary to reproduce the results presented in the paper Oscar: Object-Semantics Aligned Pre-training for Vision-Language Tasks. We propose a new cross-modal pre-training method, Oscar (Object-Semantics Aligned Pre-training), which leverages object tags detected in images as anchor points to significantly ease the learning of image-text alignments.
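For intuition, below is a rough sketch of the contrastive part of the pre-training objective described in the paper: with some probability the tag sequence is replaced ("polluted") with tags sampled from another image, and a binary classifier on the [CLS] output predicts whether the resulting triple is aligned. Names such as `pollute_tags` and `align_head` are hypothetical and not this repository's API.

```python
# Sketch of a tag-pollution contrastive objective; illustrative only.
import random
import torch
import torch.nn.functional as F

def pollute_tags(tags, tag_pool):
    """Return (tags, label): label 1 for the original pair, 0 for a polluted one."""
    if random.random() < 0.5:
        return random.choice(tag_pool), 0  # mismatched tags -> negative pair
    return tags, 1

align_head = torch.nn.Linear(768, 2)  # binary aligned / not-aligned classifier

def contrastive_loss(cls_embedding, label):
    """cls_embedding: (1, hidden) [CLS] output of the fused encoder."""
    logits = align_head(cls_embedding)
    return F.cross_entropy(logits, torch.tensor([label]))

cls_emb = torch.randn(1, 768)  # placeholder for the encoder's [CLS] output
loss = contrastive_loss(cls_emb, label=1)
```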
AI advances in image captioning: Describing images as well as people do
Image captioning is an interesting problem at the intersection of computer vision and natural language processing, and it has attracted great attention from both research communities. Recent image captioning models have achieved impressive results on tasks where large amounts of paired image-caption training data are available. However, they generalize poorly to images in the wild, which contain a wide variety of visual objects unseen in the caption corpora used for training. This raises the challenge of Novel Object Captioning (NOC), that is, generating captions that describe novel objects unseen in paired image-caption training data, which is especially pertinent in real-world applications. This webinar will focus on some of the recent vision-language pre-training (VLP) approaches for image captioning. We will cover our…