KAT: A Knowledge Augmented Transformer for Vision-and-Language
- Liangke Gui,
- Borui Wang,
- Qiuyuan Huang,
- Alex Hauptmann,
- Yonatan Bisk,
- Jianfeng Gao
NAACL 2022. Long paper, Oral.
Preprint: arXiv:2112.08614
No. 1 on the OK-VQA Leaderboard (2021).
The primary focus of recent work with large-scale transformers has been on optimizing the amount of information packed into the model's parameters. In this work, we ask a different question: Can multimodal transformers leverage explicit knowledge in their reasoning? Existing, primarily unimodal, methods have explored approaches under the paradigm of knowledge retrieval followed by answer prediction, but leave open questions about the quality and relevance of the retrieved knowledge and about how reasoning over implicit and explicit knowledge should be integrated. To address these challenges, we propose a novel model, the Knowledge Augmented Transformer (KAT), which achieves a strong state-of-the-art result (+6 points absolute) on the open-domain multimodal task of OK-VQA. Our approach integrates implicit and explicit knowledge in an end-to-end encoder-decoder architecture, while still jointly reasoning over both knowledge sources during answer generation. Our analysis further shows that integrating explicit knowledge improves the interpretability of model predictions.
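To make the "jointly reasoning over both knowledge sources during answer generation" idea concrete, below is a minimal, hypothetical PyTorch sketch. It is not KAT's actual implementation: the class name, dimensions, and the use of a shared toy encoder are assumptions for illustration. The sketch only shows the general pattern of encoding an implicit-knowledge stream and an explicit-knowledge stream separately, then letting a single decoder cross-attend over both when generating the answer.

```python
import torch
import torch.nn as nn

class KnowledgeAugmentedSeq2Seq(nn.Module):
    """Toy encoder-decoder that fuses two knowledge streams before decoding.

    Hypothetical sketch: `implicit_ids` stand in for tokenized knowledge drawn
    from a large pretrained model, and `explicit_ids` for tokenized passages
    retrieved from an external knowledge source. Both are encoded, concatenated
    along the sequence dimension, and jointly attended to by the answer decoder.
    """

    def __init__(self, vocab_size=1000, d_model=128, nhead=4, num_layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        enc_layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        dec_layer = nn.TransformerDecoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers)
        self.decoder = nn.TransformerDecoder(dec_layer, num_layers)
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, implicit_ids, explicit_ids, answer_ids):
        # Encode each knowledge stream (a single shared encoder, for brevity).
        implicit_mem = self.encoder(self.embed(implicit_ids))
        explicit_mem = self.encoder(self.embed(explicit_ids))
        # Joint reasoning: the decoder cross-attends over both memories at once.
        memory = torch.cat([implicit_mem, explicit_mem], dim=1)
        hidden = self.decoder(self.embed(answer_ids), memory)
        return self.lm_head(hidden)

# Minimal usage with random token ids (batch of 2, illustrative lengths only).
model = KnowledgeAugmentedSeq2Seq()
implicit = torch.randint(0, 1000, (2, 16))   # implicit-knowledge tokens
explicit = torch.randint(0, 1000, (2, 32))   # retrieved explicit-knowledge tokens
answer = torch.randint(0, 1000, (2, 8))      # teacher-forced answer tokens
logits = model(implicit, explicit, answer)   # shape: (2, 8, vocab_size)
```

The key design choice the sketch illustrates is end-to-end fusion: rather than scoring retrieved knowledge in a separate pipeline stage, both knowledge representations sit in the decoder's cross-attention memory, so answer generation can weigh them jointly.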