Multi-Scale Vision Longformer: A New Vision Transformer for High-Resolution Image Encoding
- Pengchuan Zhang
- Xiyang Dai
- Jianwei Yang
- Bin Xiao
- Lu Yuan
- Lei Zhang
- Jianfeng Gao
This paper presents a new Vision Transformer (ViT) architecture, Multi-Scale Vision Longformer, which significantly enhances the ViT of \cite{dosovitskiy2020image} for encoding high-resolution images using two techniques. The first is a multi-scale model structure, which provides image encodings at multiple scales with manageable computational cost. The second is the attention mechanism of Vision Longformer, a variant of Longformer \cite{beltagy2020longformer} originally developed for natural language processing, which achieves linear complexity with respect to the number of input tokens. A comprehensive empirical study shows that the new ViT significantly outperforms several strong baselines, including existing ViT models, their ResNet counterparts, and the Pyramid Vision Transformer from a concurrent work \cite{wang2021pyramid}, on a range of vision tasks, including image classification, object detection, and segmentation. The models and source code used in this study will be released to the public soon.
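The linear complexity comes from restricting each token's attention to a fixed-size local spatial window, so cost grows with the number of tokens rather than its square. Below is a minimal, hedged PyTorch sketch of that sliding-window idea only: it assumes a single attention head, zero-padded borders, and omits the global tokens and attention masks the paper's actual Vision Longformer uses; the function name and shapes are illustrative, not the released implementation.

```python
import torch
import torch.nn.functional as F

def local_window_attention(q, k, v, H, W, w=3):
    """Attend each query token only to keys in its w x w spatial
    neighborhood: cost is O(H*W*w*w), linear in the token count,
    versus O((H*W)**2) for full self-attention.

    q, k, v: (B, N, D) with N == H * W (one token per patch).
    Borders are zero-padded for simplicity; a real implementation
    would mask the padded positions.
    """
    B, N, D = q.shape
    pad = w // 2
    # Reshape keys/values into 2D feature maps: (B, D, H, W).
    k2 = k.transpose(1, 2).reshape(B, D, H, W)
    v2 = v.transpose(1, 2).reshape(B, D, H, W)
    # Gather each token's w*w neighborhood: (B, D*w*w, N) -> (B, N, w*w, D).
    k_loc = F.unfold(k2, w, padding=pad).reshape(B, D, w * w, N).permute(0, 3, 2, 1)
    v_loc = F.unfold(v2, w, padding=pad).reshape(B, D, w * w, N).permute(0, 3, 2, 1)
    # Scaled dot-product scores of each query against its local keys.
    attn = torch.einsum('bnd,bnkd->bnk', q, k_loc) / D ** 0.5
    attn = attn.softmax(dim=-1)
    return torch.einsum('bnk,bnkd->bnd', attn, v_loc)

# Usage example: a 14x14 token map with 64-dim tokens.
B, H, W, D = 2, 14, 14, 64
q = torch.randn(B, H * W, D)
out = local_window_attention(q, q, q, H, W, w=3)
print(out.shape)  # torch.Size([2, 196, 64])
```

Because the window size w is a constant, doubling the image resolution (and hence the token count N) only doubles the attention cost, which is what makes high-resolution encoding tractable in this design.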
Paper & Publication Downloads
Vision Longformer for Object Detection
May 12, 2021
This project provides the source code for the object detection part of the Vision Longformer paper.