Vision Language Models for Spreadsheet Understanding: Challenges and Opportunities
- Shiyu Xia,
- Junyu Xiong,
- Haoyu Dong,
- Jianbo Zhao,
- Yuzhang Tian,
- Mengyu Zhou,
- Yeye He,
- Shi Han,
- Dongmei Zhang
Proceedings of the 3rd Workshop on Advances in Language and Vision Research (ALVR) at ACL 2024
This paper explores the capabilities of Vision Language Models (VLMs) in spreadsheet comprehension. We propose three self-supervised challenges, each with corresponding evaluation metrics, to comprehensively evaluate VLMs on Optical Character Recognition (OCR), spatial perception, and visual format recognition. Additionally, we use the spreadsheet table detection task to assess the overall performance of VLMs by integrating these challenges. To probe VLMs more finely, we propose three spreadsheet-to-image settings: column width adjustment, style change, and address augmentation.
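As a minimal illustration of the address augmentation setting, the sketch below prefixes each non-empty cell with its A1-style address before the sheet would be rendered to an image, so that a VLM can read positions as text. This is a hypothetical reconstruction on a plain list-of-lists grid; the paper's actual rendering pipeline is not specified in this abstract.

```python
def col_letter(idx: int) -> str:
    """Convert a 0-based column index to an A1-style column letter (0 -> A, 26 -> AA)."""
    letters = ""
    idx += 1
    while idx:
        idx, rem = divmod(idx - 1, 26)
        letters = chr(ord("A") + rem) + letters
    return letters

def augment_addresses(grid):
    """Return a copy of the grid with each non-empty cell prefixed by its address."""
    out = []
    for r, row in enumerate(grid):
        out.append([
            f"{col_letter(c)}{r + 1}: {v}" if v not in (None, "") else v
            for c, v in enumerate(row)
        ])
    return out

grid = [["Name", "Score"], ["Alice", 95]]
print(augment_addresses(grid))
# → [['A1: Name', 'B1: Score'], ['A2: Alice', 'B2: 95']]
```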
We propose prompt variants to address the above tasks in these settings. Notably, to leverage the strength of VLMs in understanding text rather than two-dimensional positions, we propose decoding the cell values on the four boundaries of the table for spreadsheet boundary detection. Our findings reveal that VLMs demonstrate promising OCR capabilities but produce unsatisfactory results due to cell omission and misalignment, and that they exhibit notably insufficient spatial and format recognition skills. These results motivate future work on enhancing VLMs' spreadsheet comprehension, using our methods to generate extensive spreadsheet-image pairs across varied settings.
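One way the decoded boundary cell values could be mapped back to a detected table region is sketched below: look up each value the model reads on the top, bottom, left, and right edges in the grid and take the bounding box of the matches. This is an assumed matching procedure for illustration only; the abstract does not detail how the paper converts decoded values into a table range.

```python
def bounding_range(grid, boundary_values):
    """Return (top_row, left_col, bottom_row, right_col) of cells whose
    values appear in boundary_values, or None if nothing matches."""
    rows, cols = [], []
    for r, row in enumerate(grid):
        for c, v in enumerate(row):
            if v in boundary_values:
                rows.append(r)
                cols.append(c)
    if not rows:
        return None
    return (min(rows), min(cols), max(rows), max(cols))

# A small sheet with a 2x2 table embedded at rows 1-2, columns 1-2.
grid = [
    ["", "", "", ""],
    ["", "Name", "Score", ""],
    ["", "Alice", 95, ""],
    ["", "", "", ""],
]
print(bounding_range(grid, {"Name", "Score", "Alice", 95}))
# → (1, 1, 2, 2)
```

A real pipeline would also need to handle duplicate or misread values, which is part of why the abstract reports cell omission and misalignment as failure modes.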