Large Multimodal Model for Real-World Radiology Report Generation
- Brian Nlong Zhao,
- Xinyang Jiang,
- Xufang Luo,
- Yifan Yang,
- Bo Li,
- Zilong Wang,
- Javier Alvarez-Valle,
- Matthew P. Lungren,
- Dongsheng Li,
- Lili Qiu
While automatic report generation has demonstrated promising results with deep learning-based methods, deploying these algorithms in real-world scenarios remains challenging. Compared to conventional report generation, real-world report generation requires the model to follow instructions from radiologists and to take contextual information into account. This paper therefore focuses on developing a practical report generation method that supports real-world clinical practice. To tackle the challenges posed by the limited availability of clinical data, we propose a GPT-based unified data generation pipeline designed to produce high-quality data. Building on this pipeline, we present a new benchmark dataset, MIMIC-R3G, comprising five representative tasks pertinent to real-world medical report generation. We further propose the Domain-enhanced Multi-modal Model (DeMMo), in which an additional medical-domain vision encoder is incorporated into a general-domain multimodal LLM to enhance its ability in the specialized domain. This approach harnesses the specialized capabilities of the medical-domain vision encoder while leveraging the robustness and versatility of the general-domain multimodal LLM. Comprehensive experiments demonstrate that our approach attains competitive performance on all real-world tasks compared with existing interactive report generation frameworks and state-of-the-art encoder-decoder style report generation models.
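To make the dual-encoder idea concrete, the sketch below shows one plausible way a medical-domain vision encoder could be combined with a general-domain multimodal LLM: features from both encoders are projected into the LLM's embedding space and concatenated as visual tokens that precede the instruction and context text. This is a minimal illustrative sketch, not the paper's implementation; all class names, feature dimensions, and the simple concatenation strategy are assumptions.

```python
# Minimal sketch (illustrative assumptions only) of fusing a medical-domain
# vision encoder with a general-domain multimodal LLM's visual pathway.
import torch
import torch.nn as nn


class DualEncoderFusion(nn.Module):
    """Projects features from two vision encoders into the LLM embedding
    space and concatenates them into one sequence of visual tokens."""

    def __init__(self, general_dim=1024, medical_dim=768, llm_dim=4096):
        super().__init__()
        # Stand-ins for a general-domain encoder (e.g., CLIP-style) and a
        # medical-domain encoder (e.g., a chest X-ray pretrained ViT).
        self.general_proj = nn.Linear(general_dim, llm_dim)
        self.medical_proj = nn.Linear(medical_dim, llm_dim)

    def forward(self, general_feats, medical_feats):
        # general_feats: (B, N_g, general_dim); medical_feats: (B, N_m, medical_dim)
        g = self.general_proj(general_feats)
        m = self.medical_proj(medical_feats)
        # Concatenate along the token dimension; these visual tokens would be
        # prepended to the radiologist instruction / context embeddings.
        return torch.cat([g, m], dim=1)


if __name__ == "__main__":
    fusion = DualEncoderFusion()
    general = torch.randn(2, 256, 1024)   # patch tokens from the general encoder
    medical = torch.randn(2, 196, 768)    # patch tokens from the medical encoder
    visual_tokens = fusion(general, medical)
    print(visual_tokens.shape)            # torch.Size([2, 452, 4096])
```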