Scaling Clinical Trial Matching Using Large Language Models: A Case Study in Oncology
- Cliff Wong,
- Sheng Zhang,
- Yu Gu,
- Christine Moung,
- Jacob Abel,
- Naoto Usuyama,
- Roshanthi Weerasinghe,
- Brian Piening,
- Tristan Naumann,
- Carlo Bifulco,
- Hoifung Poon
Clinical trial matching is a key process in health delivery and discovery. In practice, it is plagued by overwhelming unstructured data and unscalable manual processing. In this paper, we conduct a systematic study on scaling clinical trial matching using large language models (LLMs), with oncology as the focus area. Our study is grounded in a clinical trial matching system currently in test deployment at a large U.S. health network. Initial findings are promising: out of the box, cutting-edge LLMs, such as GPT-4, can already structure elaborate eligibility criteria of clinical trials and extract complex matching logic (e.g., nested AND/OR/NOT). While still far from perfect, LLMs substantially outperform prior strong baselines and may serve as a preliminary solution to help triage patient-trial candidates with humans in the loop. Our study also reveals a few significant growth areas for applying LLMs to end-to-end clinical trial matching, such as context limitation and accuracy, especially in structuring patient information from longitudinal medical records.
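To make the notion of "nested AND/OR/NOT matching logic" concrete, the sketch below shows one way structured eligibility criteria could be represented and evaluated against structured patient attributes. This is purely illustrative: the paper does not publish its schema, and the class names, attribute keys, and example trial logic here are hypothetical assumptions, not the system's actual implementation.

```python
from __future__ import annotations

from dataclasses import dataclass
from typing import Union


@dataclass
class Criterion:
    """A single atomic criterion, e.g. a required histology or biomarker (hypothetical schema)."""
    attribute: str          # e.g. "histology", "biomarker", "prior_therapy"
    value: str              # e.g. "NSCLC", "EGFR L858R"
    negated: bool = False   # True encodes a NOT over this criterion


@dataclass
class Clause:
    """A boolean combination (AND/OR) of criteria or nested clauses."""
    op: str                                   # "AND" or "OR"
    children: list[Union[Criterion, Clause]]


def matches(node: Union[Criterion, Clause], patient: dict[str, set[str]]) -> bool:
    """Recursively evaluate nested AND/OR/NOT logic against patient attributes."""
    if isinstance(node, Criterion):
        present = node.value in patient.get(node.attribute, set())
        return not present if node.negated else present
    results = (matches(child, patient) for child in node.children)
    return all(results) if node.op == "AND" else any(results)


# Hypothetical trial logic: NSCLC AND (EGFR L858R OR exon 19 deletion) AND NOT prior osimertinib.
trial_logic = Clause("AND", [
    Criterion("histology", "NSCLC"),
    Clause("OR", [
        Criterion("biomarker", "EGFR L858R"),
        Criterion("biomarker", "EGFR exon 19 deletion"),
    ]),
    Criterion("prior_therapy", "osimertinib", negated=True),
])

# Hypothetical structured patient record (e.g., abstracted from medical notes).
patient = {
    "histology": {"NSCLC"},
    "biomarker": {"EGFR L858R"},
    "prior_therapy": set(),
}

print(matches(trial_logic, patient))  # True: this patient satisfies the nested criteria
```

In this framing, the LLM's role would be to translate free-text eligibility criteria into such a nested structure, while matching against structured patient data reduces to deterministic evaluation.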