Evaluation of Deep Learning to Augment Image Guided Radiotherapy for Head and Neck and Prostate Cancers
- Ozan Oktay,
- Jay Nanavati,
- Anton Schwaighofer,
- David Carter,
- Melissa Bristow,
- Ryutaro Tanno,
- Gill Barnett,
- David Noble,
- Yvonne Rimmer,
- Rajesh Jena,
- Ben Glocker,
- Kenton O'Hara,
- Christopher Bishop,
- Javier Alvarez-Valle,
- Aditya Nori
JAMA Network Open
IMPORTANCE: Personalized radiotherapy planning depends on high-quality delineation of target tumors and surrounding organs at risk (OARs). This process places an additional time burden on oncologists and introduces variability across both experts and institutions.
OBJECTIVE: To explore clinically acceptable auto-contouring solutions that can be integrated into existing workflows and used in different domains of radiotherapy.
DESIGN, SETTING, AND PARTICIPANTS: This quality improvement study used a multi-center imaging data set comprising 519 pelvic and 242 head and neck computed tomography (CT) scans from 8 distinct clinical sites, with patients diagnosed with either prostate or head and neck cancer. The scans were acquired as part of treatment dose planning from patients who received intensity modulated radiation therapy between October 2013 and February 2020. Fifteen different OARs were manually annotated by expert readers and radiation oncologists. The models were trained on a subset of the data set to automatically delineate OARs and evaluated on both internal and external data sets. Data analysis was conducted October 2019 to September 2020.
MAIN OUTCOMES AND MEASURES: The auto-contouring solution was evaluated on external datasets, and its accuracy was quantified with volumetric agreement and surface distance measures. Models were benchmarked against expert annotations in an interobserver variability (IOV) study. Clinical utility was evaluated by measuring time spent on manual corrections and annotations from scratch.
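The volumetric agreement referred to here is reported as a Dice score in the results. As an illustration only (not the study's implementation), a minimal NumPy sketch of the metric on binary 3D masks, with toy overlapping cubes standing in for predicted and reference contours:

```python
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient for binary masks (1.0 = perfect overlap)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / denom

# Toy 3D masks: two partially overlapping 6x6x6 cubes in a 10x10x10 volume.
a = np.zeros((10, 10, 10), dtype=bool)
a[2:8, 2:8, 2:8] = True
b = np.zeros((10, 10, 10), dtype=bool)
b[3:9, 3:9, 3:9] = True

# Overlap is a 5x5x5 cube (125 voxels); each mask has 216 voxels,
# so Dice = 2 * 125 / (216 + 216) ≈ 0.579.
print(round(dice_score(a, b), 3))
```

Surface distance measures, the study's second accuracy metric, additionally penalize boundary deviations that volumetric overlap can miss.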
RESULTS: A total of 519 participants’ (519 [100%] men; 390 [75%] aged 62-75 years) pelvic CT images and 242 participants’ (184 [76%] men; 194 [80%] aged 50-73 years) head and neck CT images were included. The models achieved levels of clinical accuracy within the bounds of expert IOV for 13 of 15 structures (eg, left femur, κ = 0.982; brainstem, κ = 0.806) and performed consistently well across both external and internal data sets (eg, mean [SD] Dice score for left femur, internal vs external data sets: 98.52% [0.50] vs 98.04% [1.02]; P = .04). The correction time of auto-generated contours on 10 head and neck and 10 prostate scans was measured as a mean of 4.98 (95% CI, 4.44-5.52) min/scan and 3.40 (95% CI, 1.60-5.20) min/scan, respectively, to ensure clinically accepted accuracy, whereas contouring from scratch on the same head and neck scans was observed to be 73.25 (95% CI, 68.68-77.82) min/scan for a radiation oncologist and 86.75 (95% CI, 75.21-92.29) min/scan for an expert reader, accounting for a 93% reduction in time.
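The 93% reduction can be checked directly from the reported means (a quick arithmetic sketch using only the head and neck figures for correction time versus the radiation oncologist's from-scratch time):

```python
# Reported mean times (min/scan) from the results above.
auto_correction = 4.98   # correcting auto-generated head and neck contours
from_scratch = 73.25     # radiation oncologist contouring from scratch

reduction = 1.0 - auto_correction / from_scratch
print(f"{reduction:.0%}")  # → 93%
```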
CONCLUSIONS AND RELEVANCE: In this study, the models achieved levels of clinical accuracy within expert IOV while reducing manual contouring time and performing consistently well across previously unseen heterogeneous data sets. With the availability of open-source libraries and reliable performance, this creates significant opportunities for the transformation of radiation treatment planning.
Paper and publication downloads
InnerEye – Deep Learning
September 22, 2020
This is a deep learning toolbox to train models on medical images (or more generally, 3D images). It integrates seamlessly with cloud computing in Azure.
Project InnerEye: Augmenting cancer radiotherapy workflows with deep learning and open source
Medical images offer vast opportunities to improve clinical workflows and outcomes. Specifically, in the context of cancer radiotherapy, clinicians need to review computed tomography (CT) scans and manually segment (contour) anatomical structures. This is an extremely time-consuming task that puts a large burden on care providers. Deep learning (DL) models can help with these segmentation tasks. However, more understanding is needed of these models' clinical utility, generalizability, and safety within existing workflows. Building these models also requires techniques that are not easily accessible to researchers and care providers. In this webinar, Dr. Ozan Oktay and Dr. Anton Schwaighofer will analyze these challenges within the context of image-guided radiotherapy procedures and will present the latest research outputs of Project InnerEye in tackling these challenges. The first part…