Virchow: A Million-Slide Digital Pathology Foundation Model

  • Eugene Vorontsov,
  • A. Bozkurt,
  • Adam Casson,
  • George Shaikovski,
  • Michal Zelechowski,
  • Siqi Liu,
  • Eric Zimmermann,
  • James Hall,
  • Philippe Mathieu,
  • Alexander van Eck,
  • Donghun Lee,
  • Julian Viret,
  • Eric Robert,
  • Yi Kan Wang,
  • Jeremy D. Kun,
  • Matthew C. H. Le,
  • Jan H. Bernhard,
  • R. Godrich,
  • Gerard Oakley,
  • Ewan Millar,
  • Matthew G. Hanna,
  • J. Retamero,
  • William A. Moye,
  • Razik Yousfi,
  • Christopher Kanan,
  • D. Klimstra,
  • B. Rothrock,
  • Thomas J. Fuchs

arXiv


The use of artificial intelligence to enable precision medicine and decision support through the analysis of pathology images has the potential to revolutionize the diagnosis and treatment of cancer. Such applications depend on a model's ability to capture the diverse patterns observed in pathology images. To address this challenge, we present Virchow, a foundation model for computational pathology. Virchow is a vision transformer with 632 million parameters, trained via self-supervised learning with the DINOv2 algorithm on 1.5 million hematoxylin and eosin stained whole slide images spanning diverse tissue and specimen types, orders of magnitude more data than used in previous works. Virchow enables a pan-cancer detection system that achieves an overall specimen-level AUC of 0.949 across 17 cancer types, and an AUC of 0.937 on 7 rare cancer types. Virchow also sets the state of the art on internal and external tile-level benchmarks and on slide-level biomarker prediction tasks. These gains highlight the importance of training on massive pathology image datasets and suggest that scaling up both data and network architecture can improve accuracy for many high-impact computational pathology applications where only limited training data are available.
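
As a rough illustration of how a tile-level foundation model of this kind is typically used downstream, the sketch below builds a vision transformer of comparable scale (a 632M-parameter model is consistent with a ViT-H/14 backbone) and extracts an embedding for a single H&E image tile. The `timm` model name, input size, and embedding usage are assumptions chosen for illustration; the abstract above does not specify Virchow's exact configuration or API.

```python
import timm
import torch

# Hypothetical sketch: a ViT-H/14 backbone has roughly 632M parameters,
# matching the scale described for Virchow. This is NOT the released
# Virchow model; weights here would come from DINOv2-style pretraining.
model = timm.create_model(
    "vit_huge_patch14_224",
    pretrained=False,
    num_classes=0,  # drop the classification head: use as an embedding model
)
model.eval()

# A 224x224 RGB tile, standing in for a crop from an H&E-stained
# whole slide image (random tensor used here for self-containment).
tile = torch.randn(1, 3, 224, 224)

with torch.no_grad():
    embedding = model(tile)  # one pooled feature vector per tile

print(embedding.shape)  # torch.Size([1, 1280]) for a ViT-H backbone
```

A slide- or specimen-level system such as the pan-cancer detector described above would then aggregate many such tile embeddings per slide (e.g., with pooling or an attention-based aggregator) before a final classifier produces the specimen-level prediction.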