WILDS: A Benchmark of in-the-Wild Distribution Shifts
- Pang Wei Koh,
- Shiori Sagawa,
- Henrik Marklund,
- Sang Michael Xie,
- Marvin Zhang,
- Akshay Balsubramani,
- Weihua Hu,
- Michihiro Yasunaga,
- Richard Lanas Phillips,
- Irena Gao,
- Tony Lee,
- Etienne David,
- Ian Stavness,
- Wei Guo,
- Berton Earnshaw,
- Imran Haque,
- Sara Beery,
- Jure Leskovec,
- Anshul Kundaje,
- Emma Pierson,
- Sergey Levine,
- Chelsea Finn,
- Percy Liang
2021 International Conference on Machine Learning
Distribution shifts — where the training distribution differs from the test distribution — can substantially degrade the accuracy of machine learning (ML) systems deployed in the wild. Despite their ubiquity, these real-world distribution shifts are under-represented in the datasets widely used in the ML community today. To address this gap, we present WILDS, a curated collection of 8 benchmark datasets that reflect a diverse range of distribution shifts which naturally arise in real-world applications, such as shifts across hospitals for tumor identification; across camera traps for wildlife monitoring; and across time and location in satellite imaging and poverty mapping. On each dataset, we show that standard training results in substantially lower out-of-distribution than in-distribution performance, and that this gap remains even with models trained by existing methods for handling distribution shifts. This underscores the need for new training methods that produce models which are more robust to the types of distribution shifts that arise in practice. To facilitate method development, we provide an open-source package that automates dataset loading, contains default model architectures and hyperparameters, and standardizes evaluations. Code and leaderboards are available at this https URL.
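To illustrate the open-source package mentioned above, below is a minimal sketch of loading a WILDS dataset and iterating over its standard training split. The names used (`get_dataset`, `get_train_loader`, `dataset.eval`) follow the public `wilds` Python package, but the specific dataset, transforms, and arguments shown here are illustrative assumptions rather than a definitive recipe; consult the package documentation for the authoritative API.

```python
# Hedged usage sketch of the WILDS package (dataset choice and arguments are
# illustrative assumptions; see the official documentation for exact usage).
import torchvision.transforms as transforms

from wilds import get_dataset
from wilds.common.data_loaders import get_train_loader

# Load one benchmark dataset (Camelyon17: tumor identification across hospitals)
# with its standardized splits.
dataset = get_dataset(dataset="camelyon17", download=True)
train_data = dataset.get_subset("train", transform=transforms.ToTensor())
test_data = dataset.get_subset("test", transform=transforms.ToTensor())

# Standard (i.i.d.) training loader; the package also provides group-aware
# loaders for methods that exploit domain metadata.
train_loader = get_train_loader("standard", train_data, batch_size=32)

for x, y, metadata in train_loader:
    ...  # train a model using the default architecture and hyperparameters

# Standardized evaluation: predictions are scored with the dataset's official
# metric, reported separately for in- and out-of-distribution splits, e.g.:
# results, results_str = dataset.eval(all_y_pred, all_y_true, all_metadata)
```

The loader yields `(x, y, metadata)` tuples, where the metadata encodes the domain (here, the hospital) so that both standard training and distribution-shift methods can be run against the same standardized splits and evaluation.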