Interpretable Outcome Prediction with Sparse Bayesian Neural Networks in Intensive Care

  • Anna-Lena Popkes,
  • Hiske Overweg,
  • Ari Ercole,
  • Yingzhen Li,
  • José Miguel Hernández-Lobato,
  • Cheng Zhang

https://arxiv.org/abs/1905.02599

Clinical decision making is challenging because of pathological complexity, as well as the large amounts of heterogeneous data generated as part of routine clinical care. In recent years, machine learning tools have been developed to aid this process. Intensive care unit (ICU) admissions represent the most data-dense and time-critical patient care episodes. In this context, prediction models may help clinicians determine which patients are most at risk and prioritize care. However, flexible tools such as artificial neural networks (ANNs) suffer from a lack of interpretability, limiting their acceptability to clinicians. In this work, we propose a novel interpretable Bayesian neural network architecture which offers both the flexibility of ANNs and interpretability in terms of feature selection. In particular, we employ a sparsity-inducing prior distribution in a tied manner to learn which features are important for outcome prediction. We evaluate our approach on the task of mortality prediction using two real-world ICU cohorts. In collaboration with clinicians, we found that, in addition to the predicted outcome, our approach can provide novel insights into the importance of different clinical measurements. This suggests that our model can support medical experts in their decision-making process.
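The tied sparsity-inducing prior lends itself to a short illustration. Below is a minimal sketch, not the authors' released code: it assumes a mean-field Gaussian variational posterior over the first-layer weights and, in the spirit of automatic relevance determination, a single learned prior scale per input feature that is shared ("tied") across all weights leaving that feature. The names `TiedSparseBayesLinear` and `log_tau` are our own.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TiedSparseBayesLinear(nn.Module):
    """Bayesian linear layer with a mean-field Gaussian posterior and a
    per-input-feature prior scale tau_j shared ("tied") across all weights
    leaving feature j; a small learned tau_j flags an unimportant feature.
    This is an ARD-style stand-in for the paper's tied sparsity prior."""

    def __init__(self, in_features, out_features):
        super().__init__()
        self.w_mu = nn.Parameter(0.1 * torch.randn(out_features, in_features))
        self.w_logvar = nn.Parameter(-6.0 * torch.ones(out_features, in_features))
        # One log prior scale per input feature, tied across the whole column.
        self.log_tau = nn.Parameter(torch.zeros(in_features))
        self.bias = nn.Parameter(torch.zeros(out_features))

    def forward(self, x):
        # Local reparameterisation: sample the pre-activations directly.
        mean = F.linear(x, self.w_mu, self.bias)
        var = F.linear(x.pow(2), self.w_logvar.exp())
        return mean + var.clamp_min(1e-12).sqrt() * torch.randn_like(mean)

    def kl(self):
        # KL( q(w_ij) || N(0, tau_j^2) ), summed over all weights; the tied
        # tau_j broadcasts across the output dimension (rows).
        prior_logvar = 2.0 * self.log_tau
        return 0.5 * (
            prior_logvar - self.w_logvar
            + (self.w_logvar.exp() + self.w_mu.pow(2)) / prior_logvar.exp()
            - 1.0
        ).sum()
```

Optimizing the ELBO (the data log-likelihood minus `kl()`) over `log_tau` drives the scales of uninformative features toward zero, so the learned per-feature scales double as importance scores.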

Paper and publication downloads

Interpretable Outcome Prediction with Sparse Bayesian Neural Networks

August 9, 2019

A Bayesian neural network with a sparsity-inducing prior distribution that makes the learned weights interpretable. It is particularly useful in healthcare, but applies to many other domains, as it brings feature-level interpretability to neural network models.
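As a hypothetical usage sketch (reusing the imports and the `TiedSparseBayesLinear` layer from above, with synthetic data standing in for ICU records), feature importance can be read off directly from the learned tied scales:

```python
# Synthetic data: only the first two of ten features drive the label.
torch.manual_seed(0)
X = torch.randn(512, 10)
y = (X[:, 0] - 2.0 * X[:, 1] > 0).float().unsqueeze(1)

layer = TiedSparseBayesLinear(10, 1)
opt = torch.optim.Adam(layer.parameters(), lr=1e-2)
for step in range(2000):
    opt.zero_grad()
    logits = layer(X)
    # Negative ELBO: expected negative log-likelihood plus the KL term.
    loss = F.binary_cross_entropy_with_logits(logits, y, reduction="sum") + layer.kl()
    loss.backward()
    opt.step()

# Rank features by the learned tied prior scale: large tau_j ~ important.
importance = layer.log_tau.exp().detach()
print(importance.argsort(descending=True))
```

The features with the largest learned scales are the ones the model relied on; in the ICU setting of the paper, this is the kind of mechanism by which individual clinical measurements can be flagged as important.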