Preserving Causal Constraints in Counterfactual Explanations for Machine Learning Classifiers
- Divyat Mahajan,
- Chenhao Tan,
- Amit Sharma
CausalML: Machine Learning and Causal Inference for Improved Decision Making Workshop, NeurIPS 2019
Explaining the output of a complex machine learning (ML) model often requires approximation using a simpler model. To construct interpretable explanations that are also consistent with the original ML model, counterfactual examples, which show how the model's output changes with small perturbations to the input, have been proposed. This paper extends work on counterfactual explanations by addressing the challenge of the feasibility of such examples. For explanations of ML models in critical domains such as healthcare and finance, counterfactual examples are useful to an end-user only to the extent that the suggested feature perturbations are feasible in the real world. We formulate feasibility as the preservation of causal relationships among input features and present a method that uses (partial) structural causal models to generate actionable counterfactuals. When feasibility constraints cannot be easily expressed, we propose an alternative method that optimizes for feasibility as people interact with its output and provide oracle-like feedback. Our experiments on synthetic Bayesian networks and the widely used Adult-Income dataset show that our proposed methods can generate counterfactual explanations that satisfy feasibility constraints.
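To make the optimization view concrete, the following is a minimal sketch of gradient-based counterfactual search with a soft causal-feasibility penalty. The stand-in classifier, the feature names, and the structural equation tying education to age are illustrative assumptions for this sketch, not the paper's actual model or objective; the paper derives such constraints from a (partial) structural causal model.

```python
# Hypothetical sketch: gradient-based counterfactual search with a soft
# causal-feasibility penalty. The model, features, and structural equation
# below are illustrative assumptions, not the paper's actual setup.
import torch

torch.manual_seed(0)

# Stand-in binary classifier playing the role of the trained ML model
# whose decision we want to flip.
model = torch.nn.Sequential(torch.nn.Linear(3, 1), torch.nn.Sigmoid())

x0 = torch.tensor([0.2, 0.5, 0.1])      # original input: (age, education, hours)
x_cf = x0.clone().requires_grad_(True)  # counterfactual, initialized at x0
target = torch.tensor([1.0])            # desired model output (class 1)

opt = torch.optim.Adam([x_cf], lr=0.05)
for _ in range(500):
    opt.zero_grad()
    pred = model(x_cf)
    # 1) Push the prediction toward the target class.
    loss_pred = torch.nn.functional.binary_cross_entropy(pred, target)
    # 2) Stay close to the original input (proximity).
    loss_prox = torch.norm(x_cf - x0, p=1)
    # 3) Causal feasibility: age (x[0]) must not decrease, and education
    #    (x[1]) is assumed to follow the structural equation
    #    education = 0.5 * age; violations are penalized.
    loss_causal = torch.relu(x0[0] - x_cf[0]) + (x_cf[1] - 0.5 * x_cf[0]) ** 2
    loss = loss_pred + 0.5 * loss_prox + 2.0 * loss_causal
    loss.backward()
    opt.step()

print("original:      ", x0.tolist(), "->", model(x0).item())
print("counterfactual:", x_cf.detach().tolist(), "->", model(x_cf).item())
```

Expressing the causal constraints as soft penalties keeps the search differentiable end to end; a hard projection step could be substituted when a constraint must hold exactly.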