Current AI models can learn spurious correlations during the data-fitting process, which can cause them to fail to generalize to new, unseen domains. To resolve this problem, we resort to causal inference, with the expectation of learning causal relations that are invariant and stable across environments. Our departure from traditional transfer learning lies in the causal perspective: our goal is to discover and exploit causal relations for out-of-distribution generalization. We will apply our models to safety-critical tasks such as healthcare and security.
People
Chang Liu
Senior Researcher