Research talk: Enhancing the robustness of massive language models via invariant risk minimization
Despite the dramatic recent progress in natural language processing (NLP) afforded by large pretrained language models, important limitations remain. A growing body of work demonstrates that such models are easily fooled by adversarial attacks and generalize poorly out of distribution, because they tend to learn spurious, non-causal correlations. This talk explores how to reduce the impact of spurious correlations in large language models using the so-called invariance principle, which states that only relationships invariant across training environments should be learned. The talk presents results showing that language models trained via invariant risk minimization (IRM), rather than traditional empirical risk minimization (ERM), achieve better out-of-distribution generalization.
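For readers unfamiliar with IRM, the sketch below illustrates the IRMv1 relaxation from the original IRM paper (Arjovsky et al., 2019), which is the standard practical instantiation of the invariance principle; it is not code from the talk. Assuming a standard PyTorch classification setup, with illustrative names `model`, `env_batches`, and `penalty_weight`, each training environment contributes its empirical risk plus a penalty that measures how far a shared classifier is from being simultaneously optimal in that environment.

```python
# Minimal IRMv1 sketch (Arjovsky et al., 2019), not from the talk.
import torch
import torch.nn.functional as F


def irm_penalty(logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """IRMv1 penalty: squared gradient of the per-environment risk with
    respect to a fixed scalar "dummy" multiplier on the logits. A small
    gradient means the classifier is (locally) optimal for this environment."""
    scale = torch.ones(1, requires_grad=True, device=logits.device)
    loss = F.cross_entropy(logits * scale, labels)
    (grad,) = torch.autograd.grad(loss, [scale], create_graph=True)
    return grad.pow(2).sum()


def irm_objective(model, env_batches, penalty_weight=100.0):
    """Average over environments of (empirical risk + weighted invariance penalty).
    `env_batches` is assumed to yield one (inputs, labels) batch per environment."""
    losses = []
    for inputs, labels in env_batches:
        logits = model(inputs)
        erm_risk = F.cross_entropy(logits, labels)
        losses.append(erm_risk + penalty_weight * irm_penalty(logits, labels))
    return torch.stack(losses).mean()
```

Setting `penalty_weight = 0` recovers plain ERM pooled over environments; large weights force the model toward predictors whose optimal classifier is the same in every environment, which is the invariance the abstract describes.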
Learn more about the 2021 Microsoft Research Summit: https://Aka.ms/researchsummit
- Track: Causal Machine Learning
- Date:
- Speakers: Robert West
- Affiliation: EPFL
Causal Machine Learning

- Opening remarks: Causal Machine Learning
  Speakers: Cheng Zhang
- Research talk: Causal ML and business
  Speakers: Jacob LaRiviere
- Panel: Challenges and opportunities of causality
  Speakers: Susan Athey, Yoshua Bengio, Judea Pearl
- Research talk: Causal ML and fairness
  Speakers: Allison Koenecke
- Panel: Causal ML Research at Microsoft
  Speakers: Daniel McDuff, Javier González, Justin Ding
- Research talk: Post-contextual-bandit inference
  Speakers: Nathan Kallus
- Panel: Causal ML in industry
  Speakers: Ya Xu, Totte Harinen, Dawen Liang