Targeted Adversarial Training for Natural Language Understanding
- Lis Pereira
- Xiaodong Liu
- Hao Cheng
- Hoifung Poon
- Jianfeng Gao
- Ichiro Kobayashi
NAACL
We present a simple yet effective Targeted Adversarial Training (TAT) algorithm to improve adversarial training for natural language understanding. The key idea is to introspect current mistakes and prioritize adversarial training steps to where the model errs the most. Experiments show that TAT can significantly improve accuracy over standard adversarial training on GLUE and attain new state-of-the-art zero-shot results on XNLI. Our code will be released at: https://github.com/namisan/mt-dnn.
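As a rough illustration of the key idea described above, the sketch below shows a targeted selection step: instead of applying adversarial perturbations uniformly, the batch is introspected for its highest-loss examples and only those are perturbed. This is a minimal, hypothetical sketch; the function names and the top-k selection heuristic are illustrative assumptions, not the algorithm from the paper's released code.

```python
# Hypothetical sketch of the "targeted" step in Targeted Adversarial
# Training (TAT): prioritize adversarial training where the model
# currently errs the most. Names and details are illustrative only.

def select_targets(per_example_losses, k):
    """Return indices of the k examples with the highest current loss."""
    ranked = sorted(range(len(per_example_losses)),
                    key=lambda i: per_example_losses[i],
                    reverse=True)
    return ranked[:k]

def targeted_adversarial_step(examples, per_example_losses, k, perturb):
    """Apply an adversarial perturbation only to the k worst examples."""
    targets = set(select_targets(per_example_losses, k))
    return [perturb(x) if i in targets else x
            for i, x in enumerate(examples)]

# Toy usage: `perturb` stands in for a gradient-based attack
# (e.g. a small adversarial step on input embeddings).
losses = [0.1, 2.3, 0.5, 1.8]
batch = ["a", "b", "c", "d"]
out = targeted_adversarial_step(batch, losses, k=2, perturb=lambda x: x + "*")
print(out)  # → ['a', 'b*', 'c', 'd*']
```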
Publication downloads
Multi-Task Deep Neural Networks for Natural Language Understanding (MT-DNN)
July 16, 2019
Multi-task learning toolkit for natural language understanding, including knowledge distillation.