Improving Multilingual Translation by Representation and Gradient Regularization
- Yilin Yang,
- Akiko Eriguchi,
- Alexandre Muzio,
- Prasad Tadepalli,
- Stefan Lee,
- Hany Hassan Awadalla
EMNLP 2021 (The 2021 Conference on Empirical Methods in Natural Language Processing)
Published by the Association for Computational Linguistics
Multilingual Neural Machine Translation (NMT) enables one model to serve all translation directions, including ones that are unseen during training, i.e., zero-shot translation. Despite being theoretically attractive, current models often produce low-quality translations, commonly failing even to produce outputs in the right target language. In this work, we observe that such off-target translation is dominant even in strong multilingual systems trained on massive multilingual corpora.
To address this issue, we propose a joint approach that regularizes NMT models at both the representation level and the gradient level. At the representation level, we leverage an auxiliary target-language prediction task to regularize decoder outputs so that they retain information about the target language. At the gradient level, we leverage a small amount of direct data (on the order of thousands of sentence pairs) to regularize model gradients. Our results demonstrate that our approach is highly effective in both reducing off-target translations and improving zero-shot translation performance, by +5.59 and +10.38 BLEU on the WMT and OPUS datasets, respectively. Moreover, experiments show that our method still performs well when the small amount of direct data is not available.
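The abstract does not give implementation details, but the two regularizers can be pictured roughly as follows. The snippet below is a minimal PyTorch-style sketch under our own assumptions: the mean-pooling of decoder states, the helper names (`TargetLanguagePredictor`, `project_gradients`), and the conflict-projection rule are illustrative choices, not the authors' released code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class TargetLanguagePredictor(nn.Module):
    """Auxiliary classifier predicting the target-language ID from decoder
    hidden states (representation-level regularization)."""

    def __init__(self, hidden_dim: int, num_languages: int):
        super().__init__()
        self.proj = nn.Linear(hidden_dim, num_languages)

    def forward(self, decoder_states: torch.Tensor,
                tgt_lang_id: torch.Tensor) -> torch.Tensor:
        # decoder_states: (batch, seq_len, hidden_dim); pool over time steps.
        pooled = decoder_states.mean(dim=1)
        logits = self.proj(pooled)
        # Cross-entropy against the true target-language ID, shape (batch,).
        return F.cross_entropy(logits, tgt_lang_id)


def project_gradients(train_grads, oracle_grads):
    """Gradient-level regularization sketch: when a training gradient
    conflicts with the oracle gradient computed on a small direct-data
    batch (negative dot product), remove the conflicting component."""
    projected = []
    for g, g_oracle in zip(train_grads, oracle_grads):
        dot = torch.sum(g * g_oracle)
        if dot < 0:
            g = g - dot / (g_oracle.norm() ** 2 + 1e-12) * g_oracle
        projected.append(g)
    return projected
```

In a setup like this, the auxiliary loss would typically be added to the translation loss with a small weight (e.g., `loss = nmt_loss + lam * tlp_loss`), and the projected gradients would replace the raw gradients before the optimizer step; the exact weighting and projection schedule used in the paper are not specified in this abstract.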