Token-wise Curriculum Learning for Neural Machine Translation
- Chen Liang,
- Haoming Jiang,
- Xiaodong Liu,
- Pengcheng He,
- Wei Chen,
- Jianfeng Gao,
- Tuo Zhao
2021 Empirical Methods in Natural Language Processing
Existing curriculum learning approaches to Neural Machine Translation (NMT) require sampling sufficient amounts of "easy" samples from training data at the early training stage. This is not always achievable for low-resource languages, where the amount of training data is limited. To address this limitation, we propose a novel token-wise curriculum learning approach that creates sufficient amounts of easy samples. Specifically, the model learns to predict a short sub-sequence from the beginning part of each target sentence at the early stage of training, and the sub-sequence is then gradually expanded as training progresses. Such a curriculum design is inspired by the cumulative effect of translation errors, which makes the later tokens more difficult to predict than the beginning ones. Extensive experiments show that our approach consistently outperforms baselines on 5 language pairs, especially for low-resource languages. Combining our approach with sentence-level methods further improves the performance on high-resource languages.
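The abstract describes the curriculum only at a high level (predict a short target prefix early on, then expand it). The paper's exact schedule is not given here, so the following minimal Python sketch illustrates the general idea under assumed choices: the function names, the linear length schedule, and all parameters are illustrative assumptions, not the authors' implementation.

```python
import math

def curriculum_target_length(step, total_steps, full_length, min_length=1):
    """Assumed schedule: the visible target prefix grows linearly with
    training progress, from min_length up to the full sentence length."""
    progress = min(step / total_steps, 1.0)
    return max(min_length, math.ceil(progress * full_length))

def truncate_targets(target_batch, step, total_steps):
    """Keep only the beginning part of each target sentence; the prefix
    expands as training progresses (token-wise curriculum idea)."""
    return [
        tokens[:curriculum_target_length(step, total_steps, len(tokens))]
        for tokens in target_batch
    ]

# Example: early in training only short prefixes serve as targets,
# while near the end of training the full sentences are used.
batch = [["Das", "ist", "ein", "Test", "."], ["Hallo", "Welt", "!"]]
print(truncate_targets(batch, step=1000, total_steps=10000))  # short prefixes
print(truncate_targets(batch, step=9000, total_steps=10000))  # near-full sentences
```

In practice the truncated prefixes would replace the full target sequences in the training loss at each step; how the schedule is shaped (linear, step-wise, or data-dependent) is a design choice not specified in this abstract.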