XtremeDistil: Multi-stage Distillation for Massive Multilingual Models
- Subhabrata (Subho) Mukherjee
- Ahmed Awadallah
Annual Conference of the Association for Computational Linguistics (ACL 2020)
Deep and large pre-trained language models are the state of the art for various natural language processing tasks. However, the huge size of these models can be a deterrent to using them in practice. Some recent and concurrent works use knowledge distillation to compress these huge models into shallow ones. In this work we study knowledge distillation with a focus on multilingual Named Entity Recognition (NER). In particular, we study several distillation strategies and propose a stage-wise optimization scheme leveraging teacher internal representations that is agnostic of teacher architecture, and show that it outperforms strategies employed in prior work. Additionally, we investigate the role of several factors such as the amount of unlabeled data, annotation resources, model architecture, and inference latency. We show that our approach leads to massive distillation of multilingual BERT-like teacher models by up to 35x in terms of parameter compression and 51x in terms of latency speedup for batch inference, while retaining 95% of its F1-score for NER over 41 languages.
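To make the stage-wise idea in the abstract concrete, the sketch below shows a two-stage distillation loop in PyTorch style: a first stage that aligns a student internal representation with a teacher hidden layer on unlabeled data, followed by a second stage that distills the teacher's soft per-token predictions. This is a minimal illustration of the general technique, not the paper's training procedure or the released XtremeDistil code; the `teacher`/`student` models, their `encode()` methods and `hidden_size` attributes, and all hyper-parameters are hypothetical placeholders.

```python
# Illustrative two-stage distillation loop (not the paper's released code).
# Stage 1 aligns an internal student representation with a teacher hidden
# layer; Stage 2 distills the teacher's soft per-token predictions.
# `teacher`, `student`, their .encode()/.hidden_size members, and the
# hyper-parameters below are hypothetical placeholders.
import torch
import torch.nn as nn


def distill(teacher, student, unlabeled_loader, epochs_per_stage=3, lr=3e-5):
    teacher.eval()
    mse = nn.MSELoss()
    kl = nn.KLDivLoss(reduction="batchmean")

    # Stage 1: representation transfer on unlabeled data. A linear projection
    # maps the student's hidden size to the teacher's, which keeps the scheme
    # agnostic of the teacher architecture (only one internal layer is tapped).
    proj = nn.Linear(student.hidden_size, teacher.hidden_size)
    opt1 = torch.optim.Adam(list(student.parameters()) + list(proj.parameters()), lr=lr)
    for _ in range(epochs_per_stage):
        for batch in unlabeled_loader:
            with torch.no_grad():
                t_hidden = teacher.encode(batch)  # teacher internal representation
            loss = mse(proj(student.encode(batch)), t_hidden)
            opt1.zero_grad()
            loss.backward()
            opt1.step()

    # Stage 2: distill the teacher's soft predictions (per-token label logits for NER).
    opt2 = torch.optim.Adam(student.parameters(), lr=lr)
    for _ in range(epochs_per_stage):
        for batch in unlabeled_loader:
            with torch.no_grad():
                t_logits = teacher(batch)
            s_logits = student(batch)
            loss = kl(s_logits.log_softmax(dim=-1), t_logits.softmax(dim=-1))
            opt2.zero_grad()
            loss.backward()
            opt2.step()
    return student
```

Freezing the teacher and optimizing the stages separately, rather than mixing all losses at once, reflects the stage-wise optimization the abstract describes; the specific losses and projection shown here are one common way to realize it.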
Downloads
XtremeDistil
April 22, 2022
XtremeDistil is a framework for distilling and compressing massive multilingual neural network models into tiny, efficient models for AI at scale.