Adapt-and-Distill: Developing Small, Fast and Effective Pretrained Language Models for Domains
- Yunzhi Yao
- Shaohan Huang
- Wenhui Wang
- Li Dong
- Furu Wei
ACL-IJCNLP 2021
Large pre-trained models have achieved great success in many natural language processing tasks. However, when applied to specific domains, these models suffer from domain shift, and fine-tuning and online serving face latency and capacity constraints. In this paper, we present a general approach to developing small, fast and effective pre-trained models for specific domains. This is achieved by adapting off-the-shelf general pre-trained models and performing task-agnostic knowledge distillation in target domains. Specifically, we propose domain-specific vocabulary expansion in the adaptation stage and employ corpus-level occurrence probability to choose the size of the incremental vocabulary automatically. We then systematically explore different strategies to compress large pre-trained models for specific domains. We conduct experiments in the biomedical and computer science domains. The experimental results demonstrate that our approach outperforms the BERT BASE model on domain-specific tasks while being 3.3x smaller and 5.1x faster. The code and pre-trained models are available at this https URL.
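The abstract mentions using corpus-level occurrence probability to size the incremental vocabulary automatically. The exact criterion is not given here, so the following is only a minimal illustrative sketch of the general idea: count how often candidate domain tokens occur in a domain corpus, and greedily add the most frequent out-of-vocabulary tokens until the expanded vocabulary covers a target fraction of corpus occurrences. The function name, the coverage threshold, and the toy corpus are all hypothetical, not taken from the paper.

```python
from collections import Counter

def expand_vocabulary(general_vocab, domain_tokens, coverage=0.95):
    """Illustrative sketch: grow a general vocabulary with domain tokens
    until the expanded vocabulary covers `coverage` of all corpus
    occurrences. This is an assumption-based stand-in for the paper's
    corpus-level occurrence-probability criterion."""
    counts = Counter(domain_tokens)
    total = sum(counts.values())
    vocab = set(general_vocab)
    covered = sum(c for t, c in counts.items() if t in vocab)
    # Greedily add the most frequent out-of-vocabulary domain tokens.
    for token, count in counts.most_common():
        if covered / total >= coverage:
            break  # coverage target reached; vocabulary size is chosen
        if token not in vocab:
            vocab.add(token)
            covered += count
    return vocab

# Toy biomedical-flavored example (hypothetical data).
general = {"the", "protein", "cell"}
domain = ["the", "kinase", "kinase", "phosphorylation",
          "cell", "the", "kinase"]
expanded = expand_vocabulary(general, domain, coverage=0.9)
```

Because the loop stops at the coverage threshold, the size of the incremental vocabulary falls out of the corpus statistics rather than being fixed by hand, which is the spirit of the automatic selection described above.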