Beyond Preserved Accuracy: Evaluating Loyalty and Robustness of BERT Compression
- Canwen Xu,
- Wangchunshu Zhou,
- Tao Ge,
- Ke Xu,
- Julian McAuley,
- Furu Wei
2021 Empirical Methods in Natural Language Processing
Recent studies on the compression of pretrained language models (e.g., BERT) usually use preserved accuracy as the metric for evaluation. In this paper, we propose two new metrics, label loyalty and probability loyalty, that measure how closely a compressed model (i.e., the student) mimics the original model (i.e., the teacher). We also explore the effect of compression on robustness under adversarial attacks. We benchmark quantization, pruning, knowledge distillation, and progressive module replacing in terms of loyalty and robustness. By combining multiple compression techniques, we provide a practical strategy for achieving better accuracy, loyalty, and robustness.
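To make the two metrics concrete, below is a minimal sketch of how student–teacher agreement could be computed, assuming label loyalty is the fraction of examples on which the student's predicted label matches the teacher's, and probability loyalty is derived from the Jensen–Shannon distance between their output distributions. The function names, the exact normalization, and the use of `scipy` are illustrative assumptions, not the paper's reference implementation.

```python
import numpy as np
from scipy.spatial.distance import jensenshannon


def label_loyalty(teacher_probs: np.ndarray, student_probs: np.ndarray) -> float:
    """Fraction of examples where the student's argmax label matches the teacher's."""
    return float(np.mean(teacher_probs.argmax(-1) == student_probs.argmax(-1)))


def probability_loyalty(teacher_probs: np.ndarray, student_probs: np.ndarray) -> float:
    """One minus the mean Jensen-Shannon distance between teacher and student
    output distributions; higher values mean the student tracks the teacher more closely.
    (Assumed formulation based on the abstract's description.)"""
    distances = [jensenshannon(t, s, base=2) for t, s in zip(teacher_probs, student_probs)]
    return float(1.0 - np.mean(distances))


# Toy usage on two 3-class examples (rows are per-example probability distributions).
teacher = np.array([[0.7, 0.2, 0.1], [0.1, 0.8, 0.1]])
student = np.array([[0.6, 0.3, 0.1], [0.2, 0.7, 0.1]])
print(label_loyalty(teacher, student))        # 1.0: labels agree on both examples
print(probability_loyalty(teacher, student))  # < 1.0: distributions differ slightly
```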