PLATON: Pruning Large Transformer Models with Upper Confidence Bound of Weight Importance
- Qingru Zhang
- Simiao Zuo
- Chen Liang
- Alexander Bukharin
- Pengcheng He
- Weizhu Chen
- Tuo Zhao
Organized by Microsoft
Large Transformer-based models have exhibited superior performance in various natural language processing and computer vision tasks. However, these models contain enormous numbers of parameters, which restricts their deployment in real-world applications. To reduce the model size, researchers prune these models based on the weights' importance scores. However, such scores are usually estimated on mini-batches during training, which incurs large variability/uncertainty due to mini-batch sampling and complicated training dynamics. As a result, commonly used pruning methods may prune some crucial weights because of this uncertainty, which makes training unstable and hurts generalization. To resolve this issue, we propose PLATON, which captures the variability of importance scores by quantifying the uncertainty of importance estimation. In particular, for weights with low importance scores but high uncertainty, PLATON tends to retain them and explore their capacity. We conduct extensive experiments with several Transformer-based models on natural language understanding, question answering, and image classification to validate the effectiveness of PLATON. Results demonstrate that PLATON yields notable improvement under different sparsity levels. Our code will be publicly available.
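The upper-confidence-bound idea can be illustrated with a minimal sketch: smooth the per-weight importance estimates over mini-batches with an exponential moving average, track the deviation of each fresh estimate from that average as an uncertainty term, and score each weight by combining the two, so weights that are either clearly important or still uncertain are retained. The function names, the EMA coefficients `beta1`/`beta2`, and the product form of the score are illustrative assumptions, not the paper's official implementation.

```python
import numpy as np

def ucb_pruning_scores(importance_batches, beta1=0.85, beta2=0.95):
    """Combine smoothed importance with estimation uncertainty.

    importance_batches: iterable of per-weight importance estimates
    (e.g., |w * grad|), one array per mini-batch. Hypothetical sketch.
    """
    i_bar = np.zeros_like(importance_batches[0])
    u_bar = np.zeros_like(importance_batches[0])
    for i_t in importance_batches:
        i_bar = beta1 * i_bar + (1 - beta1) * i_t  # smoothed importance
        u_t = np.abs(i_t - i_bar)                  # local variability of the estimate
        u_bar = beta2 * u_bar + (1 - beta2) * u_t  # smoothed uncertainty
    # High score = important OR still uncertain -> the weight is kept longer.
    return i_bar * u_bar

def prune_mask(scores, sparsity):
    """Keep the (1 - sparsity) fraction of weights with the highest scores."""
    k = int(scores.size * (1 - sparsity))
    threshold = np.partition(scores.ravel(), -k)[-k]
    return scores >= threshold
```

A plain importance-based pruner would threshold `i_bar` alone; the uncertainty factor is what lets low-score, high-variance weights survive a few more steps before a final pruning decision.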