nn-Meter: Towards Accurate Latency Prediction of DNN Inference on Diverse Edge Devices
- Li Lyna Zhang
- Shihao Han
- Jianyu Wei
- Ningxin Zheng
- Ting Cao
- Yuqing Yang
- Yunxin Liu
GetMobile: Mobile Computing and Communications, Vol. 25(4), pp. 19-23
SIGMOBILE Research Highlight
Inference latency has become a crucial metric for running Deep Neural Network (DNN) models on various mobile and edge devices. Latency prediction of DNN inference is therefore highly desirable for the many tasks where measuring latency on real devices is infeasible or too costly. Yet prediction is very challenging, and existing approaches fail to achieve high accuracy because runtime optimizations on diverse edge devices cause model-inference latency to vary. In this paper, we propose and develop nn-Meter, a novel and efficient system that accurately predicts DNN inference latency on diverse edge devices. The key idea of nn-Meter is to divide whole-model inference into kernels, i.e., the execution units on a device, and to conduct kernel-level prediction. nn-Meter builds atop two key techniques: (i) kernel detection, which automatically detects the execution units of model inference via a set of well-designed test cases; and (ii) adaptive sampling, which efficiently samples the most beneficial configurations from a large space to build accurate kernel-level latency predictors. nn-Meter achieves significantly higher prediction accuracy than existing approaches on four types of edge devices.
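The kernel-level idea can be made concrete with a small sketch: once a model has been decomposed into kernels (the device's execution units, e.g. a fused conv+bn+relu), the whole-model latency is predicted as the sum of per-kernel predictions. The sketch below is illustrative only and does not reproduce nn-Meter's implementation or API; the `Kernel` class, the `KERNEL_PREDICTORS` table, and the toy cost formulas are assumptions standing in for the trained kernel-level regressors nn-Meter builds from adaptively sampled on-device measurements.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class Kernel:
    """One detected execution unit, e.g. a fused conv-bn-relu block, with its configuration."""
    name: str                # kernel type, e.g. "conv-bn-relu"
    config: Dict[str, int]   # e.g. {"cin": 32, "cout": 64, "hw": 56, "k": 3, "stride": 1}


# Hypothetical per-kernel latency predictors. In nn-Meter these are machine-learned
# regressors trained on configurations sampled adaptively and measured on the target
# device; here they are toy closures returning latency in (fake) milliseconds.
KERNEL_PREDICTORS: Dict[str, Callable[[Dict[str, int]], float]] = {
    "conv-bn-relu": lambda c: 1e-6 * c["cin"] * c["cout"] * (c["hw"] // c["stride"]) ** 2 * c["k"] ** 2,
    "fc": lambda c: 1e-6 * c["cin"] * c["cout"],
}


def predict_model_latency(kernels: List[Kernel]) -> float:
    """Kernel-level prediction: whole-model latency is the sum of the predicted
    latencies of the kernels detected in the model graph."""
    return sum(KERNEL_PREDICTORS[k.name](k.config) for k in kernels)


if __name__ == "__main__":
    # A toy three-kernel model standing in for a real DNN graph after kernel detection.
    model = [
        Kernel("conv-bn-relu", {"cin": 3, "cout": 32, "hw": 224, "k": 3, "stride": 2}),
        Kernel("conv-bn-relu", {"cin": 32, "cout": 64, "hw": 112, "k": 3, "stride": 1}),
        Kernel("fc", {"cin": 64, "cout": 1000}),
    ]
    print(f"Predicted latency: {predict_model_latency(model):.3f} ms")
```

The design choice the sketch highlights is why kernels, not operators, are the prediction unit: runtimes fuse operators into larger execution units, so summing operator-level predictions would misestimate latency, whereas summing over detected kernels matches what the device actually executes.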