Profiling and optimizing deep learning inference on mobile GPUs
- Shiqi Jiang,
- Lihao Ran,
- Ting Cao,
- Yusen Xu,
- Yunxin Liu
Proceedings of the 11th ACM SIGOPS Asia-Pacific Workshop on Systems (APSys '20)
Mobile GPUs, ubiquitous on almost every smartphone, are increasingly exploited for deep learning inference. In this paper, we present our measurements of inference performance on mobile GPUs. Our observations suggest that mobile GPUs are underutilized. We study this inefficiency in depth and find that one of the root causes is improper partitioning of the compute workload. To address this, we propose a heuristics-based workload partitioning approach that considers both performance and overheads on mobile devices. Evaluation results show that our approach reduces inference latency by up to 32.8% across various devices and neural networks.
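The core trade-off behind such a partitioning heuristic can be illustrated with a minimal sketch: larger partitions amortize the fixed per-dispatch overhead, while the total compute time stays roughly constant. All names, cost numbers, and the latency model below are illustrative assumptions, not the paper's actual method.

```python
import math

def estimate_latency(total_work, partition_size, throughput, dispatch_overhead):
    """Toy latency model (assumed): per-dispatch overhead plus compute time.

    total_work        -- number of work items in the kernel
    partition_size    -- work items handled per GPU dispatch
    throughput        -- work items processed per millisecond
    dispatch_overhead -- fixed cost (ms) of launching one dispatch
    """
    num_dispatches = math.ceil(total_work / partition_size)
    compute_time = total_work / throughput
    return num_dispatches * dispatch_overhead + compute_time

def best_partition(total_work, candidates, throughput, dispatch_overhead):
    """Pick the candidate partition size with the lowest estimated latency."""
    return min(candidates,
               key=lambda p: estimate_latency(total_work, p,
                                              throughput, dispatch_overhead))

# Hypothetical numbers: 1e6 work items, 1e5 items/ms, 0.1 ms per dispatch.
size = best_partition(1_000_000, [1_000, 10_000, 100_000], 1e5, 0.1)
print(size)  # the largest candidate wins: fewer dispatches, less overhead
```

Under this simplified model, a fine-grained partition of 1,000 items would incur 1,000 dispatches (100 ms of overhead), while 100,000-item partitions need only 10 dispatches (1 ms); a real heuristic would also weigh measured per-partition performance rather than a fixed throughput.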