Empowering In-Browser Deep Learning Inference on Edge Devices with Just-in-Time Kernel Optimizations
- Fucheng Jia ,
- Shiqi Jiang ,
- Ting Cao ,
- Wei Cui ,
- Tianrui Xia ,
- Xu Cao ,
- Yuanchun Li ,
- Qipeng Wang ,
- Deyu Zhang ,
- Ju Ren ,
- Yunxin Liu ,
- Lili Qiu ,
- Mao Yang
The 22nd Annual International Conference on Mobile Systems, Applications and Services (MobiSys '24) |
Published by ACM
Web is increasingly becoming the primary platform to deliver AI services onto edge devices, making in-browser deep learning (DL) inference more prominent. Nevertheless, the heterogeneity of edge devices, combined with the underdeveloped state of Web hardware acceleration practices, hinders current in-browser inference from achieving its full performance potential on target devices.
To address this issue, this paper presents the pioneering in-browser inference system, nnJIT, which enables just-in-time (JIT) auto-generation of optimized computing kernels for edge devices. nnJIT is built upon two novel techniques that significantly reduce kernel search and compilation overhead while consistently improving performance: Tensor-Web Compiling Co-Design lowers compiling costs by around 100× by eliminating redundant and ineffective compiling passes; Web-Specific Lite Kernel Optimization Space reduces kernel tuning costs by focusing on Web programming requirements and efficient device resource utilization, pruning the optimization space from millions of candidates to only dozens.
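The space-pruning idea above can be illustrated with a small sketch. The parameter names, value ranges, and Web-specific constraints below are all assumptions chosen for illustration, not nnJIT's actual tuning space; the point is only to show how hard constraints derived from the Web platform (e.g., 128-bit Wasm SIMD lanes, tight register budgets in browser engines) collapse a combinatorial kernel space to a small searchable subset.

```python
from itertools import product

# Hypothetical tuning parameters for a matmul kernel (illustrative only).
full_space = list(product(
    [1, 2, 4, 8, 16, 32, 64],   # tile_m
    [1, 2, 4, 8, 16, 32, 64],   # tile_n
    [1, 2, 4, 8, 16, 32, 64],   # tile_k
    [1, 2, 4, 8],               # loop unroll factor
    [True, False],              # use SIMD vectorization
))

def web_friendly(cfg):
    """Assumed Web-specific constraints, for illustration only."""
    tile_m, tile_n, tile_k, unroll, simd = cfg
    # Wasm SIMD operates on 128-bit vectors (4 fp32 lanes), so a
    # vectorized inner tile should be a multiple of 4.
    if simd and tile_n % 4 != 0:
        return False
    # Browser engines expose few registers: bound the accumulator tile.
    if tile_m * tile_n > 64:
        return False
    # Unrolling beyond the reduction tile wastes code size.
    return unroll <= tile_k

lite_space = [c for c in full_space if web_friendly(c)]
print(f"full: {len(full_space)} configs -> lite: {len(lite_space)} configs")
```

A JIT tuner would then time-measure only the configurations in `lite_space` on the target device, which is what makes on-the-fly kernel generation affordable within seconds rather than hours.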
nnJIT is evaluated on modern models, e.g., BART, T5, and Llama 2, across a range of edge devices including laptops and smartphones, using different browsers and hardware from ARM, Intel, AMD, and Nvidia. The results show that nnJIT can deliver up to 8.2× speedup within 30 seconds compared to existing baselines.