REACT: Streaming Video Analytics On The Edge With Asynchronous Cloud Support
- Anurag Ghosh,
- Srinivasan Iyengar,
- Stephen Lee,
- Anuj Rathore,
- Venkat Padmanabhan
8th ACM/IEEE Conference on Internet of Things Design and Implementation (IoTDI)
Emerging Internet of Things (IoT) and mobile computing applications are expected to support latency-sensitive deep neural network (DNN) workloads. To realize this vision, the Internet is evolving towards an edge-computing architecture, where computing infrastructure is located closer to the end device to help achieve low latency. However, edge computing has limited resources compared to cloud environments and thus cannot run the large DNN models that often deliver the highest accuracy. In this work, we develop REACT, a framework that leverages cloud resources to asynchronously execute large, more accurate DNN models and uses their outputs to improve the predictions of smaller models running on edge devices. To do so, we propose a novel edge-cloud fusion algorithm that fuses edge and cloud predictions, achieving both low latency and high accuracy. We evaluate our approach extensively and show that it significantly improves accuracy over baseline approaches. We focus specifically on object detection in videos (applicable in many video analytics scenarios) and show that fused edge-cloud predictions can outperform the accuracy of edge-only and cloud-only scenarios by as much as 50%. REACT shows that for Edge AI, the choice between offloading and on-device inference is not binary: redundant execution at cloud and edge locations can complement each other when carefully employed.
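The abstract describes fusing object detections produced by a small edge model with those from a larger cloud model. The details of REACT's fusion algorithm are in the full paper; the sketch below is only a hypothetical illustration of the general idea, assuming detections are `(box, label, score)` tuples and using simple IoU matching to merge the two prediction sets (the function names `iou` and `fuse` and the threshold value are our own, not from the paper).

```python
# Hypothetical sketch of edge-cloud detection fusion (not REACT's actual
# algorithm): overlapping boxes are deduplicated by IoU, keeping the more
# confident detection; non-overlapping ones from either side are retained.

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def fuse(edge_dets, cloud_dets, iou_thresh=0.5):
    """Merge two detection lists; each detection is (box, label, score).
    Where an edge box overlaps a cloud box, keep the higher-scoring one;
    edge boxes with no cloud match are treated as new objects."""
    fused = list(cloud_dets)
    for e in edge_dets:
        match = next((c for c in fused if iou(e[0], c[0]) >= iou_thresh), None)
        if match is None:
            fused.append(e)                    # object the cloud model missed
        elif e[2] > match[2]:
            fused[fused.index(match)] = e      # edge detection is more confident
    return fused
```

For example, a stale but accurate cloud detection of a car can override a low-confidence edge detection of the same box, while a fast-moving person that only the edge model saw is still kept in the fused output.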