SAPPHIRE: An Always-on Context-aware Computer Vision System for Portable Devices
- Swagath Venkataramani,
- Victor Bahl,
- Xian-Sheng Hua,
- Jie Liu,
- Jin Li,
- Bodhi Priyantha,
- Matthai Philipose,
- Shuayb Zarar
IEEE/ACM Design, Automation and Test in Europe Conference (DATE)
Published by ACM - Association for Computing Machinery
Being aware of objects in the ambient environment provides a new dimension of context awareness. Towards this goal, we present a system that exploits powerful computer-vision algorithms in the cloud by collecting data through always-on cameras on portable devices. To reduce communication-energy costs, our system allows client devices to continually analyze streams of video and distill out frames that contain objects of interest. Through a dedicated image-classification engine, SAPPHIRE, we show that if an object appears in 5% of all frames, we end up selecting 30% of them to detect the object 90% of the time: a 70% data reduction on the client device at a cost of 60 mW of power (45 nm ASIC). By doing so, we demonstrate system-level energy reductions of 2×. Thanks to multiple levels of pipelining and parallel vector-reduction stages, SAPPHIRE consumes only 3.0 mJ/frame and 38 pJ/OP (estimated to be 11.4× lower than a 45 nm GPU) while delivering slightly higher peak performance (29 vs. 20 GFLOPS). Further, compared to a parallelized software implementation on a mobile CPU, it provides a processing speedup of up to 235× (1.81 s vs. 7.7 ms/frame), which is necessary to meet the real-time processing needs of an always-on context-aware system.
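The client-side flow summarized above can be pictured with a minimal sketch: frames from an always-on camera are scored by a lightweight on-device classifier (realized as the SAPPHIRE ASIC in the paper), and only frames whose confidence for an object of interest exceeds a threshold are forwarded to the cloud for full-strength vision processing. The function names, threshold value, and upload stub below are hypothetical placeholders for illustration, not the paper's implementation.

```python
# Illustrative client-side frame filter (sketch only, not the paper's code).
# score_frame() stands in for the on-device classification engine; here it
# is a stub that must be replaced with a real classifier.

import cv2  # OpenCV, used only for camera capture

CONFIDENCE_THRESHOLD = 0.5  # hypothetical cutoff for "object of interest"


def score_frame(frame) -> float:
    """Placeholder for the lightweight on-device classifier.
    Should return a confidence in [0, 1] that an object of interest is present."""
    return 0.0  # stub: replace with the real on-device model


def upload_to_cloud(frame) -> None:
    """Placeholder for sending a selected frame to the cloud vision service."""
    pass


def filter_stream(camera_index: int = 0) -> None:
    """Continuously read frames and forward only those likely to contain an
    object of interest, reducing communication energy on the client device."""
    cap = cv2.VideoCapture(camera_index)
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            if score_frame(frame) >= CONFIDENCE_THRESHOLD:
                # In the paper's setting, roughly 30% of frames pass this gate.
                upload_to_cloud(frame)
    finally:
        cap.release()
```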
© ACM. This is the author's version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution. The definitive version can be found at http://dl.acm.org.