Least Privilege Rendering in a 3D Web Browser
- John Vilk
- David Molnar
- Eyal Ofek
- Chris Rossbach
- Ben Livshits
- Alexander Moshchuk
- Helen J. Wang
- Ran Gal
Microsoft Research Technical Report MSR-TR-2014-25
Emerging platforms such as the Kinect, Epson Moverio, and Meta SpaceGlasses enable immersive experiences in which applications display content on multiple walls and devices, detect objects in the world, and display content near those objects. App stores for these platforms let users run applications from third parties. Unfortunately, to display content properly near objects and on room surfaces, these applications need highly sensitive information, such as video and depth streams of the room, which creates a serious privacy problem for users.
To solve this problem, we introduce two new abstractions that enable least privilege interactions between apps and the room. First, the room skeleton provides least privilege for rendering, in contrast to previous approaches that focus on inputs alone. Second, the detection sandbox lets an application register content to display if an object is detected, but prevents the application from learning whether the object is present.
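To make these abstractions concrete, the TypeScript sketch below shows one way a web application might interact with them. The interface and method names here (RoomSkeleton, Screen, DetectionSandbox, registerContent, and so on) are illustrative assumptions for this sketch, not the API defined by SurroundWeb; the key property they illustrate is that the app sees only renderable surfaces and never learns detection outcomes.

```typescript
// Hypothetical interfaces illustrating the two abstractions.
// Names and signatures are assumptions, not SurroundWeb's actual API.

// A Screen describes a flat renderable surface in the room: its size and
// position, but no video, depth, or other raw sensor data.
interface Screen {
  id: number;
  widthPx: number;                               // renderable width in pixels
  heightPx: number;                              // renderable height in pixels
  center: { x: number; y: number; z: number };   // position relative to the room origin
}

// The room skeleton exposes only the set of renderable screens,
// providing least privilege for rendering output.
interface RoomSkeleton {
  getScreens(): Screen[];
  render(screenId: number, html: string): void;
}

// The detection sandbox lets an app register content keyed by an object
// class; the runtime shows the content near the object only if it is
// detected, without ever telling the app whether detection occurred.
interface DetectionSandbox {
  registerContent(objectClass: string, html: string): void;
}

// Example app: show lyrics on the largest available screen and, via the
// sandbox, attach a label near any detected coffee mug.
function karaokeApp(room: RoomSkeleton, sandbox: DetectionSandbox): void {
  const screens = room.getScreens();
  if (screens.length === 0) return;              // no renderable surfaces in this room

  const largest = screens.reduce((a, b) =>
    a.widthPx * a.heightPx >= b.widthPx * b.heightPx ? a : b);
  room.render(largest.id, "<h1>Lyrics go here</h1>");

  // The app never learns whether a mug is actually present in the room.
  sandbox.registerContent("coffee mug", "<div>Refill?</div>");
}
```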
To demonstrate our ideas, we have built SurroundWeb, a 3D browser that lets web applications use object recognition and room display capabilities through our least privilege abstractions. We used SurroundWeb to build applications including immersive presentations and karaoke. To assess the privacy of our approach, we conducted user surveys demonstrating that the information revealed by our abstractions is acceptable. SurroundWeb does not impose unacceptable runtime overheads: after a one-time setup procedure that scans a room for projectable surfaces in about a minute, our prototype renders immersive multi-display web rooms at more than 30 frames per second with up to 25 screens and a display resolution of up to 1,440×720.