July 27, 2018

At the bleeding edge of Intelligent Edges

Location: Redmond, Washington

Organizers:
Ganesh Ananthanarayanan
Victor Bahl

Venue: Microsoft Research Building 99
14820 NE 36th Street
Room 1919
Redmond, Washington

The intelligent edge revolution!

(All times are in Pacific Time)

9:00 AM–9:10 AM Opening Remarks
Ganesh Ananthanarayanan
Victor Bahl
 
9:10 AM–9:30 AM 10 years in the making, where are we?
Victor Bahl, Distinguished Scientist, Microsoft Research
 
9:30 AM-10:15 AM Keynote: Realizing the intelligent edge, from lite and nimble to heavy and ultra fast
Rashmi Misra, General Manager IOT & AI Solutions, Partner Device Solutions, Microsoft
Dr. Rashmi Misra is General Manager of IoT & AI Solutions in the Partner Device Solutions division of Microsoft. Working closely with ecosystem partners, Rashmi is responsible for delivering forefront solutions at Microsoft that combine IoT, AI, and the edge, as well as other emergent technologies. She has been at the forefront of telecoms and IP media for over 20 years, including the previous six years at HPE managing network applications, cloud, media, IoT, and M2M businesses, most recently as Head of AI Solutions for High Performance Compute. Rashmi previously led Motorola’s worldwide System Integration portfolio and was instrumental in deploying some of the first expert and AI systems in 2G, 3G, and 4G networks. She holds six international patents for intelligent optimization, automated network health, and monetization of traffic on telecom networks. She has a doctorate in Artificial Intelligence on Multi-Agent Reasoning Systems from Exeter University, UK, and an MBA from Warwick University, UK.
 
 
10:15 AM-10:30 AM Coffee Break
 
 
10:30 AM-11:45 AM Session I: Video Analytics
 
Using Edge Computing for Privacy in Real-Time Video Analytics
Speaker: Mahadev Satyanarayanan, Carnegie Mellon University
 
Live video offers several advantages relative to other sensing modalities. It is flexible and open-ended: new image and video processing algorithms can extract new information from existing video streams. It offers high resolution, wide coverage, and low cost relative to other sensing modalities. Its passive nature means that a participant does not have to wear a special device, install an app, or do anything special. He or she merely has to be visible to a camera. Privacy is clearly a major concern with video in public spaces. In this talk, I will describe how Edge Computing can be used to denature live video thereby making it “safe” from a privacy point of view. Using OpenFace, our new open-source face recognition system that approaches state-of-the-art accuracy, we are able to selectively obscure faces according to user-specified policies at full frame rate. This enables privacy management for live video analytics while providing a secure approach for handling retrospective policy exceptions.
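
To make the denaturing step concrete, here is a minimal sketch in Python. It uses OpenCV's bundled Haar-cascade face detector as a stand-in (the actual system uses OpenFace, which recognizes individuals so that policies can selectively obscure particular faces); the per-face policy hook and the webcam source are illustrative assumptions, not the system described in the talk.

```python
import cv2

# Stand-in for OpenFace: detect faces with OpenCV's bundled Haar cascade.
# The real system recognizes *who* each face is, so policies can be per-person.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def denature_frame(frame, policy=lambda face_id: True):
    """Blur every detected face for which the policy says 'obscure'."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for i, (x, y, w, h) in enumerate(detector.detectMultiScale(gray, 1.1, 5)):
        if policy(i):  # hypothetical per-face policy hook
            frame[y:y+h, x:x+w] = cv2.GaussianBlur(frame[y:y+h, x:x+w], (51, 51), 0)
    return frame

cap = cv2.VideoCapture(0)          # edge node reads the camera locally
while True:
    ok, frame = cap.read()
    if not ok:
        break
    safe = denature_frame(frame)   # only the denatured stream leaves the edge
    cv2.imshow("denatured", safe)
    if cv2.waitKey(1) == 27:       # Esc to quit
        break
cap.release()
```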


 
Live Video Analytics – the “killer app” for edge computing!
Speaker: Ganesh Ananthanarayanan, Microsoft Research
 
Video cameras are pervasively deployed for security and smart-city scenarios, with millions of them in large cities worldwide. Our position is that a hierarchical architecture of public clouds, private clusters, and edges, extending all the way down to compute at the cameras, is the only viable approach that can meet the strict requirements of live and large-scale video analytics. We believe that cameras represent the most challenging of “things” in the Internet of Things, and live video analytics may well represent the killer application for edge computing. In this talk, I’ll describe Cascade, a system that optimizes queries on live videos by carefully selecting their “query plan” (implementations and their knobs) and placing them across the hierarchy of clusters to maximize average query accuracy. Cascade introduces “dominant demand” to identify the best trade-off between multiple resources and accuracy, and narrows the search space by identifying a “Pareto band” of promising configurations. Deployment results show that Cascade improves accuracy by 25X compared to competing solutions and is within 6% of optimum.
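
The following toy sketch illustrates the two ideas named above, dominant demand and a Pareto band of configurations. The configuration dictionaries, slack parameter, and resource names are hypothetical; this is not Cascade's actual implementation or search procedure.

```python
# Illustrative sketch of query-plan selection (hypothetical data structures).

def dominant_demand(config, capacity):
    """Largest fraction of any single resource (CPU, uplink, ...) the config needs."""
    return max(config["demand"][r] / capacity[r] for r in capacity)

def pareto_band(configs, capacity, slack=0.1):
    """Keep configs within `slack` of the accuracy/demand Pareto frontier."""
    keep = []
    for c in configs:
        d = dominant_demand(c, capacity)
        dominated = any(o["accuracy"] >= c["accuracy"] * (1 + slack) and
                        dominant_demand(o, capacity) <= d
                        for o in configs if o is not c)
        if not dominated:
            keep.append(c)
    return keep

def pick_plan(configs, capacity):
    """Among promising configs, take the most accurate one that fits the resources."""
    feasible = [c for c in pareto_band(configs, capacity)
                if dominant_demand(c, capacity) <= 1.0]
    return max(feasible, key=lambda c: c["accuracy"], default=None)

# Example: two candidate plans for one camera's pipeline.
configs = [
    {"name": "dnn_full_res",   "accuracy": 0.95, "demand": {"cpu": 8, "uplink": 20}},
    {"name": "dnn_downscaled", "accuracy": 0.80, "demand": {"cpu": 2, "uplink": 4}},
]
print(pick_plan(configs, capacity={"cpu": 4, "uplink": 10}))  # -> dnn_downscaled
```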


 
Reinventing Video Streaming for Distributed Vision Analytics
Speaker: Aakanksha Chowdhery, Google
 
Driven by the ubiquity of camera-equipped devices and the prohibitive cost of modern vision techniques, we see a growing need for a custom video streaming protocol that streams videos from cameras to cloud servers to perform neural-network-based video analytics. In the past decade, numerous efforts have optimized video streaming protocols to provide better quality-of-experience to users. However, these streaming protocols do not optimize the analytics quality (accuracy) of vision analytics (deep neural networks). In this talk, I highlight several opportunities to substantially improve the tradeoffs between bandwidth usage and inference accuracy by intelligently leveraging the computation at cameras, edge servers and the cloud.
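
As a hedged illustration of the camera-side of such a protocol, the sketch below adapts an encoding ladder based on the confidence reported back by the server's model. The ladder, thresholds, and feedback values are made up for the example and are not from the talk.

```python
# Hypothetical camera-side control loop: spend uplink bandwidth only when the
# server's model reports low confidence on the cheap encoding.

import dataclasses

@dataclasses.dataclass
class Encoding:
    resolution: int   # pixels on the long side
    quality: int      # JPEG/H.264 quality knob

LADDER = [Encoding(360, 30), Encoding(720, 50), Encoding(1080, 80)]

def choose_encoding(server_confidence: float, level: int) -> int:
    """Step up the ladder when inference is unsure, step down when it is confident."""
    if server_confidence < 0.6 and level < len(LADDER) - 1:
        return level + 1          # send more detail; costs more bandwidth
    if server_confidence > 0.9 and level > 0:
        return level - 1          # model is confident; save bandwidth
    return level

# Simulated feedback from the edge/cloud detector for successive frames.
level = 0
for confidence in [0.95, 0.55, 0.40, 0.70, 0.92]:
    level = choose_encoding(confidence, level)
    print(f"confidence={confidence:.2f} -> stream at {LADDER[level]}")
```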
 
 
11:45 AM-1:00 PM Lunch
 
 
1:00 PM-1:45 PM Keynote: Microsoft IoT & the Intelligent Edge
Rushmi Malaviarachchi, Partner Group Program Manager, Microsoft
Rushmi Malaviarachchi is the Group Program Manager for Windows IoT engineering, delivering Windows to power the intelligent edge in fixed-purpose and IoT devices. This includes kiosks and smart city systems, industrial automation controllers, robots, gateways, and everything in between. He is currently responsible for Windows 10 IoT Core and Windows 10 IoT Enterprise. Rushmi is an experienced product leader who has built and led great teams delivering innovative products across a number of areas during his 17-year tenure at Microsoft, including 3D, apps, phone, Store, security, and identity.


1:45 PM-3:00 PM Session II: Networks & Wireless
 
Scalable redundant execution at the edge for low-latency Web over cellular networks
Speaker: Sanjay Rao, Purdue University
 
Application latencies directly impact business revenue; by some estimates, every 100 ms of latency costs 1% in sales for popular e-commerce sites. Low latency is also critical for many emerging interactive applications. Yet there is a 6X gap in web download times today between cellular devices and desktops. These challenges stem from the mismatch between the Web download process and cellular networks. In this talk, I will begin by presenting PARCEL, a cloud-based proxy system we have developed that tackles this challenge by parsing and executing Web page code redundantly to identify and proactively push objects needed by the client. While PARCEL can achieve significant latency speedups, deploying such proxies at carrier scale to millions of users requires that the computational overheads of the approach be economized. Addressing computational bottlenecks is critical to enabling edge deployments, which translate to further latency reductions. To this end, I will present Nutshell, another system we have developed that reduces the computational overheads at the proxy through a new technique called ‘whittling’, which is related to program slicing. Evaluations of prototype implementations in live LTE settings with 78 top Alexa Web pages show that Nutshell can not only achieve a speed-up in mean page load times of 1.5x over the recently standardized HTTP/2, but also sustain 27% more requests per second than a proxy performing fully redundant execution.
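
A toy sketch of the proxy-side idea follows: parse the page on behalf of the client, discover the sub-resources it will need, and push them proactively. PARCEL additionally executes the page's JavaScript, and Nutshell whittles that execution down to the slice that discovers object URLs; neither is shown here, and the example page is fabricated.

```python
# Toy proxy-side sketch: discover the objects a page needs and push them.

from html.parser import HTMLParser

class ObjectFinder(HTMLParser):
    """Collect URLs of sub-resources referenced by a page."""
    def __init__(self):
        super().__init__()
        self.objects = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag in ("img", "script") and "src" in attrs:
            self.objects.append(attrs["src"])
        if tag == "link" and attrs.get("rel") == "stylesheet" and "href" in attrs:
            self.objects.append(attrs["href"])

page = """
<html><head><link rel="stylesheet" href="/site.css"></head>
<body><img src="/hero.jpg"><script src="/app.js"></script></body></html>
"""

finder = ObjectFinder()
finder.feed(page)
for url in finder.objects:
    print("push to client:", url)   # e.g. over an HTTP/2 server-push-like channel
```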


 
Enabling Edge Computing with Cryptographically Hardened Data Carriers
Speaker: John Kubiatowicz, University of California at Berkeley
 
Today’s exciting new world of data-driven technology, AI, and cyber-physical systems relies on information from widely disparate sources, combining processing and storage in the cloud with embedded sensors, actuators, and processing at the edge. This information includes both the inputs and outputs of computations as well as multi-versioned executables that drive such computations. Edge computing is essential, since sensors at the edge of the network (e.g., cameras) generate huge volumes of data that is ideally processed or distilled locally. Further, control loops with sensors, computing, and actuators require local communication for responsiveness and quality of service (QoS). Unfortunately, the edge of the network consists of islands of trusted hardware interspersed with a vast sea of untrusted hardware. Thus, there is a pressing need for connected systems to uniformly verify the integrity of the information on which they rely and similarly to provide a universal audit trail for the information that they produce. Further, privacy must be respected by the infrastructure and relinquished only when necessary. To address these needs, we propose a fundamental refactoring of the network around cryptographically hardened bundles of data, called DataCapsules. DataCapsules are the cyberspace equivalent of shipping containers: uniquely named, secured bundles of information transported over a data-centric “narrow-waist” infrastructure called the Global Data Plane (GDP). The GDP partitions the network into Trust Domains (TDs) to allow clients to reason about the trustworthiness of hardware. When combined with trusted computing enclaves, the GDP enables applications to dynamically partition their functionality between the cloud and network edge, yielding better QoS, lower latency, and enhanced privacy without risking information integrity or obscuring its provenance. In this talk, I’ll make a case for DataCapsules and sketch how to build the GDP.
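
The sketch below captures the flavor of a DataCapsule as a named, append-only bundle whose records are hash-chained and authenticated so that any consumer can verify integrity and provenance. Real DataCapsules use public-key signatures and the GDP for transport; this example substitutes an HMAC with a shared key purely to stay within the Python standard library, and all names are illustrative.

```python
import hashlib, hmac, json

class DataCapsule:
    """Toy append-only, hash-chained, authenticated bundle of records."""
    def __init__(self, name: str, key: bytes):
        self.name = name
        self.key = key
        self.records = []          # each record points at the hash of the previous one

    def append(self, payload: dict):
        prev = self.records[-1]["hash"] if self.records else "genesis"
        body = json.dumps({"capsule": self.name, "prev": prev, "payload": payload},
                          sort_keys=True).encode()
        self.records.append({
            "body": body,
            "hash": hashlib.sha256(body).hexdigest(),
            "mac": hmac.new(self.key, body, hashlib.sha256).hexdigest(),
        })

    def verify(self) -> bool:
        prev = "genesis"
        for rec in self.records:
            ok = (hashlib.sha256(rec["body"]).hexdigest() == rec["hash"]
                  and hmac.compare_digest(
                      rec["mac"],
                      hmac.new(self.key, rec["body"], hashlib.sha256).hexdigest())
                  and json.loads(rec["body"])["prev"] == prev)
            if not ok:
                return False
            prev = rec["hash"]
        return True

cap = DataCapsule("camera-17/stream", key=b"owner-secret")
cap.append({"frame": 1, "summary": "no motion"})
cap.append({"frame": 2, "summary": "person detected"})
print(cap.verify())   # True; tampering with any record breaks the chain
```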


 
The Roaming Edge
Speaker: Suman Banerjee, University of Wisconsin
 
Edge computing provides a new way to implement services, with many unique advantages. While many edge computing solutions have been implemented within different network infrastructures, in this talk we will explore the “roaming” edge of the Internet, in vehicles, where computing services are increasingly important and appropriate edge capabilities need to be rolled out. We will describe a few vehicular applications, often with significant audio-visual processing needs, and an edge platform for vehicles designed to meet these goals.
 
 
3:00 PM-3:15 PM Coffee Break
 
 
3:15 PM-4:30 PM Session III: Edge-Cloud Continuum
 
Elevating the Edge to be a Peer of the Cloud
Speaker: Umakishore Ramachandran, Georgia Institute of Technology
 
Technological forces and novel applications are the drivers that move the needle in systems and networking research, both of which have reached an inflection point. On the technology side, there is a proliferation of sensors, which become more intelligent with each new generation, in the spaces in which humans live. This opens immense possibilities to harness the potential of inherently distributed multimodal networked sensor platforms (aka Internet of Things – IoT platforms) for societal benefits. On the application side, large-scale situation awareness applications (spanning healthcare, transportation, disaster recovery, and the like) are envisioned to utilize these platforms to convert sensed information into actionable knowledge. The sensors produce data 24/7. Sending such streams to the cloud for processing is sub-optimal for several reasons. First, often there may not be any actionable knowledge in the data streams (e.g., no action in front of a camera), wasting limited backhaul bandwidth to the core network. Second, there is usually a tight bound on latency between sensing and actuation to ensure timely response for situation awareness. Lastly, there may be other non-technical reasons, including sensitivity about the collected data leaving the locale. Sensor sources themselves are increasingly becoming mobile (e.g., self-driving cars). This suggests that the provisioning of application components that process sensor streams cannot be determined statically but may have to occur dynamically. All the above reasons suggest that processing should take place in a geo-distributed manner near the sensors. Fog/Edge computing envisions extending the utility computing model of the cloud to the edge of the network. We go further and assert that the edge should become a peer of the cloud. This talk is aimed at identifying the challenges in accomplishing the seamless integration of the edge with the cloud as peers. Specifically, we want to raise questions pertaining to (a) frameworks (NoSQL databases, pub/sub systems, distributed programming idioms) for facilitating the composition of complex latency-sensitive applications at the edge; (b) geo-distributed data replication and consistency models commensurate with network heterogeneity while being resilient to coordinated power failures; and (c) support for rapid dynamic deployment of application components, multi-tenancy, and elasticity while recognizing that computational, networking, and storage resources are all limited at the edge.
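
As a small illustration of the reasoning above (filter at the edge, respect the latency bound, and only ship actionable data), consider the toy decision function below. The thresholds, latency figures, and handler names are hypothetical.

```python
# Hypothetical edge-vs-cloud decision for one sensor stream.

LATENCY_BUDGET_MS = 100      # sensing-to-actuation bound for the application
CLOUD_RTT_MS = 80            # typical round trip from this locale to the cloud

def act_locally(frame):
    return {"where": "edge", "action": "alert"}

def send_to_cloud(frame):
    return {"where": "cloud", "action": "archive+analyze"}

def handle_frame(frame, motion_score: float):
    if motion_score < 0.05:
        return None                       # nothing actionable; don't waste backhaul
    if CLOUD_RTT_MS + 30 > LATENCY_BUDGET_MS:
        return act_locally(frame)         # the cloud cannot meet the latency bound
    return send_to_cloud(frame)           # heavy, non-urgent analysis

print(handle_frame(frame=b"...", motion_score=0.4))   # -> handled at the edge
```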


 
Adaptive and Distributed Operator Placement for Streaming Workflows in Edge-Cloud Environments
Speaker: Klara Nahrstedt, University of Illinois at Urbana-Champaign
 
Internet of Things (IoT) applications generate massive amounts of real-time streaming data. IoT data owners strive to make predictions and inferences from these large streams of data, often by applying machine learning and image processing operations. A typical deployment of such applications includes edge devices that provide processing and storage closer to the location where the streaming data is captured. An important challenge for IoT applications is deciding which operations to execute on an edge device and which operations should be carried out in the cloud. In this talk, we discuss a scalable dynamic programming algorithm, called DROPLET, that partitions the operations of IoT streaming applications across shared edge and cloud resources while minimizing the end-to-end completion time. We will show on real-world applications that DROPLET finds an efficient partitioning of operations, scales to thousands of operations, and outperforms heuristics from the literature: it is 10 times faster in running time while finding partitionings whose total completion time is 20% better for the large applications that we simulated.
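
For intuition, the sketch below solves a heavily simplified version of the placement problem: a linear pipeline of operators where a prefix runs on the edge and the remainder in the cloud, with one transfer at the cut. It uses a brute-force search over cut points; DROPLET itself is a dynamic programming algorithm that handles richer workflows. All costs in the example are invented.

```python
def best_cut(edge_cost, cloud_cost, out_bytes, raw_bytes, uplink):
    """Pick k so operators [0, k) run at the edge and [k, n) in the cloud.
    edge_cost[i]/cloud_cost[i]: runtime (s) of operator i on each tier.
    out_bytes[i]: output size of operator i; raw_bytes: size of the raw stream.
    uplink: edge-to-cloud bandwidth in bytes/s."""
    n = len(edge_cost)
    best_time, best_k = float("inf"), 0
    for k in range(n + 1):
        shipped = raw_bytes if k == 0 else out_bytes[k - 1]
        total = (sum(edge_cost[:k])          # prefix on the edge
                 + shipped / uplink          # one transfer at the cut
                 + sum(cloud_cost[k:]))      # suffix in the cloud
        if total < best_time:
            best_time, best_k = total, k
    return best_k, best_time

# Example: decode -> detect -> classify, with a large raw stream and small features.
k, t = best_cut(edge_cost=[0.2, 1.5, 2.0], cloud_cost=[0.05, 0.3, 0.4],
                out_bytes=[5e6, 1e4, 1e2], raw_bytes=2e7, uplink=1e6)
print(f"run first {k} operator(s) at the edge, total time {t:.2f}s")
```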


 
Supporting Stateful Edge Services by Enforcing Deterministic Behavior
Speaker: Jason Flinn, University of Michigan
 
In this talk, I will describe our current efforts to build system support for deploying stateful services at the edge. Our key observation is that many optimizations are possible if services are deterministic, i.e., if they always return the same result given the same inputs. Unfortunately, current services are decidedly non-deterministic due to thread scheduling, timing variation from asynchronous I/O, and various dependencies on the platforms on which they run. Our system support makes non-deterministic services behave deterministically. In turn, this allows us to replicate services on multiple edge nodes to improve performance and hide latency spikes due to uneven network conditions. It also allows us to migrate services among edge nodes with minimal downtime and further improve reliability by providing a hot backup of edge services in the cloud.
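
A minimal sketch of the record-and-replay flavor of this idea appears below: nondeterministic inputs (here, time and randomness) are captured as an explicit log on the primary and replayed by a replica, which then reaches identical state. The toy handler and class names are hypothetical, not the actual system.

```python
import random, time

class DeterministicEnv:
    """Record nondeterministic values on the primary; replay them on replicas."""
    def __init__(self, log=None):
        self.log = [] if log is None else list(log)
        self.replaying = log is not None

    def _next(self, produce):
        if self.replaying:
            return self.log.pop(0)       # replay the recorded value
        value = produce()
        self.log.append(value)           # record it for the replicas
        return value

    def now(self):  return self._next(time.time)
    def rand(self): return self._next(random.random)

def handle_request(env, counter):
    """A toy stateful handler whose output depends on time and randomness."""
    return {"count": counter + 1, "ts": env.now(), "token": env.rand()}

primary = DeterministicEnv()
out1 = handle_request(primary, counter=41)

replica = DeterministicEnv(log=primary.log)   # ship the log, not the wall clock
out2 = handle_request(replica, counter=41)
print(out1 == out2)                           # True: replica reaches identical state
```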
 
 
4:30 PM-4:40 PM Short Break
 
 
4:40 PM-5:30 PM Session IV: Devices
 
Femtoclouds: Scalable and deployable edge computing from local devices
Speaker: Ellen Zegura, Georgia Institute of Technology
 
Widely deploying edge-compute resources requires (1) provisioning for the load introduced at various locations, (2) dealing with a potentially huge initial deployment cost and ongoing management expenses, and (3) continuously upgrading deployments to keep up with the increase in demand. The availability of under-utilized mobile and personal computing devices at the edge provides a potential solution to these deployment challenges. We propose taking advantage of clusters of co-located mobile devices to offer an edge computing platform with control coming from the cloud. We propose, design, implement, and evaluate the Femtocloud system, which provides a dynamic, self-configuring, multi-device mobile cloud out of a cluster of mobile devices. Within the Femtocloud system, we develop a variety of adaptive mechanisms and algorithms to manage the workload on the edge resources and effectively mask their churn. These mechanisms enable building a reliable and efficient edge computing service on top of unreliable, voluntary resources.
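
The toy scheduler below illustrates one way such churn-aware placement could look: prefer devices whose expected remaining connection time covers a task, and fall back to the cloud (or reassign) otherwise. The availability model and function names are invented for the example and are not the Femtocloud algorithms themselves.

```python
def assign(tasks, devices):
    """tasks: {task: est_runtime_s}; devices: {device: expected_residual_time_s}."""
    load = {d: 0.0 for d in devices}
    plan, cloud_fallback = {}, []
    # Place long tasks first, each on the least-loaded device likely to outlive it.
    for task, runtime in sorted(tasks.items(), key=lambda kv: -kv[1]):
        for dev in sorted(devices, key=lambda d: load[d]):
            if load[dev] + runtime <= devices[dev]:
                plan[task] = dev
                load[dev] += runtime
                break
        else:
            cloud_fallback.append(task)   # no device expected to stay long enough
    return plan, cloud_fallback

def reassign_on_churn(plan, departed, tasks, devices):
    """When a device leaves, re-place its tasks on the remaining devices."""
    orphaned = {t: tasks[t] for t, d in plan.items() if d == departed}
    remaining = {d: v for d, v in devices.items() if d != departed}
    patched, fallback = assign(orphaned, remaining)
    kept = {t: d for t, d in plan.items() if d != departed}
    return {**kept, **patched}, fallback

plan, leftover = assign(tasks={"t1": 30, "t2": 10, "t3": 120},
                        devices={"phoneA": 60, "phoneB": 45})
print(plan, "cloud fallback:", leftover)
```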


 
Internet Connectivity for the Next Billion Devices
Speaker: Shyam Gollakota, University of Washington
 
A grand challenge in computing has been to design interactive devices that can communicate, sense, and compute without any batteries. I will present various harvesting and communication technologies that we have introduced at UW over the past few years. These can transform the way devices communicate with each other and open up possibilities for a whole range of new connected devices, ranging from battery-free phones to smart contact lenses and even 3D-printed plastic objects.

 
 
5:30 PM-6:30 PM Reception