At The Bleeding Edge of Intelligent Edges: Session III : Edge-Cloud Continuum
Edge computing is a natural evolution of cloud computing in which server resources, ranging from credit-card-sized computers to micro data centers, are placed closer to the sources of data and information. Application and system developers use these resources to enable a new class of latency- and bandwidth-sensitive applications that are not realizable with current mega-scale cloud computing architectures. Industries ranging from manufacturing to healthcare are eager to develop real-time control systems that use machine learning and artificial intelligence to improve efficiency and reduce cost, and the Intelligent Edge is at the center of it all. In this one-day workshop we will learn about this new computing paradigm through presentations from some of the leading researchers in our field.
Talk Title: Elevating the Edge to be a Peer of the Cloud
Speaker(s): Umakishore Ramachandran, Georgia Institute of Technology
Abstract: Technological forces and novel applications are the drivers that move the needle in systems and networking research, both of which have reached an inflection point. On the technology side, there is a proliferation of sensors in the spaces in which humans live, and these sensors become more intelligent with each new generation. This opens immense possibilities to harness the potential of inherently distributed multimodal networked sensor platforms (aka Internet of Things – IoT platforms) for societal benefits. On the application side, large-scale situation awareness applications (spanning healthcare, transportation, disaster recovery, and the like) are envisioned to utilize these platforms to convert sensed information into actionable knowledge. The sensors produce data 24/7. Sending such streams to the cloud for processing is sub-optimal for several reasons. First, often there may not be any actionable knowledge in the data streams (e.g., no action in front of a camera), wasting limited backhaul bandwidth to the core network. Second, there is usually a tight bound on latency between sensing and actuation to ensure timely response for situation awareness. Lastly, there may be other non-technical reasons, including sensitivity about the collected data leaving the locale. Sensor sources themselves are increasingly becoming mobile (e.g., self-driving cars). This suggests that the provisioning of application components that process sensor streams cannot be statically determined but may have to occur dynamically. All the above reasons suggest that processing should take place in a geo-distributed manner near the sensors. Fog/Edge computing envisions extending the utility computing model of the cloud to the edge of the network. We go further and assert that the edge should become a peer of the cloud. This talk is aimed at identifying the challenges in accomplishing the seamless integration of the edge with the cloud as peers.
Specifically, we want to raise questions pertaining to (a) frameworks (NoSQL databases, pub/sub systems, distributed programming idioms) for facilitating the composition of complex latency-sensitive applications at the edge; (b) geo-distributed data replication and consistency models commensurate with network heterogeneity while being resilient to coordinated power failures; and (c) support for rapid dynamic deployment of application components, multi-tenancy, and elasticity, while recognizing that computational, networking, and storage resources are all limited at the edge.
Talk Title: Adaptive and Distributed Operator Placement for Streaming Workflows in Edge-Cloud Environments
Speaker(s): Klara Nahrstedt, University of Illinois at Urbana-Champaign
Abstract: Internet of Things (IoT) applications generate massive amounts of real-time streaming data. IoT data owners strive to make predictions/inferences from these large streams of data often through applying machine learning, and image processing operations. A typical deployment of such applications includes edge devices to provide processing/storage operations closer to the location where the streaming data is captured. An important challenge for IoT applications is deciding which operations to execute on an edge device and which operations should be carried out in the cloud. In this talk, we discuss a scalable dynamic programming algorithm, called DROPLET, to partition operations in IoT streaming applications across shared edge and cloud resources, while minimizing completion time of the end-to-end operations. We will show on real-world applications that DROPLET finds an efficient partitioning of operations, scales to thousands of operations, and outperforms heuristics in the literature by being 10 times faster in running time while finding partitioning of operations with total completion time that is 20% better for the large applications that we simulated.
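The placement problem the abstract describes can be illustrated with a toy sketch. The code below is not DROPLET itself (which is a scalable dynamic programming algorithm over general workflows); it assumes a simplified linear pipeline with a single edge-to-cloud cut point, and the operator names and cost numbers are invented for illustration.

```python
# Toy edge/cloud placement for a *linear* streaming pipeline: choose the
# cut point k so operators [0, k) run at the edge and [k, n) in the cloud,
# minimizing end-to-end completion time. Costs are illustrative, not real.

def place_pipeline(edge_cost, cloud_cost, transfer_cost):
    """Return (best_total_time, best_cut) for a chain of n operators.
    transfer_cost[k] is the cost of shipping the intermediate stream to
    the cloud at cut point k (transfer_cost[n] covers the all-edge case)."""
    n = len(edge_cost)
    best = (float("inf"), None)
    for k in range(n + 1):  # k = n means "run everything at the edge"
        total = sum(edge_cost[:k]) + transfer_cost[k] + sum(cloud_cost[k:])
        best = min(best, (total, k))
    return best

# Example: 4 operators (e.g., decode -> detect -> track -> aggregate).
edge = [4, 9, 6, 2]      # per-operator time on the edge device
cloud = [1, 2, 1, 1]     # per-operator time in the cloud (faster hardware)
xfer = [10, 3, 3, 2, 2]  # upload cost of the intermediate stream at each cut

time, cut = place_pipeline(edge, cloud, xfer)
print(time, cut)  # cutting after the cheap decode stage wins here
```

In this toy instance, shipping the stream after the first (cheap) operator beats both the all-cloud and all-edge placements, which mirrors the intuition that early filtering/decoding belongs at the edge while heavy analytics may belong in the cloud.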
Talk Title: Supporting Stateful Edge Services by Enforcing Deterministic Behavior
Speaker(s): Jason Flinn, University of Michigan
Abstract: In this talk, I will describe our current efforts to build system support for deploying stateful services at the edge. Our key observation is that many optimizations are possible if services are deterministic, i.e., if they always return the same result given the same inputs. Unfortunately, current services are decidedly non-deterministic due to thread scheduling, timing variation from asynchronous I/O, and various dependencies on the platforms on which they run. Our system support makes non-deterministic services behave deterministically. In turn, this allows us to replicate services on multiple edge nodes to improve performance and hide latency spikes due to uneven network conditions. It also allows us to migrate services among edge nodes with minimal downtime and to further improve reliability by providing a hot backup of edge services in the cloud.
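The record/replay idea behind enforced determinism can be sketched in a few lines. This is a hypothetical toy, not the system described in the talk: the primary logs every non-deterministic input it consumes (here, a single random draw per request), and a replica replays that log instead of drawing its own values, so both copies of the service return bit-identical results.

```python
# Minimal record/replay sketch: make a non-deterministic service behave
# deterministically by logging its only source of non-determinism on the
# primary and replaying the log on a replica. Service and log format are
# illustrative assumptions.
import random

def service(request, rng_next):
    # rng_next() is the only non-deterministic input in this toy service.
    jitter = rng_next()
    return (request, round(jitter, 6))

def run_primary(request, log):
    rng = random.Random()
    def recorded():
        v = rng.random()
        log.append(v)  # record the non-deterministic value for replicas
        return v
    return service(request, recorded)

def run_replica(request, log):
    replay = iter(log)
    return service(request, lambda: next(replay))  # consume the log instead

log = []
r1 = run_primary("GET /item/7", log)
r2 = run_replica("GET /item/7", log)
print(r1 == r2)  # replica matches the primary exactly
```

With identical results guaranteed, a replica on another edge node can answer the same request interchangeably, which is what enables the replication, migration, and cloud hot-backup optimizations the abstract mentions.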
Speaker Bios
- Speakers: Umakishore Ramachandran (Georgia Institute of Technology), Klara Nahrstedt (University of Illinois at Urbana-Champaign), Jason Flinn (University of Michigan)
- Ganesh Ananthanarayanan, Senior Principal Researcher