For those of us who have transitioned to working from home over the past year, mobility has taken on a strange new form.
Far-flung colleagues appear almost magically, in grid format, on a screen right in front of our faces. Yet a document, presentation, piece of content, or part of a running application already at our fingertips is awkward to share with others on the same video call.
It’s a paradoxical science fiction world where far is near, and the close-at-hand dilates impossibly beyond our reach. Perhaps these surreal distortions of time and space explain, in part, why video conferencing feels so draining.
And even as we’re stranded within the confines of our improvised home offices, we’re somehow supposed to navigate this otherworldly place—a jumbled chaos terrain of home and work, personal and professional, private and semi-public.
Moving between these realities, sometimes moment by moment, demands a nimbleness we’ve never needed before: our activity is mobile even as we stay put in the same location. We work in the same physical spaces, but as we navigate these transitions, we’re not in the same human places.
While 2020 has accelerated this trend, perhaps it was inevitable; indeed, as we point out below, this strange new mobility has in many ways been a long time coming.
Microsoft researchers are creating technologies to help people succeed in this new way of life. We are working to develop systems that help us navigate these changes: a world where these transitions feel less strange and more empowering, where technology meets us in a place appropriate to our current task, locality, and context, and where “mobility” means rising to the universal human need to connect and work with others seamlessly.
To that end, Microsoft researchers have published three papers—two of which appear at this year’s ACM Symposium on User Interface Software and Technology (UIST 2020)—on new technologies that redefine how we interpret this concept of place.
The first explores SurfaceFleet, a system that decouples computing from individual devices and places. The second presents Ambrosia, a system that uses resilient distributed programming techniques to unbind running programs and their state from any particular device (CPU). The third circles back to this notion of place, showing how nuanced social cues, such as the tilt and orientation of a display on the adjustable Microsoft Surface Studio, can support elegant and natural transitions between different tasks and ways of using such a display.
SurfaceFleet: Mobility as transitions of user activity from one ‘place’ to another
What the Fleet system is in brief:
- A distributed system leveraging a robust, performant declarative database foundation and building on the Ambrosia runtime
- An exploration of novel implications for migration of user experiences across devices
- A platform for Applets: lightweight, distributed user interface elements that unbind interactions from devices, applications, users, and time (see the sketch after this list)
- A collaboration tool enabling people to work across devices, whether acting at the same time or asynchronously.
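To give a flavor of what an Applet might look like, here is a minimal C# sketch of the idea, with every name invented for illustration (this is not the Fleet toolkit’s API): a button whose state lives in a shared, durable store, so pressing it on one device can resume an activity on another device, for another user, at another time.

```csharp
using System;

// Hypothetical names throughout; not the Fleet API.
public interface IDistributedStore
{
    void Put(string key, string value);                  // durable, device-independent write
    void Subscribe(string key, Action<string> onChange); // notifies every attached device
}

public sealed class HandoffButton
{
    private readonly IDistributedStore _store;

    public HandoffButton(IDistributedStore store)
    {
        _store = store;
        // Any device rendering this Applet reacts to the same shared state,
        // so the interaction is unbound from a particular device, user, or time.
        _store.Subscribe("handoff/target", target =>
            Console.WriteLine($"Resuming activity on {target}..."));
    }

    // Clicking on one device requests that the activity resume on another.
    public void Click(string targetDevice) => _store.Put("handoff/target", targetDevice);
}
```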
The COVID-19 pandemic has brought the collision of home and work—of activity in limited physical spaces that must transition between different human places—to a critical juncture.
But two trends, already evident over the past decade, are shaping the future of human experiences with computing technology.
The first trend concerns hardware and systems architecture. With Moore’s Law at an end, yet networking and storage still exhibiting exponential gains, the future appears to favor systems that emphasize seamless mobility of data: techniques that spend network and storage bandwidth rather than cycles on any particular CPU. Accelerated by pervasive cloud services and 5G, these computational shifts show no sign of slowing.
The second trend is one of human behavior. People now interact with more devices than ever before. Modern information work increasingly relies on multi-device workflows and distributed workspaces, with connected and interdependent devices: smartphones, desktops, tablets, and perhaps even emerging new form factors. The problem is that transitioning from device to device, and more broadly from place to place, can cost us precious resources such as time and attention, leaving our activities marooned on islands of glass instead of woven into a new interconnected world.
What’s needed is an ecosystem of technologies that transitions seamlessly from place to place, whether that “place” takes the form of a literal location, a different device form factor, the presence of a collaborator, or the availability of the pieces of information needed to complete a particular task at a given time. Such a “Society of Technologies” favors techniques that establish meaningful relationships between the members of this society, rather than bindings to any particular device, affording mobility of user activity from one place to another in the most general sense of the word.
Through this lens, we can view the essence of mobility as the transition of user activity from one place to another. SurfaceFleet is a working system, development toolkit, and user experience that explores some implications of these challenges by decoupling computation—including its representation in the graphical user interface—from the current device.
Yet once user interface mechanisms are decoupled from a single device, we discovered, interaction can also be unbound from the current application, the current user, and the current time. The Fleet system handles transitions in place across all four of these dimensions, bridging the resulting gaps. See the embedded video above for demonstrations of user interfaces that “float” above the screen and transcend program state confined to the current device.
But authoring distributed programs is difficult and requires considerable expertise. How do we reimagine this notion of “device” that is so deeply baked into current development practices? This is where our journey crosses paths with a new distributed-systems technology known as Ambrosia.
Ambrosia: Programming as if failure doesn’t matter
What the Ambrosia runtime does in brief:
- Introduces the notion of “virtual resiliency,” which allows programmers of distributed applications to program as if failure doesn’t matter
- Recovers and replays logged messages, with mechanisms to correctly handle non-determinism
- Achieves highly performant remote procedure calls through database techniques such as batching, high-performance log writing, high-performance serialization concepts, and group commit strategies
- Provides the technical foundation of the Fleet system
Programmers face complex decisions and coding tasks when coping with failure in distributed systems, especially when applications modify state that is shared across devices. Unfortunately, a lot can go wrong even in simple scenarios of passing messages between distributed services. Connections can drop. Distributed clients can crash at any moment. A remote procedure call (RPC) might even fail just as it sends a remote message, creating uncertainty about what has been sent or received that must then be reconciled. All of these error conditions must be anticipated, handled correctly, and implemented efficiently. This is why distributed services are so hard to program and deploy correctly.
But using Ambrosia, a developer can write the code for their client as if failure doesn’t matter. We call this virtual resiliency, similar to virtual memory, where one can author programs as if limits on physical memory don’t exist.
The developer simply wraps their service (“Alice” in the figure above) in the Ambrosia runtime. Ambrosia intercepts each message and logs it to resilient storage before sending it over the network via RPC. Whenever a remote service (“Bob”) responds, Ambrosia likewise logs these return messages before Alice’s code acts on their contents.
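To make the log-before-send discipline concrete, here is a minimal C# sketch under our own invented names; the real Ambrosia runtime exposes a different, code-generated API, but the ordering (durably log each message, then act on it) is the essence.

```csharp
using System;
using System.IO;

// Hypothetical wrapper illustrating log-before-send. Not the Ambrosia API.
public sealed class ResilientChannel
{
    private readonly StreamWriter _log;     // durable log (stands in for cloud storage)
    private readonly Action<string> _send;  // the underlying network send

    public ResilientChannel(string logPath, Action<string> send)
    {
        _log = new StreamWriter(logPath, append: true);
        _send = send;
    }

    public void Send(string message)
    {
        // Log the outgoing message to durable storage first...
        _log.WriteLine($"SEND\t{message}");
        _log.Flush();
        // ...and only then hand it to the network, so after a crash the log
        // records exactly what had (and had not) already been sent.
        _send(message);
    }

    public void Receive(string message, Action<string> handler)
    {
        // Incoming messages are likewise logged before application code acts
        // on their contents, so recovery can replay them in the same order.
        _log.WriteLine($"RECV\t{message}");
        _log.Flush();
        handler(message);
    }
}
```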
Ambrosia encapsulates the many possible failure conditions, factoring all of the distributed-systems complexity out of the resulting client code. If Alice goes down, Ambrosia automatically recovers by replaying the log, allowing Alice’s code to pick up where it left off. Likewise, if Bob crashes, the system can automatically recover from that, too, so long as Bob is wrapped in the Ambrosia runtime as well. And since network connection state is also logged to resilient storage in the cloud via Azure, we can also automatically self-heal disruptions such as intermittent connections or changing network addresses via a subsystem known as the Common Runtime for Applications (CRA).
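Recovery in this simplified sketch is then just a replay of the log written above. Again, this is our illustration; Ambrosia’s actual recovery also reconciles connection state through CRA, as just noted.

```csharp
using System;
using System.IO;

// Continuing the hypothetical sketch: replay the log so the service's
// in-memory state is rebuilt to the moment of the crash.
public static class Recovery
{
    public static void Replay(string logPath, Action<string> reapplyHandler)
    {
        foreach (var line in File.ReadLines(logPath))
        {
            var parts = line.Split('\t', 2);
            if (parts[0] == "RECV")
                reapplyHandler(parts[1]); // deterministically re-run the handler
            // SEND entries mark messages already on the wire; replay uses them
            // to suppress duplicate sends rather than executing them again.
        }
    }
}
```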
Programming distributed applications in this way, as if failure doesn’t matter, is a nifty trick. But the true secret sauce of Ambrosia is that it provides this virtual resiliency with high performance. It does so by applying decades-old wisdom that has been used to build performant, reliable, and available database systems. For instance, Ambrosia makes extensive use of batching, high-performance log writing, high-performance serialization concepts, and group commit strategies. It also handles non-determinism properly by logging any such events. These carefully implemented techniques allow Ambrosia to deterministically provide virtual resiliency with little or no reduction in throughput, depending on message size, as compared to popular RPC frameworks that lack resilience mechanisms.
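As one example of these database techniques, here is a hedged sketch of group commit in the same illustrative style: many appended messages share a single durable flush, so the cost of the slowest step, writing to storage, is amortized across the whole batch. The class and its behavior are our own illustration, not Ambrosia’s internals.

```csharp
using System.Collections.Generic;
using System.IO;

// Illustrative group commit: buffer messages, then commit them together.
public sealed class GroupCommitLog
{
    private readonly StreamWriter _log;
    private readonly List<string> _pending = new List<string>();

    public GroupCommitLog(string path) =>
        _log = new StreamWriter(path, append: true);

    public void Append(string message) => _pending.Add(message);

    public void Commit()
    {
        foreach (var message in _pending)
            _log.WriteLine(message);
        _log.Flush(); // a single flush commits the entire batch at once
        _pending.Clear();
    }
}
```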
With a distributed system built on these abstractions, we end up with two coordinated instances of the Ambrosia runtime surrounding the running services Alice and Bob:
This basic architecture not only encapsulates many types of distributed-system failures, but it also allows for interesting variations, such as standing up multiple active instances of a service (so-called active/active configurations) so that we can quickly fail over to “Bob 2” or “Alice 2” if one of the services dies or is slow to recover.
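A hedged sketch of what such a failover path might look like from the caller’s side, with replicas modeled simply as functions; the names and shapes here are ours, not Ambrosia’s:

```csharp
using System;
using System.Collections.Generic;

public static class Failover
{
    // Hypothetical active/active call path: try the primary replica first and
    // fall through to "Bob 2" (and beyond) if a call fails or a replica is slow.
    public static string Call(IReadOnlyList<Func<string, string>> replicas, string request)
    {
        foreach (var replica in replicas)
        {
            try { return replica(request); }
            catch (Exception) { /* this replica is down; try the next one */ }
        }
        throw new InvalidOperationException("All replicas are unavailable.");
    }
}
```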
But such a failover might not reflect a networking or system failure at all.
Perhaps it is a matter of choice—the end user’s preference.
Maybe a user of the Alice service shuts off their desktop at the end of a hectic day and picks up their tablet instead. “Desktop Alice” halts, and “Tablet Alice” resumes where they left off. Instead of a network or hardware crash, it’s simply a failover to their preferred device.
This leads us to a key insight. Circling back to the Fleet system, where we started, we can now cast the transition of user activity from one device to another as a special case of failover.
But migration of program state to a new device is just one special case of mobility. With the right feedback and user interface mechanisms in place, we can generalize this to transitions of user activity from one place to another, in many senses of the word place. How best to do this remains an open problem; our work explores some possibilities and hints at solutions. It suggests that cross-device and distributed systems will have a major impact on user interfaces going forward, even if the full vista of interactive systems and human experiences they make possible has only just begun to dawn.
Next, we take this up a level by looking at a simple example of how sensing shifts in context—such as responding appropriately when a user tilts a display—can drive lightweight and natural transitions from one human activity to another.
Changes in display orientation as a nuanced transition in ‘place’
What tilt-responsive techniques for digital drawing boards do in brief:
- Run on a Microsoft Surface Studio 2, using a C# module that samples the sensors and performs signal conditioning, plus a JavaScript-based client
- Demonstrate how a variety of everyday applications can use sensed display adjustments to drive context-appropriate transitions, such as shifts between reading and writing, public and personal information, face-to-face video and screen sharing of documents in remote work, and other nuances of input and feedback contingent on display angle, with continuous interactive responses tailored to each use case.
During the long incubation and technical development of the distributed-systems advances discussed above, we kept circling back at odd intervals to another endeavor: we had outfitted the Microsoft Surface Studio with an extra sensor to detect its angle. The Microsoft Surface Studio is a 27” screen that supports multi-touch and pen input and can be adjusted smoothly from a vertical display to a drafting table–like 20 degrees. In this device, we saw a parallel between its use and people’s behaviors and expectations outside the digital world.
In everyday life, people naturally reposition objects, such as paper documents, to allow shared visibility, partial viewing, and even concealment. Such motions are completely natural and perhaps even subconscious. How we position an object depends on what we intend to do. For example, a doctor might hold a medical chart “close to the vest” at first, but then turn it toward their patient when ready to share particular results. Similarly, the appropriate display orientation depends on the task and situation at hand. A vertical monitor makes for easier reading but not necessarily easier writing with a stylus. Angled drafting tables in a design studio encourage sketching and freeform brainstorming, but the preference when presenting refined versions of those same ideas may be a vertical screen. Display angle is not one size fits all. We wondered, could we tap into our most natural ways of mediating information exchange by sensing the tilt of a display?
By adding an off-the-shelf tilt sensor to the Microsoft Surface Studio, we discovered a series of designs, techniques, and interactions that respond appropriately to the user’s context of use, as sensed by the current display angle. In doing so, we begin to shift the burden of adapting the inputs, tools, modes, and graphical layout of applications from the user to the system. For example, one demonstration we built explores a teleconferencing scenario in which the typical talking-head video feed of “person-space” appears when the screen is vertical but transitions to a shared document that users can mark up with a digital pen when the screen is tilted down like a drafting board. As the display tilts, we fade out the camera feed so the user can avoid unbecoming video angles. This also selectively focuses the remote audience’s attention on the shared document rather than the video feed: in effect, a way of steering a remote participant’s attention analogous to angling a paper document toward a nearby collaborator.
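For illustration, here is a minimal C# sketch of how such a tilt-to-mode mapping might look. The 30- and 60-degree thresholds and the linear crossfade are our own illustrative choices, not the prototype’s actual parameters.

```csharp
public enum DisplayMode { PersonSpace, SharedDocument }

public static class TiltMapping
{
    // Map the sensed display angle to a mode and a camera-feed opacity.
    public static (DisplayMode Mode, double VideoOpacity) FromAngle(double degreesFromVertical)
    {
        if (degreesFromVertical < 30)  // near vertical: talking-head "person-space"
            return (DisplayMode.PersonSpace, 1.0);
        if (degreesFromVertical > 60)  // reclined like a drafting board: shared document
            return (DisplayMode.SharedDocument, 0.0);

        // In between, fade the camera feed out continuously as the display reclines.
        double t = (degreesFromVertical - 30) / 30.0;
        var mode = t < 0.5 ? DisplayMode.PersonSpace : DisplayMode.SharedDocument;
        return (mode, 1.0 - t);
    }
}
```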
As the demo reel above illustrates, displays can respond to tilt by transitioning between reading and writing, public and personal, authoring and presenting, and other nuances of input and feedback, in ways that often delight and entertain as well.
Amid our above distributed-systems research, this curiosity-driven work was a completely unrelated side project. Or so we thought. After we finished writing up tilt-responsive techniques for publication, we had an epiphany. As you adjust the angle of a display, you’re:
- In the same physical location.
- On the same device.
- Using the same screen.
- Running the same application.
But the new screen orientation doesn’t afford the same tasks and activities—you’ve transitioned to a different place.
This subtly shifts your expectations of what is appropriate. And with just a tiny bit of awareness, well-designed software could provide a sort of intelligence by responding appropriately in kind. That is, the angle of a digital display is just another form of mobility. Here, the mobility is on the micro-level, moving from one screen orientation to another, as opposed to the more macro-level transitions that are the current focus of the Fleet system, such as moving from one device to another or across local and remote locations.
Closing thoughts
We have discussed how the Fleet system explores new ways to think about mobility, and we’ve shown how it builds on an exciting new distributed-systems technology known as Ambrosia. Together, these technologies make it possible to build applications that not only go beyond the current device but also unbind other dimensions of mobility: the current user, the current application, and the current time. Beyond that, as hinted at by our final example above, our research shows how sensors can bridge transitions from one (sensed) context to another by responding appropriately to natural human activity.
At the highest level, these advances hint at how devices can be better together—complementing one another across an ecosystem of technologies—instead of competing to add ever more complexity with each new device or service.