Research Collection – Re-Inventing Storage for the Cloud Era


“One of the challenges for us as a company, and us as an industry, is that many of the technologies we rely on are beginning to get to the point where either they are at the end, or they’re starting to get to the point where you can see the end. Moore’s Law is a well-publicized one and we hit it some time ago. And that’s a great opportunity, because whenever you get that rollover, you get an opportunity to be able to do things differently, to have new ways of doing things.”

– Ant Rowstron, Distinguished Engineer and Deputy Lab Director, Microsoft Research Cambridge

It is projected that around 125 zettabytes of data will be generated annually by 2024. Storing this data efficiently and cost-effectively will be a huge challenge. Growth in the capacity and performance of storage media such as SSDs, HDDs and magnetic tape has not kept pace with the exponential growth in compute capacity, the surge in data being generated, or the novel economics and storage needs of cloud services.

Future demands from intelligent edge and Internet of Things (IoT) deployments, streaming audio, video, virtual and mixed reality, “digital twins” and use cases we haven’t yet predicted will generate lots of bits – but where will we keep them?

This requires more than incremental improvements – it demands disruptive innovation. For many years, Microsoft researchers and their collaborators have been exploring ways to make existing storage approaches more efficient and cost-effective, while also forging entirely new paths – including storing data in media such as glass, holograms and even DNA.

Diagram showing the cloud storage landscape
Research efforts in storage cover the spectrum from “hot” data, which is read and written frequently and requires very fast performance, to “cool” data, which is generally archival in nature and is retrieved infrequently (see diagram).

In this clip from the Microsoft Research Podcast, Ant Rowstron discusses how various storage media are reaching their useful limits in the datacenter. Listen to the full episode.

If you think of most of the technologies we use to store data today – things like flash, things like hard disk drives, things like tape – it’s true to say that they were all designed before the cloud existed. And in fact, they were all designed to work in multiple scenarios. You know, the flash chips have to work in your phone as well as in the datacenters. Or the hard disk drives have to work with desktop machines as well as in the datacenters. So, there’s compromises that have been made. And, in fact, many of those technologies, if you think of a sort of S-curve where you got time, and you’ve got the metric of interest, you get a new technology comes along like a hard disk drive, it starts off very slowly and then hits its sort of main speed and it’s sort of going nicely. It’s, you know, doubling every two years or three years if you’re a hard disk drive. And then you get to the point where it actually stops just, basically, exploiting the property they were exploiting, and it rolls over and becomes flat. And I think one of the challenges for us as a company, and us as an industry, is that many of the technologies we rely on are beginning to get to the point where either they are at the end, or they’re starting to get to the point where you can see the end. And of course, Moore’s Law is a well-publicized one and we hit it some time ago. And that’s a great opportunity, because whenever you get that rollover, you get an opportunity to be able to do things differently, to have new ways of doing things.

Re-Imagining Storage

Researchers have taken a holistic approach to making storage more efficient and cost-effective, using the emergence of the cloud as an opportunity to completely re-think storage in an end-to-end fashion. They are co-designing new approaches across layers that are traditionally thought of as independent – blurring the lines between storage, memory and network. At the same time, they’re re-thinking storage from the media up – including storing data in media such as glass, holograms and even DNA.

This work extends back more than two decades: in 1999, Microsoft researchers began work on Farsite, a secure and scalable file system that logically functions as a centralized file server, but is physically distributed among a set of untrusted computers. This approach would utilize the unused storage and network resources of desktop computers to provide a service that is reliable, available and secure despite running on machines that are unreliable, often unavailable and of limited security. In 2007, researchers published a paper that explored the conditions under which such a system could be scalable, the software engineering environment used to build the system, and the lessons learned in its development.

At Microsoft Research Cambridge, researchers began exploring how to optimize enterprise storage using off-the-shelf hardware in the late 2000s. They explored off-loading data from overloaded volumes to virtual stores to reduce power consumption and to better accommodate peak I/O request rates. This was an early example of storage virtualization and software-defined storage, ideas that are now widely used in the cloud. As solid-state drives (SSDs) became more commonplace in PCs, researchers considered their application in the datacenter – and concluded that, at then-current prices, they were not yet cost-effective for most workloads. In 2012, they analyzed potential applications of non-volatile main memory (NVRAM) and proposed whole-system persistence (WSP), an approach to database and key-value store recovery in which memory rather than disk is used to recover an application’s state after a failure – blurring the lines between memory and storage.
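
To make the whole-system-persistence idea concrete, here is a minimal sketch, assuming a hypothetical persistent-memory region exposed as a memory-mapped file; the real WSP design relies on NVRAM and on flushing CPU state at failure time, which an ordinary file can only approximate.

```python
# Illustrative sketch only: WSP recovers application state from non-volatile
# main memory instead of replaying a disk log. Here an ordinary memory-mapped
# file stands in for NVRAM; the file name and layout are hypothetical.
import mmap, os, struct

REGION = "nvram.bin"     # stand-in for a persistent memory region
SIZE = 4096

def open_region():
    new = not os.path.exists(REGION)
    f = open(REGION, "w+b" if new else "r+b")
    if new:
        f.write(b"\x00" * SIZE)
        f.flush()
    return mmap.mmap(f.fileno(), SIZE)

def put_counter(region, value):
    # In-place update of state that lives "in memory" yet survives restarts.
    region[:8] = struct.pack("<Q", value)

def recover_counter(region):
    # On restart, state is simply read back -- no log replay required.
    return struct.unpack("<Q", region[:8])[0]

region = open_region()
count = recover_counter(region)   # picks up where the last run left off
put_counter(region, count + 1)
print("runs so far:", count + 1)
```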

In 2011, researchers established the Software-Defined Storage Architectures project, which brought the idea of separating the control plane from the data plane – well established in networking – to storage, to provide predictable performance at reduced cost. IOFlow is a software-defined storage architecture that uses a logically centralized control plane to enable end-to-end policies and QoS guarantees, which required rethinking across the datacenter storage and network stack. This principle was extended to other cloud resources to create a virtual datacenter per tenant. In this 2017 article, the researchers describe the advantages of intentionally blurring the lines between virtual storage and virtual networking.
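
As a rough illustration of that control-plane/data-plane split, the sketch below shows a logically centralized controller turning per-tenant bandwidth guarantees into rate limits that every stage on the IO path enforces. It is not the IOFlow implementation; the class names, policy, and numbers are invented for illustration.

```python
# Simplified sketch of a software-defined storage control plane in the spirit
# of IOFlow: a centralized controller computes per-tenant rate limits from
# end-to-end policies and pushes them to enforcement points ("stages") along
# the IO path. All names and numbers are illustrative.

class Stage:
    """An enforcement point, e.g. in the hypervisor or on the storage server."""
    def __init__(self, name):
        self.name = name
        self.rate_limits_mbps = {}          # tenant -> rate limit

    def install_rule(self, tenant, rate_mbps):
        self.rate_limits_mbps[tenant] = rate_mbps
        print(f"[{self.name}] {tenant}: {rate_mbps:.0f} MB/s")

class Controller:
    """Centralized control plane with a global view of stages and policies."""
    def __init__(self, stages, capacity_mbps):
        self.stages = stages
        self.capacity_mbps = capacity_mbps

    def apply_policies(self, guarantees_mbps):
        # Give each tenant its guaranteed bandwidth, then share any spare
        # capacity equally -- one simple way to turn SLAs into stage rules.
        spare = self.capacity_mbps - sum(guarantees_mbps.values())
        bonus = max(spare, 0) / len(guarantees_mbps)
        for tenant, guarantee in guarantees_mbps.items():
            rate = guarantee + bonus
            for stage in self.stages:   # end-to-end: every stage enforces it
                stage.install_rule(tenant, rate)

stages = [Stage("hypervisor"), Stage("storage-server")]
Controller(stages, capacity_mbps=1000).apply_policies(
    {"tenant-A": 400, "tenant-B": 200})
```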

Established in 2012, the FaRM project explored new approaches to using main memory for storing data with a distributed computing platform that exploits remote direct memory access (RDMA) communication to improve both latency and throughput by an order of magnitude compared to main memory systems that use TCP/IP. By providing both strong consistency and high performance – challenging the conventional wisdom that high performance required weak consistency – FaRM allowed developers to focus on program logic rather than handling consistency violations. Initial developer experience highlighted the need for strong consistency for aborted transactions as well as committed ones – and this was then achieved using loosely synchronized clocks.
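
The flavor of this programming model can be suggested with a toy version of optimistic, version-validated transactions. The single-process Python sketch below is not the FaRM API; it omits RDMA, write-set locking, replication and recovery, and simply shows how read versions are validated at commit time.

```python
# Toy sketch of optimistic transactions with version validation, the style of
# commit FaRM performs over RDMA across machines. Runs in one process only.

class Store:
    def __init__(self):
        self.data = {}                     # key -> (version, value)

    def read(self, key):
        return self.data.get(key, (0, None))

class Transaction:
    def __init__(self, store):
        self.store = store
        self.read_set = {}                 # key -> version observed
        self.write_set = {}                # key -> buffered new value

    def read(self, key):
        version, value = self.store.read(key)
        self.read_set[key] = version
        return self.write_set.get(key, value)

    def write(self, key, value):
        self.write_set[key] = value

    def commit(self):
        # Validate: abort if anything we read has changed since we read it.
        for key, seen_version in self.read_set.items():
            current_version, _ = self.store.read(key)
            if current_version != seen_version:
                return False               # conflict -> abort
        # Install buffered writes, bumping versions.
        for key, value in self.write_set.items():
            version, _ = self.store.read(key)
            self.store.data[key] = (version + 1, value)
        return True

store = Store()
tx = Transaction(store)
current = tx.read("balance") or 0
tx.write("balance", current + 100)
print("committed:", tx.commit())           # a concurrent change would abort it
```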

At the same time, Project Pelican addressed the storage needs of “cold” or infrequently accessed data. Pelican is a rack-scale, disk-based storage unit that trades latency for cost, using a unique data layout and IO scheduling scheme to constrain resource usage so that only 8% of its drives spin concurrently. Pelican was an example of rack-scale co-design: rethinking the storage, compute, and networking hardware as well as the software stack at a rack scale to deliver value at cloud scale.
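
A toy sketch of that kind of scheduling constraint is shown below; the group IDs are hypothetical and a made-up budget of two active groups stands in for Pelican's "only a small fraction of drives spinning" rule.

```python
# Toy sketch of Pelican-style IO scheduling: disks are organized into groups
# that share power and cooling budgets, and only a few groups may spin at a
# time. Requests are batched per group and served group by group, amortizing
# the cost of spinning a group up. Numbers and IDs below are illustrative.

from collections import defaultdict

MAX_ACTIVE_GROUPS = 2     # stand-in for the real power/cooling budget

def schedule(requests):
    """requests: list of (group_id, blob_id). Returns the serving order."""
    by_group = defaultdict(list)
    for group_id, blob_id in requests:
        by_group[group_id].append(blob_id)

    order = []
    # Serve the busiest groups first to maximize work per spin-up.
    pending = sorted(by_group, key=lambda g: len(by_group[g]), reverse=True)
    while pending:
        # Spin up at most MAX_ACTIVE_GROUPS groups, drain their queued IO,
        # then spin them down before activating the next set of groups.
        active, pending = pending[:MAX_ACTIVE_GROUPS], pending[MAX_ACTIVE_GROUPS:]
        for group in active:
            order.extend((group, blob) for blob in by_group[group])
    return order

reads = [(3, "a"), (1, "b"), (3, "c"), (7, "d"), (1, "e")]
print(schedule(reads))
```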

To further challenge traditional ways of thinking about the storage media and controller stack, researchers began to consider whether a general-purpose CPU was even necessary for many operations. To this end, Project Honeycomb tackles the challenges of building complex abstractions using FPGAs in CPU-free custom hardware, leaving CPU-based units to focus on control-plane operations.

Explore more


Optics for the Cloud

Microsoft Research Cambridge’s Optics for the Cloud program conducts research to advance and enable the adoption of optical technologies in the rapidly growing field of cloud computing. In addition to novel applications of optics to reinvent local and wide-area networking, the program is also exploring two new approaches to data storage – glass and holography.

Explore more


Holographic Storage

Holographic storage research testbed at Microsoft Research Cambridge

Holographic storage was first proposed back in the 1960s, shortly after the invention of the laser. Holographic optical storage systems store data by recording the interference between the wave-fronts of a modulated optical field containing the data and a reference optical field, as a refractive-index variation inside the storage medium. It is this information-carrying refractive-index variation that is the “hologram”. The stored data can then be retrieved by diffracting only the reference field off the hologram to reconstruct the original optical field containing the data.
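
In standard textbook form (not a description of Project HSD's specific optical design), the recording and readout steps look like this, where E_d is the data-bearing field and E_r the reference field:

```latex
% Textbook two-beam holography, not specific to Project HSD.
% E_d: data-bearing (signal) field, E_r: reference field.
\Delta n(\mathbf{r}) \;\propto\; \lvert E_d + E_r \rvert^{2}
  \;=\; \lvert E_d \rvert^{2} + \lvert E_r \rvert^{2}
        + E_d E_r^{*} + E_d^{*} E_r
% Readout: illuminating the hologram with the reference field alone picks up
% the E_d E_r^{*} term, since
%   E_r \,(E_d E_r^{*}) = \lvert E_r \rvert^{2}\, E_d ,
% i.e. a copy of the original data-bearing field is diffracted back out.
```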

Capitalizing on recent exponential improvements and commoditization in optical technologies, Project HSD is a collaboration between Microsoft Research Cambridge and Microsoft Azure to re-vitalize and re-imagine holographic storage with a cloud-first design. Researchers in physics, optics, machine learning and storage systems are working together to design high-endurance cloud storage, free of mechanical movement, that is both performant and cost-effective. This has led to new research challenges and breakthroughs in areas ranging from materials to machine learning.

Explore more


Project Silica


Another project at Microsoft Research is exploring a different medium that can meet our future storage needs for long-lived “cold” data – glass. Glass is an incredibly durable material – it can withstand being boiled in water, microwaved, flooded, scoured and demagnetized.

Project Silica is developing the first-ever storage technology designed and built from the media up, using glass, for the cloud. Researchers are leveraging recent discoveries in ultrafast laser optics to store data in quartz glass using femtosecond lasers, and building a completely new storage system designed from scratch around this technology. This opens up exciting opportunities to challenge and completely re-think traditional storage system design, and to co-design the future hardware and software infrastructure for the cloud.

As a proof of concept, Microsoft and Warner Bros. collaborated to successfully store and retrieve the entire 1978 “Superman” movie on a piece of glass roughly the size of a drink coaster. This project leverages advances first developed at the University of Southampton Optoelectronics Research Centre, and was featured in a Microsoft Ignite 2017 keynote on future storage technologies.

Explore more


DNA Storage


In addition to optical methods, DNA is also an attractive possibility for long-term data storage. DNA is extremely dense – in practice, it could store up to 1 exabyte per cubic inch. It is very durable, and can last for hundreds or even thousands of years. And it is future-proof: humans will be interested in reading their DNA for as long as humans are around, so technologies to read and write DNA will surely endure. Starting in 2015, Microsoft and University of Washington researchers have been collaborating to use DNA as a high-density, durable and easy-to-manipulate storage medium. The DNA Storage project stores digital data in synthetic DNA molecules, leveraging biotechnology advances in synthesizing, manipulating and sequencing DNA to develop archival storage.
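
At its simplest, writing digital data into DNA means mapping bits onto the four bases and back; a minimal sketch is below. Real encoding pipelines, including the ones used in this project, add run-length constraints, error-correcting codes and per-strand addresses, none of which are modeled here.

```python
# Illustrative sketch of the basic idea of DNA data storage: two bits per base.
# Real systems avoid long runs of the same base, add error correction, and
# split data into short addressed strands; this toy mapping does none of that.

BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {base: bits for bits, base in BITS_TO_BASE.items()}

def encode(data: bytes) -> str:
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(strand: str) -> bytes:
    bits = "".join(BASE_TO_BITS[base] for base in strand)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

strand = encode(b"hello")
print(strand)            # CGGACGCCCGTACGTACGTT
print(decode(strand))    # b'hello'
```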

DNA storage: image of a pencil tip and a test tube tip

In this clip from the Microsoft Research Podcast, Karin Strauss discusses several properties of DNA that make it a promising medium for data storage. Listen to the full episode.

We’re very excited about DNA for at least three of its properties. The first one is density. So instead of really storing the bits into devices that we have to manufacture, we are really looking at a molecule, storing data in a molecule itself. And so, a molecule can be a lot smaller than the devices we’re making. Just to give you an example, you could store the information, today stored in a datacenter, one exabyte of data, into a cubic inch of DNA. So that’s quite tiny. Durability is the next interesting property of DNA. And so, DNA, if preserved under the right conditions, can keep for a very long time, which is not necessarily possible with media that’s commercial today. I think the longest commercial media is rated for thirty years. That’s tape. Still tape! DNA, if encapsulated in the right conditions, has been shown to survive thousands of years. And so, it’s very interesting from a data preservation perspective as well. And then, one other property is that, now that we know how to read DNA and we’ll always have the technology to read it. So now we’ll have those readers… (if we don’t have those readers, we have a real problem) …but we’ll have those readers forever, as long as there is civilization. So, it’s not like floppy disks that are in the back of a drawer just gathering dust. We’ll really have technology to read it.

While DNA storage at scale is not yet practical, due to the current state of DNA synthesis and sequencing, these technologies are improving quite rapidly with advances in the biotech industry. At the same time, the impending limits of silicon technology as Moore’s Law comes to an end create an incentive to further explore the use of biotechnology for computation as well as storage. For instance, DNA-based databases were proposed decades ago; more recent evidence of their practicality has renewed interest in related theory and applications. In this area, Microsoft researchers have designed a DNA-based digital data store equipped with a mechanism for content-based similarity search (see paper below).

Explore more
