Microsoft Pitches New Standard for Data Center Flash Storage

Says current architecture not the best fit for cloud infrastructure

Yevgeniy Sverdlik, Former Editor-in-Chief

March 22, 2018

Kushagra Vaid, general manager and distinguished engineer, Azure Infrastructure, Microsoft, speaking at the OCP Summit 2018 in San Jose, California. (Photo: Yevgeniy Sverdlik/Data Center Knowledge)

Microsoft is proposing a new standard for the way flash devices for data center storage are designed and for the way they interact with host servers, saying the current architecture isn’t a good fit for cloud infrastructure models.

The proposed standard, which the company plans to contribute to the Open Compute Project in the form of a specification, seeks to separate lower-level flash hardware management functions from the functions that have to do with managing the stored data, leaving the former on the flash device, while pushing the latter to the host.

Moving the data management functions to the host, closer to the application workloads, will help storage better match specific application needs, Kushagra Vaid, a general manager and distinguished engineer on Microsoft’s Azure Infrastructure team, said. Disaggregating the two sets of functions will also help flash hardware and software advance independently of each other, while giving data center end users more consistency as they go through flash storage upgrades, he said.

As flash prices fall and applications demand ever-faster storage, adoption of flash memory in data center storage arrays is growing. Enterprises and cloud providers like Microsoft Azure consume about 30 percent of the global flash output, Vaid said. About 60 percent of that consumption is by the biggest cloud platforms, most of them OCP members, according to him.

For Microsoft, an OCP member, purchasing flash memory “amounts to billions of dollars in annual spend,” he said. If the industry adopts the proposed standard, codenamed “Denali,” flash manufacturers will be able to build simpler and, importantly, cheaper devices.

Those devices would also be easier and quicker to deploy in cloud data centers, because there would be less variability from vendor to vendor and from generation to generation.

Microsoft released its first Cloud SSD specification through OCP several years ago. “That helped a little bit in getting all the parameters together,” Vaid said. Recently, however, “we started seeing … fragmentation in the industry, because new functionality was being built into SSD, but not in a consistent manner.”

Tech Born at a Startup

The basis for the standard is a memory controller designed by Cnex Labs, a San Jose, California-based semiconductor startup. (Microsoft Ventures led Cnex’s Series C funding round about a year ago.) Developed initially for all-flash arrays, the controller can move functions seamlessly from host to controller, Cnex CEO Alan Armstrong explained.

Cnex started working on a proof of concept to demonstrate that the controller could be applied to cloud infrastructure two and a half years ago, Armstrong said. He joined Vaid on stage to present the proposed standard at this week’s OCP Summit, the OCP Foundation’s annual conference in San Jose.

The card, Armstrong said, will be deployed in data centers in production later this year. He did not say whose data centers they would be. Vaid did not say when Microsoft would be deploying the technology.

Today, Project Denali “ecosystem partners” also include Broadcom, Samsung, Intel, Marvell, Lite-On, and SK hynix. Having the support of Samsung, Intel, and SK hynix should help the project get adopted as an actual standard, since those companies are among the top five NAND flash manufacturers, Samsung being the leader by market share. Getting support from Samsung's two largest competitors, Toshiba and Western Digital (owner of SanDisk), as well as from another top player, Micron, would go a long way toward that goal as well.

The proposed standard has the potential to “redefine the interface between the server system and the flash,” Andy Bechtolsheim, co-founder of Sun Microsystems and founder of Arista Networks, said. It will reduce time-to-market, improve performance, and shrink latency, he said.

A core component of Denali is the interface between lower-level functions that stay on the flash device and the higher-level ones that move up the stack. The interface is called pBLK.

The lower-level functions are bad-block management, media management, and power-failure handling. The upper-level ones, which are currently performed by flash devices themselves, are address mapping, garbage collection, and wear leveling.
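To make the split concrete, here is a minimal sketch of a host-side flash translation layer sitting on top of a device that exposes only raw media operations. All class and method names are illustrative assumptions, not the actual pBLK interface: the point is only that address mapping and wear-aware block allocation live on the host, while the device firmware would keep bad-block and media management to itself.

```python
class RawFlashDevice:
    """Hypothetical device model: raw blocks and erase counts, no FTL of its own.
    In the Denali split, firmware behind this interface would still handle
    bad blocks, media management, and power failure."""

    def __init__(self, num_blocks, pages_per_block):
        self.pages_per_block = pages_per_block
        self.blocks = [[None] * pages_per_block for _ in range(num_blocks)]
        self.erase_counts = [0] * num_blocks

    def read(self, block, page):
        return self.blocks[block][page]

    def program(self, block, page, data):
        # NAND pages can only be written after an erase of the whole block.
        assert self.blocks[block][page] is None, "page not erased"
        self.blocks[block][page] = data

    def erase(self, block):
        self.blocks[block] = [None] * self.pages_per_block
        self.erase_counts[block] += 1


class HostFTL:
    """Host-side translation layer: logical-to-physical address mapping plus
    wear-aware block allocation (a stand-in for wear leveling; a real FTL
    would add garbage collection to reclaim blocks full of stale pages)."""

    def __init__(self, device):
        self.dev = device
        self.mapping = {}            # logical page -> (block, page)
        self.free = list(range(len(device.blocks)))
        self.current = None
        self.next_page = 0

    def _allocate_block(self):
        # Wear leveling: always pick the free block with the fewest erases.
        self.free.sort(key=lambda b: self.dev.erase_counts[b])
        self.current = self.free.pop(0)
        self.dev.erase(self.current)
        self.next_page = 0

    def write(self, lpage, data):
        if self.current is None or self.next_page == self.dev.pages_per_block:
            self._allocate_block()
        self.dev.program(self.current, self.next_page, data)
        # Out-of-place update: remap the logical page; the old physical
        # copy becomes garbage for a collector to reclaim later.
        self.mapping[lpage] = (self.current, self.next_page)
        self.next_page += 1

    def read(self, lpage):
        block, page = self.mapping[lpage]
        return self.dev.read(block, page)
```

Because the mapping lives on the host, a cloud operator could tune policies like allocation order or garbage-collection timing per workload, which is the flexibility Vaid describes.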

[Chart: Microsoft Project Denali architecture]

Microsoft envisions two models of deployment for Denali: one where the three higher-level functions run directly on the host system, and the other where they run on a dedicated System on Chip (SoC) or FPGA, together with accelerators. Microsoft’s cloud server motherboards, open sourced through OCP, support FPGAs.

Project Denali technical details here

A Key OCP Design Principle

Disaggregation is a crucial design principle across some of the most consequential OCP projects. Disaggregating components of switches and servers, for example, means OCP hardware can be reconfigured based on current application needs and each individual component can be upgraded on its own schedule. Disaggregating network management software from switch hardware makes the network more resilient and easier to automate, and lets operators run different kinds of software on the same boxes.

Facebook’s latest in-region data center interconnection solution, also announced at this week’s summit, employs the disaggregation concept as well. The solution allows the company to scale bandwidth in the layer that interconnects multiple data center network fabrics in a single region independently from the layer that connects the region to Facebook’s network backbone, which carries traffic from region to region.
