Hewlett Packard Enterprise Rethinks Enterprise Computing
Unveils new vision of composable infrastructure, developed under the codename Project Synergy
Fighting to hold on to a leading position in the data center of the future, Hewlett Packard Enterprise today unveiled a vision of enterprise computing infrastructure that is very different from the world of computing of the past several decades, in which the company earned its current dominance.
The vision is “composable infrastructure.” Devised under the codename “Project Synergy,” it is infrastructure that quickly and easily reconfigures itself based on each application’s needs, designed for the new world where companies even in the most traditional of industries, from banking to agriculture, constantly churn out software products and generally look to software as a way to stand out among peers.
HP, the company HPE was part of until the beginning of last month, and the other big hardware vendors that have dominated the data center market for decades, companies like Dell, IBM, and Cisco, have all struggled to maintain growth in a world where not only developers but also big traditional enterprise customers are deploying more and more applications in the public cloud, using services from the likes of Amazon and Microsoft.
Enterprises increasingly look at cloud infrastructure services as a way to reduce the amount of data center capacity they own and operate, painting a future vision of enterprise computing where the dominant hardware suppliers of today have a much smaller role to play.
Fluid Resource Pools
At least for the foreseeable future, however, companies will not be ready to move all of their critical data and applications to the cloud. The existing and new applications they choose to keep in-house are what HPE hopes will run on its new enterprise computing infrastructure.
Composable infrastructure is both hardware and the software stack that manages and orchestrates it. It breaks compute, storage, and networking into individual modules, all housed in what the company calls “frames.” Each frame is a chassis that can hold any mix of compute or storage modules a customer desires, plus a networking device that interconnects resources within the frame and with resources in other frames. Any interconnection setup is possible.
The idea is to create virtual pools of compute, storage, and networking resources, regardless of which chassis the physical resources sit in, and to provision just the right amount of each type of resource for every application, almost on the fly, to support the accelerating software release cycles many enterprises now have.
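As a rough illustration of the frame-and-pool concept, here is a minimal sketch of how an administrator might inventory such pooled resources, assuming the HPOneView PowerShell module that wraps OneView’s API; the module version, appliance hostname, and credentials are hypothetical, and Synergy frames are assumed to appear as OneView enclosure resources:

# Minimal sketch; assumes the HPOneView PowerShell module.
# Cmdlet parameter names and resource properties vary by version.
Import-Module HPOneView.200

# Authenticate against the OneView management appliance (hypothetical host and credentials)
Connect-HPOVMgmt -appliance "oneview.example.com" -user "Administrator" -password "secret"

# List each frame (assumed to surface as an enclosure resource)
Get-HPOVEnclosure | Format-Table name, enclosureType, deviceBayCount

# View compute modules across all frames as one logical pool
Get-HPOVServer | Format-Table name, model, powerState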
Not a New Idea
While radically different from the traditional data center environment, where every resource is often overprovisioned just in case demand rises, or where some resources, such as compute, are overprovisioned while others are not, the idea isn’t new.
Facebook and Intel introduced the idea of the “disaggregated rack,” or, in Intel’s parlance, Rack Scale Architecture, in 2013. One purpose was to provision the right amount of resources for every application; another was to enable Facebook data center managers to upgrade individual server components, such as CPUs, hard drives, or memory cards, individually, without having to replace entire pizza-box servers.
Using software to create virtual pools of resources out of disparate physical resources that can sit in different parts of the data center also isn’t a new concept. Open source software called Mesos, for example, creates virtual pools of resources using existing hardware in the data center. Mesosphere, a startup that built a commercial product based on Mesos, sells what it calls a Data Center Operating System, which essentially presents all resources in the data center to the applications as a single computer.
Unified API for Faster Automation
A key element of HPE’s composable infrastructure is an open unified API that DevOps staff can use to write infrastructure automation code. It replaces multiple APIs they usually have to program for separately in a more traditional environment.
In one example, Paul Durzan, a VP of product management at HPE, listed nine APIs DevOps staff usually have to code against to automate the way applications use infrastructure. They included, among others, APIs to update firmware and drivers, select BIOS settings, set unique identifiers, install an OS, configure storage arrays, and configure network connectivity.
DevOps staff, who are usually the ones programming this, aren’t always familiar with the physical infrastructure in the data center, so they have to communicate with the infrastructure team, which prolongs the process further, Durzan said, adding that it can take up to 120 hours to write automation code against all of these APIs.
HPE’s single-API alternative, the company claims, enables automation with a single line of code that invokes a pre-defined template. Infrastructure admins control the templates, which can then be used by DevOps tools such as Chef, Puppet, or Docker.
According to HPE, that single line of code may look something like this:
New-HPOVProfile -name $name -baseline $base -sanStorage $san -server $server
It is “one API that can reach down and program your whole infrastructure,” Durzan said. The API is powered by HP OneView, the company’s infrastructure management platform that has been around for about two years.
One template, for example, could be for a SQL database running on bare-metal servers using flash storage; another could be for a cluster of servers virtualized with hypervisors, also using flash storage; there could also be a unified communications template for Microsoft’s Skype for Business.
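As an illustration, applying the SQL-on-bare-metal template might look something like the following. This is a hedged sketch built around the cmdlet in the article’s own example, again assuming the HPOneView PowerShell module; the server, baseline, and volume names are hypothetical, and exact parameter sets vary by module version:

# Hypothetical inputs for a SQL-on-bare-metal profile
$server = Get-HPOVServer -name "Frame1, bay 3"        # a bare-metal compute module
$base   = Get-HPOVBaseline | Select-Object -First 1   # a firmware/driver baseline
$san    = Get-HPOVStorageVolume -name "sql-data-vol"  # a flash SAN volume

# One call stands in for the nine separate API interactions Durzan described:
# the profile carries firmware, BIOS, identifier, OS, storage, and network settings
New-HPOVProfile -name "sql-bare-metal-01" -server $server -baseline $base -sanStorage $san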
‘Trying the Right Things’
While HPE’s composable-infrastructure ideas aren’t new, the company’s scale, existing customer relationships, and the breadth of its services organization are substantial advantages. As the superstar Silicon Valley venture capitalist Vinod Khosla recently pointed out at the Structure conference in San Francisco, IBM, Dell, HP, and Cisco are all “trying the right things,” even though they haven’t come up with new, truly innovative ideas in decades.
HPE may also be in a better position to compete in the data center market as a smaller and nimbler company than it was before it was separated from HP’s consumer and printing business. Its first results post-split, announced last week, suggested that HPE has much better growth prospects than the other of HP’s two daughter cells.