Why Isn’t Hyperconvergence Converging?
There’s an intriguing new partnership between HPE and one of the pioneers in software-defined systems. No, not with SimpliVity.
“Composable infrastructure, I think, is like a Rubik’s Cube,” Chris Cosgrave, chief technologist at Hewlett Packard Enterprise, said. He was introducing his company’s strategy for hyperconvergence before an audience of IT admins, DevOps, and CIOs at the Discover conference in London back in December 2015.
“You twizzle around any combination of storage, compute, and fabric to support the particular needs that you require there. Complexity is driven by the physical infrastructure. . . If you look at a lot of the software stacks you get — for instance, virtualization — they don’t really help you in terms of the underlying physical infrastructure, firmware updates, compliance, etc. This is what we’ve got to tackle here. We’re going to have a single interface, a software-defined environment, so you abstract all of that complexity away.”
He described how, up to that point, IT departments had built their companies’ data center infrastructure through hardware procurements and data center deals, and how they typically overprovisioned in the process.
The whole point of a software-defined data center, he went on, is to introduce resource automation based on the need for those resources at present, not some far-off future date. If efficiency is the goal, why provision more resources than your applications actually require? Moreover, why make a big pool of hardware just so you can subdivide it into clusters that are then sub-subdivided into segments? Indeed, HPE consulted with Docker Inc. with the intent of treating containerized workloads and virtual machine-based workloads equivalently.
This was HPE’s hyperconvergence stance in the beginning.
“This Idea of a Federation”
In February 2017, HPE completed its acquisition of SimpliVity, a company whose publicly stated objective for hyperconverged infrastructure, five years earlier, was the assimilation of the data center as we know it. With VMware already having been folded into Dell Technologies, and with Cisco making gains in marrying SDN (a technology Cisco once shunned) with servers (another technology Cisco once shunned) with analytics (rinse and repeat), HPE was perceived as needing market parity.
In a webinar for customers just days after the acquisition, Jesse St. Laurent, SimpliVity’s vice president for product strategy, described how servers such as its existing Hyper Converged 250 (based on HPE’s ProLiant 250) had saved some customers nine-digit sums over five years against their estimated hardware procurement budgets.
“The internals that make this possible are this idea of a federation,” explained St. Laurent. “You simplify the management experience, but customers are managing more and more data centers. It’s not just a single-point location; we see, more and more, multiple sites. You have a cluster for a local site, a second cluster for [disaster recovery] or obviously, for very large customers, global networks.”
This idea of a federation, as SimpliVity perceives it, begins with the hyperconverged appliance but then extends to whatever resources lie outside the hyperconverged infrastructure sphere, including the public cloud. It doesn’t make much sense to completely automate the provisioning of some on-premises infrastructure resources without extending that automation to the other on-premises and all the off-premises resources as well.
Joining St. Laurent for the webinar was HPE’s director of hyperconverged infrastructure and software-defined storage, Bharath Vasudevan. In response to a question from the audience, Vasudevan explained why HPE would offer a single console for hyperconverged infrastructure management, separate from VMware’s vSphere. His argument was that senior-level IT managers need to focus on broader administrative issues, leaving more junior staff to handle everyday tasks like provisioning virtual machines.
But then Vasudevan issued a warning about the way things tend to work in an IT department, and why resource provisioning typically gets elevated to a red-flag event:
“The way developers tend to request hardware and equipment doesn’t really jibe well with existing IT processes, in terms of [service-level agreements],” he said. Thus a hyperconverged infrastructure console integrated with HPE OneView would be, as he explained it, “a way for IT to still maintain control of the environment, control of their [VM] images, security policies, data retention policies, all of that, but still allow self-service access. And here, it’s mitigation for IT of workloads getting sent out to the public cloud, because a lot of times, once that happens, it becomes increasingly difficult to repatriate those workloads.”
After a year and a half of hyperconverged infrastructure, HPE’s stance had evolved a bit, at least from this vantage point. Not all infrastructure is the same, even when it’s abstracted by the software-defined model. And the infrastructure toward which developers typically gravitate may not be the preferred kind.
“Solving Different Problems”
Last February, HPE entered into a global reseller agreement with Mesosphere, the company that produces the commercial edition of a scheduling system based on the open source Apache Mesos project. Now called DC/OS, the system first came to light in 2014, and Mesosphere quickly signed on reseller partners such as Cisco and big-name customers such as Verizon. HPE and Microsoft backed Mesosphere financially in 2016.
From the outset, DC/OS was touted as the harbinger of a new kind of software-defined infrastructure — one where the management process is much closer to the applications.
DC/OS includes a kind of orchestrator that makes Docker container-based workloads distributable across a broad cluster of servers. In the sense that it operates a server cluster as though it were a single machine, and distributes and manages these workloads in such a way as to perpetuate that illusion, DC/OS truly is an operating system. It provisions resources from the underlying infrastructure to give each workload the best environment available at the moment.
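In practice, handing a workload to DC/OS looks something like the following: a small app definition submitted to Marathon, the orchestrator service bundled with DC/OS. This is a minimal sketch for illustration only; the cluster address and container image are placeholders, not details from HPE’s or Mesosphere’s announcement.

    # Minimal sketch: submit a Docker container workload to Marathon, DC/OS's bundled orchestrator.
    # The cluster URL and image below are hypothetical placeholders.
    import requests

    MARATHON = "http://dcos-master.example.com/service/marathon"  # hypothetical cluster address

    app_definition = {
        "id": "/web/nginx-demo",   # logical name for the service
        "cpus": 0.5,               # CPU share requested per instance
        "mem": 256,                # memory (MB) per instance
        "instances": 3,            # DC/OS decides which nodes actually run them
        "container": {
            "type": "DOCKER",
            "docker": {"image": "nginx:alpine", "network": "BRIDGE"},
        },
    }

    resp = requests.post(MARATHON + "/v2/apps", json=app_definition)
    resp.raise_for_status()
    print("Deployment accepted:", resp.json().get("deployments"))

The request names only what the workload needs; where those instances land on the pooled cluster is the scheduler’s business.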
Sound familiar?
As part of the new agreement, HPE will pre-load DC/OS on select models of ProLiant servers. That means a customer’s choices may include a Hyper Converged 380 currently bearing SimpliVity’s branding (though not for long, SimpliVity says) or a ProLiant 380 with DC/OS ready to go.
Is this really a choice, one or the other?
“I think the short answer is, they’re somewhat solving different problems,” explained Edward Hsu, Mesosphere’s vice president of product marketing, in a conversation with Data Center Knowledge. “What DC/OS does with Apache Mesos is pool compute so that distributed systems can be pooled together.”
That actually doesn’t sound all that different from HPE’s definition of its SDDC vision from December 2015.
Hsu went on to say that his firm has been working with HPE to build plug-ins that could conceivably enable SimpliVity’s storage pools to be directly addressable as persistent storage volumes by containers running in DC/OS.
“But you know, right now, in the state of maturity of the technology,” he continued, “nothing pools storage the way Mesosphere DC/OS pools compute today. Put another way, a hyperconverged infrastructure and the storage systems that are a part of it are not completely, elastically pooled as one giant, contiguous volume yet.”
So some DC/OS customers, Hsu said, use hyperconverged infrastructure appliances to assemble their storage layers, then add DC/OS on top to make that storage contiguous under a single compute pool and addressable the way hyperconverged infrastructure originally promised, at least to container-based workloads.
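For a sense of what such plumbing could look like, Marathon already allows an app definition to request an external persistent volume through a Docker volume driver. The fragment below extends the earlier sketch; the "simplivity" driver name is purely hypothetical, a stand-in for whatever the plug-in Hsu described might eventually register as.

    # Hypothetical sketch: attach an external persistent volume to the app_definition from the earlier sketch.
    # "simplivity" is an invented driver name; the plug-in Hsu described was not yet shipping.
    app_definition["container"]["volumes"] = [
        {
            "containerPath": "data",       # mount point inside the container
            "mode": "RW",
            "external": {
                "name": "orders-db-vol",                   # volume carved from the HCI storage pool
                "provider": "dvdi",                        # Docker Volume Driver Isolator
                "options": {"dvdi/driver": "simplivity"},  # hypothetical driver name
            },
        }
    ]
    app_definition["instances"] = 1  # external volumes generally pin an app to a single instance

Until drivers like that mature, the storage underneath remains, as Hsu put it, far less elastically pooled than the compute above it.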
Seriously? A convergence platform really needs a piece of open source software to complete its mission?
“A Packing Exercise”
“If there’s anything certain about the software-defined movement, it’s that blurry lines rule the day,” said Christian Perry, research manager for IT infrastructure at 451 Research, in a note to Data Center Knowledge.
Certainly, hyperconverged infrastructure is introducing enterprises to SDDC, especially those that have never seen it work before, he said. But to the extent that they “embrace” it, they are stopping at storage rather than incorporating compute and network fabric resources as HPE’s Cosgrave originally envisioned.
“I think platforms like Mesosphere’s DC/OS get us much closer to what we might distinguish as a software-defined data center,” Perry continued. “There could be some overlap where this type of platform exists in an environment that is already hyperconverged, but the focus on the HCI side is primarily with storage, not the entire ecosystem. With this [in] mind, hyperconverged probably would fit in nicely within a Mesosphere type of management environment.”
In effect, Perry is envisioning SimpliVity as a subordinate storage service for DC/OS. And he’s not alone.
“Strictly speaking, HCI (hyperconverged infrastructure) is merely a packing exercise,” said Kurt Marko, principal of Marko Insights, “in which more nodes and associated server-side storage are crammed into a given rack-space unit. However, the assumption is that a high density of modestly-sized servers isn’t useful unless they are aggregated into some sort of distributed cluster that can pool virtual resources and allocate them as needed under the control of a master scheduler. That’s where SDI comes in.”
Marko’s suggestion is that a management function on a hyperconverged infrastructure appliance may be too far away from the workloads to properly provision resources for them in real time.
Originally, HCI vendors sought to build “cloud-lite” experiences for enterprises, he said, as they started adding centralized management consoles. But it was when vendors (Nutanix in particular) introduced storage virtualization that the hyperconverged infrastructure concept started taking off in the market. Architecturally, it brought with it sophisticated features for cloud-like provisioning of compute and fabric. But even then, he said, HCI needs a complementary platform to round out its purpose in life.
“Expect other HCI vendors to integrate [Microsoft] Azure Stack and OpenStack into ‘insta-cloud’ products for organizations that, for whatever reason, want to operate their own scale-out cloud infrastructure,” said Marko. “These will be complemented by container infrastructure such as the Docker suite, Mesosphere, or Kubernetes for those moving away from monolithic VMs to containers.”
When did not converging become part of the plan for hyperconvergence? Eric Hanselman, chief analyst at 451 Research, told us the problem may lie with an enterprise’s different buying centers — the fact that different departments still make separate purchasing decisions to address exclusively their own needs.
“As a result, storage teams wind up being the gate,” Hanselman explained. “If server teams feel that they’re frustrated at [not] being able to get started when they want, in the quantities that they need, guess what? You can buy HCI systems in a manner very similar to what you’re doing with servers today, and simply have all your own storage capacity there. One-and-done, and off you go.”
Hanselman describes a situation where enterprises purchase HCI servers for reasons having little or nothing to do with their main purpose: staging VM-based workloads on a combined, virtual platform. Meanwhile, development teams invest in platforms such as DC/OS or Tectonic (a commercial Kubernetes platform made by CoreOS and offered by Nutanix) to pool together compute resources for containerized workloads. Then, when one team needs a resource the other one has, maybe they converge, and maybe they don’t.
“The challenge, of course, from an organizational perspective, is that you now have new storage environments that sort of randomly show up, [each of which is] tied to whatever the project happened to be,” continued Hanselman. “So you’ve got an organizational management problem.”
Which may have been the impetus for the creation of hyperconverged infrastructure in the first place: the need to twizzle around the variables until the workloads are running efficiently. HPE’s Cosgrave argued that software stacks can’t help solve the problems with the underlying infrastructure. As it turns out, they may be the only things that can.