HPE Offers Container Suite as a ‘Business Decision,’ Not a Compulsory Upgrade

Scott Fulton III, Contributor

April 12, 2017

“We bit the bullet,” said HPE’s vice president for product marketing, Roy Ritthaler.  He was speaking candidly with Data Center Knowledge about his company’s release Wednesday of containerized versions of its IT Operations Management (ITOM) suite, and an architectural choice HPE made to adopt a scalable infrastructure for its operations software — an architecture Ritthaler admitted was spurred on by HPE’s customers.

Certainly, the number of containerized software workloads across data centers is growing, and containers do offer administrators benefits in both scalability and location flexibility, Ritthaler acknowledged.

“The management software has to be able to control, and deal with, that, to reduce cost and risk for our customers,” he said.  “But if you’re going to go do that, we looked at it very hard, and from an engineering perspective of the software tools themselves, you actually have to build in the same era of technology as the things you’re managing.”

Same Skills, Different Role

Ritthaler’s comments come in conjunction with HPE’s latest move to accommodate the methods a growing number of data centers are using to stage workloads, without uprooting its existing base.  While containerization (the Docker wave) is indeed catching on, it’s not a groundswell.  Even as enterprises incorporate more cloud-based resources, their respective infrastructures resemble one another less and less.  Already, no single IT management methodology can accommodate everyone.

But the livelihoods of IT personnel are typically guaranteed by certification.  HPE ITOM is a certified skill.  Docker Inc.’s program for certifying Enterprise Edition skills only began last month.  There’s less incentive for admins and IT managers to adopt new skills for which they won’t get credit, and which they can’t include on their résumés.  And institutions are less inclined to invest in new skills whose long-term value is not assured.

This is among the reasons why HPE has been working with Docker Inc. on a limited containerization strategy.

“It really is a customer choice, how they want to architect their administrative and maintenance domains,” HPE’s Ritthaler told us.

His company’s new architecture is being called container deployment foundation, for now with lower-case letters.  What he described as the four principal pillars of ITOM — Data Center Automation, Hybrid Cloud Management, IT Service Management Automation, and Operations Bridge — will be made available optionally beginning Wednesday on this new foundation, with updates planned on a quarterly basis.

IT personnel who are already skilled with these four pillars, we’re told, won’t have to learn entirely new methodologies just because their data centers began using containerization.  What’s more, data centers may choose to maintain containers, virtual machines, Web services, and client/server applications in their own domains, with ITOM able to marshal all of them.  That’s the arrangement Ritthaler acknowledged many HPE customers will probably prefer.

“IT operations is a multi-faceted world,” he remarked, “where these new workloads are just a part of the overall management of things.  There’s huge cost advantages if you can go up-level by one or two levels, and look at things from a uniform administrative and management perspective.”

Developers Won’t Laugh Last

The move toward containerization has largely been driven by developers who want the ability to build, test, and deploy applications on software-defined infrastructure.  In developers’ worlds, applications are orchestrated by distributed systems managers such as Kubernetes, Docker Swarm, and Mesosphere DC/OS.  This style of orchestration is critical to the requirements of distributed services, and provides most of the payoff for the entire move to Docker-style containers.
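To make that concrete, here is a minimal sketch of what developer-driven orchestration can look like, using the official Kubernetes Python client.  The deployment name, image, and replica count are illustrative placeholders — not anything HPE or Docker ships — and the example assumes a cluster reachable from a local kubeconfig:

```python
# Minimal sketch: asking Kubernetes to run three replicas of a containerized app.
# Assumes the official 'kubernetes' Python client and a reachable cluster;
# every name and image here is a placeholder, not an HPE or Docker product.
from kubernetes import client, config

config.load_kube_config()  # read cluster credentials from ~/.kube/config

deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="example-web"),
    spec=client.V1DeploymentSpec(
        replicas=3,
        selector=client.V1LabelSelector(match_labels={"app": "example-web"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "example-web"}),
            spec=client.V1PodSpec(
                containers=[client.V1Container(name="web", image="nginx:1.25")]
            ),
        ),
    ),
)

# The orchestrator, not an administrator, decides which hosts run the three copies.
client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```

The developer declares a desired state and the orchestrator continuously enforces it — a very different posture from the runbook-driven style of traditional IT operations management.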

But although the product managers behind these platforms will publicly underscore the philosophical differences between workload orchestration and systems management, orchestrators have been auditioned from the outset for the role of overall cluster manager.  More than once, we’ve heard that very little prevents an orchestrator platform from scheduling conventional virtual machine-based workloads alongside container-based ones.

This is one reason why VMware has found itself embracing containerization — twice, with both VMware Integrated Containers and Photon — instead of downplaying it.  VMware has hard-won territory to defend.  So, by that same logic, does HPE.

Enterprise customers, said HPE’s Ritthaler, “want to see new technologies that come in not to be at odds with what they’re already doing, but be part of the overall, end-to-end flows and management procedures that they’ve already put in place.”

For many of these customers, he pointed out, “orchestration” has already been defined by HPE’s existing Operations Orchestration (OO) product.  In their environments, he asserted, years and years of processes have already been worked out for automating workloads.  So the methodology for integrating new containerized workloads should not have to change what already works.

Who Orchestrates the Orchestrator?

Something may have to change anyway, though: the way we think about staging the new processes.  Ritthaler draws a verbal picture for us of separate environments for different management roles, enabling developers to orchestrate to their hearts’ content... with oversight.

“There’s an access environment, an orchestration environment, and a content environment that comes with our orchestration,” he explained.  “We have built up over the years this huge library of content and integrations, that we allow our customers and partners to access.  For many years, our tools have used that same orchestration engine [HPE OO] to achieve their integrations either between themselves in an end-to-end process, or downstream into other environments that need to be managed — linking with a competitive management stack or a complementary one like Microsoft or VMware.”

What Ritthaler is implying by all of this may not be self-evident:  If an overall orchestrator, by HPE’s definition, is capable of staging workloads of all classes in reasonably similar, if not identical, ways, then much of the pressure enterprises may be feeling to containerize existing workloads diminishes greatly, if not disappears entirely.  Theoretically, you only need to containerize an old, monolithic application if your goal is to subject it to orchestration by something like Kubernetes, DC/OS, or Swarm.

Put another way, why bother putting the old workloads through the strainer if we can make the new workloads play nicely enough with them, just as they are?
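As a rough illustration of that “just as they are” option — every name and endpoint below is hypothetical — a new containerized service can simply consume a legacy application over the interface it already exposes, with no repackaging of the older system:

```python
# Hypothetical sketch: a new containerized Python service consuming a legacy,
# non-containerized application over the HTTP interface it already exposes.
# The hostname, port, and path are placeholders, not taken from the article.
import requests

LEGACY_ENDPOINT = "http://legacy-erp.internal:8080/api/orders"

def fetch_open_orders():
    # The legacy system stays exactly where it is; the new workload just calls it.
    response = requests.get(LEGACY_ENDPOINT, timeout=5)
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    print(f"Open orders: {len(fetch_open_orders())}")
```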

“It becomes a business decision,” said Ritthaler, “not a forced march into a new technology.  And I think, in a lot of ways, it will actually help the adoption of containerization in the enterprise, because the mandate to have uniform coverage doesn’t exist.  You get to do that on a domain-by-domain, application-by-application, basis, based on business case.  And many business applications, while you may want to containerize them, if the cost advantages aren’t there, and you don’t have the budget or the priority or any mandate to do that, you don’t have to.”

HPE’s value proposition for containerized ITOM could be called the “conservative alternative” to a full-on container migration.  It relies upon organizations’ willingness to compartmentalize all of their workloads into separate, but interoperable, environments.  It’s not a new set of silos, the argument goes, if they’re all being managed the same way by the same people.  For many enterprises — especially the ones still suffering from “virtual stall” from the last migration — that argument may strike a chord.

About the Author

Scott Fulton III

Contributor

Scott M. Fulton, III is a 39-year veteran technology journalist, author, analyst, and content strategist, the latter of which means he thought almost too carefully about the order in which those roles should appear. Decisions like these, he’ll tell you, should be data-driven. His work has appeared in The New Stack since 2014, and in various receptacles and bins since the 1980s.
