Why Michael Dell is Smiling
VMware gives him the virtual data center
Pat Gelsinger thinks the days of the enterprise data center as we know it are numbered.
“Increasingly, companies want to get out of the job of building their own data centers, and operating their own data centers, providing a huge opportunity for service providers,” the CEO of VMware said from stage at VMworld 2016 in Las Vegas late last month, in one fell swoop declaring the principal focus of this very publication obsolete.
“Highly instrumented, modern, efficient, cloud-scale data centers,” is how he went on to describe service provider data centers of the future: bigger, more flexible, but fewer in number. “And 2016 is the crossover year when that becomes the dominant way that data centers are built and operated.”
Certainly the companies that actually build and operate data centers will have something to say about that last point. Any crossover due to take place this year has only four months left in which to make its point. But Gelsinger — a man who previously spent 30 years as the point man for Intel’s strategy of building x86 architecture into the center of the enterprise — is now betting his career on the notion that most enterprise data centers will be defined digitally, not physically.
And betting the entire future of his own company to back Gelsinger’s notion is someone else whose life story centers on building x86 boxes into the core of business operations: Michael Dell. Last week, when the merger between Dell Inc. and VMware parent EMC Corp. finally closed, Mr. Dell became the chief of the company responsible for erasing all the gaps between the boxes whose manufacturing and delivery principles he himself pioneered.
So why isn’t Dell preparing himself, and the rest of the world, to refashion VMware under what we used to call “the Dell model”?
“The open ecosystem of VMware is absolutely critical to its success,” said Dell from the same VMworld stage. “So we’re only going to continue to encourage that. That hasn’t changed, and won’t change.”
It was the message that both technologists and investors attending the show wanted to hear most, even as measurable changes in the enterprise data center market place new stresses and constraints upon VMware separately from Dell. While Dell Technologies will continue to be a private entity, as it has been since 2013, VMware will be the only facet of the post-merger behemoth whose capital is tradable through common stock. That makes it the component of Dell most sensitive to shifts in investors’ moods about the infrastructure market.
And here is where VMware has, of late, failed to prove itself. Up until the acquisition announcement, VMW was trading at about 40 percent of the value of its 2014 highs. The vSphere brand is perceived as declining, as organizations of all sizes move more of their workloads into the public cloud, where they end up being managed by the likes of Amazon and Microsoft.
What has to change is VMware’s perception of what its customers perceive the enterprise data center ecosystem to be. Put another way, it has to start seeing what its customers are seeing.
The Variable Footprint
Infrastructure is not truly scalable infrastructure, in customers’ minds, if they can’t extend their workloads into public clouds. This deficiency had grown so conspicuous that even investment analysts were noting it in their reports. So if there’s any single takeaway from just about any five-minute sampling of VMworld 2016, it’s that VMware is redefining NSX from a software component into a service.
The reformed NSX comes complete with its own Web portal, extending vSphere’s virtual infrastructure footprint into public cloud territory — Amazon Web Services, Microsoft Azure, Google Cloud Platform, IBM Cloud (VMware’s new preferred partner), and certainly vCloud Air.
“We’re taking existing on-prem products, like classic products we have today — and you can still install them on-premises, and many customers want it that way — and we enable them to manage workloads across clouds,” said Guido Appenzeller, VMware’s chief technology strategy officer, during a press conference at VMworld. vRealize Automation and NSX are being configured so that the same network policies that apply to workloads on-premises travel with those workloads as they’re migrated into public cloud space.
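The principle Appenzeller describes — policy defined once against the workload, enforced wherever that workload lands — can be sketched in a few lines of code. The following Python sketch is purely illustrative; the class and function names are hypothetical and do not correspond to any actual vRealize or NSX API.

```python
# Illustrative sketch only: hypothetical objects, not the vRealize/NSX API.
from dataclasses import dataclass, field


@dataclass
class NetworkPolicy:
    """Security rules expressed against the workload, not the underlying network."""
    allow_inbound: list[str] = field(default_factory=list)   # e.g. "tcp/443 from web-tier"
    allow_outbound: list[str] = field(default_factory=list)  # e.g. "tcp/1433 to db-tier"


@dataclass
class Workload:
    name: str
    policy: NetworkPolicy  # the policy travels as part of the workload's definition


def migrate(workload: Workload, target: str) -> None:
    """Place a workload somewhere new; because the policy is attached to the
    workload rather than to switches or firewalls, the same rules re-apply
    at the destination."""
    print(f"Placing {workload.name} on {target}")
    print(f"Re-applying policy: {workload.policy}")


app = Workload("claims-api",
               NetworkPolicy(["tcp/443 from web-tier"], ["tcp/1433 to db-tier"]))
migrate(app, "on-prem vSphere cluster")
migrate(app, "public cloud region")  # identical policy, different substrate
```

The design point is the one Appenzeller makes: the workload's definition, not the physical network underneath it, is the unit that carries security intent across environments.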
“Enterprises don’t need different, competing environments,” Gelsinger told one analyst during the conference. “They want more homogeneity, so they can do their innovations at higher levels. Having heterogeneity in their environment means having two stacks; two tools; different, fractured infrastructure. We’re seeing customers go exactly the opposite direction.”
More to the point, enterprise data center operators have been doing everything they can to maintain the sanctity of their application environments, as their data centers change around them.
Disruption or Security: Choose One
To drive home the point that VMware’s proverbial weather vane has been effectively recalibrated, the company presented reporters and analysts with two major NSX customers at the show. Both are in the financial services industry; neither utilizes containerization to any degree. Together, they represent VMware’s ideal customer: an organization with plenty of resources, but also with a boatload of legacy assets, seeking a way to dictate the pace of its own expansion without rendering older assets obsolete too soon.
“[With] a traditional, physical firewall model, you need to re-architect the underlying network,” declared Brandon Hahn, a solutions architect with Wisconsin-based West Bend Mutual Insurance. “And you need to know ahead of time what your final state needs to look like, because otherwise, you’re going to be bringing in additional devices, and it’s very expensive and time consuming to retrofit.”
Hahn was discussing the challenge of getting network virtualization into a financial services organization, and it’s a significant one. Management may, at some point, become reasonably convinced of the gains the organization stands to make from transitioning to a virtualized network infrastructure — measurable efficiencies, shortened development cycles, better server utilization. Nevertheless, the initial argument and the value proposition for essentially gutting the foundation of the organization’s entire IT infrastructure and substituting a kind of nebulous construct are not tailor-made for the CIO level. The problem is easy enough to frame, but the solution still sounds like science fiction as far as enterprise data centers are concerned.
“With a technology like NSX, we can go in and say, we’re building a network security platform,” said Hahn, revealing the sugar coating that, at least in his case, ended up making the medicine digestible. (In, as VMware might say, a most delightful way.) “Then say, three years from now, we bring in a brand new application with a different architecture that we had not planned on. We can still apply a security model to that without re-architecting the underlying network — which is huge, especially with Docker, containers, Photon.
“We don’t know where we’re going to be from a development cycle [standpoint] in three years,” Hahn continued. “But we know that the framework that we’ve deployed, from a security perspective, allows us to secure that going forward.”
Brian Irwin, a technical program manager with Seattle-based Washington Federal, told journalists it was security that sold his firm on NSX as well — specifically, the capability it gives administrators to microsegment the network, subdividing it into separate nets.
Left to right: Brandon Hahn, Solutions Architect, West Bend Mutual Insurance; Dr. Rajiv Ramaswami, EVP/GM, Network & Security Business Unit, VMware; Brian Irwin, Technical Program Manager, Washington Federal
“In a traditional model, you’re not going to inspect traffic within a Web tier, or within an app tier, or within a [database] tier. And for us, we were able to microsegment the tiers, so if you had a hacker get in at the Web tier on Web 1, they’re not necessarily going to be able to move laterally to Web 2, Web 3, Web 4. We’re just trying to make it as hard as possible for an incursion to happen.”
NSX enables any system of applications to perceive just the network it needs, and nothing more. That limits any user’s access, including a malicious user’s, to the secluded portion of the network containing the application that granted access in the first place. It also makes for an easily digestible use case for upper management, who might not otherwise understand the benefit of, say, projecting a pre-existing data warehouse as a persistent storage container for an application running under Docker.
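Irwin’s lateral-movement example maps onto a simple default-deny model of microsegmentation, sketched below. This is a conceptual illustration of the idea, not NSX’s distributed firewall or any VMware API; the segment names and rule table are invented for the example.

```python
# Conceptual illustration of microsegmentation; hypothetical, not NSX code.
# Each entry permits traffic from one named segment to another on given ports;
# anything not explicitly allowed is dropped, so a compromised host in the
# web tier cannot pivot laterally to its web-tier neighbors.

ALLOWED_FLOWS = {
    ("web", "app"): {443},    # web tier may call the app tier over HTTPS
    ("app", "db"):  {1433},   # app tier may reach the database tier
}

def is_permitted(src_segment: str, dst_segment: str, port: int) -> bool:
    """Default-deny: only flows listed in ALLOWED_FLOWS pass."""
    return port in ALLOWED_FLOWS.get((src_segment, dst_segment), set())

# A hacker landing on Web 1 cannot move sideways to Web 2: web -> web is not allowed.
assert not is_permitted("web", "web", 22)
# The sanctioned path through the tiers still works.
assert is_permitted("web", "app", 443)
print("default-deny segment policy behaves as expected")
```

The point of the model, as Irwin put it, is simply to make lateral movement as hard as possible: traffic inside a tier is inspected and blocked by default rather than implicitly trusted.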
The Bridge
The other hard-to-swallow value proposition is hyperconvergence, to which NSX is also tied. It’s easy to declare hyperconvergence a hot technology. But within real-world, pre-existing data centers today, compute, storage, network fabric, and memory capacities are not being pooled together in homogeneous streams.
“Look, there’s been tremendous growth in converged and hyperconverged, exactly for the reason that customers want that easy-to-adopt model,” Michael Dell said, speaking perhaps with a note of extended optimism. “This plays very much into what we see as big pockets of market demand.”
It’s a phrase we’ve heard from Dell and from his company before, including in the realm of virtualization. Four years ago, Dell acquired Wyse, with its terminals and virtual desktop technologies, intending first to leverage VMware technologies and later Citrix XenDesktop. The Wyse acquisition was meant to make cloud-based applications available to thin client machines, as a way of compensating for lower enterprise demand for PCs. It was Dell’s first play at delivering cloud-based workloads, and it’s fair to say in retrospect that it did not move the needle.
Now, the only needle in the new Dell Technologies hierarchy that is visible to the typical shareholder belongs to VMware. Yes, VMware has a strong customer base with its ESX and ESXi hypervisors. And yes, VMware’s base and Dell’s overlap only partway, giving the new Dell new prospects.
But the principal task in front of the new company is to build a bridge between the physical infrastructure in which Dell Inc. once excelled, and the virtual infrastructure which will become — for better or worse — the face of Dell Technologies. From an applications perspective, it’s a bridge between traditional workloads and “modern” workloads.
“The reality is that these two types of IT are not yet integrated,” IDC program director for software development research Al Hilwa wrote in a note to Data Center Knowledge, “and DevOps and continuous delivery workflows which predominate in one realm are still not typically practiced with traditional workloads.”
Hilwa believes VMware’s recent leveraging of security as an overall theme helps it to better associate its technology evolution with things the executive suite actually thinks its data center needs. For example, vSphere Integrated Containers is a means of enabling container infrastructures to co-exist with traditional, database-driven architectures. That’s a bit esoteric, but restating that theme as a security play makes it sell better.
“This may be just the right ticket for the traditional side of IT to dip its toes in the container world. It may even help the two sides of IT get closer together,” writes Hilwa. “What is important to assess is to what degree will VMware bring its customers to the promised land of digital transformation. The signs are good, but the company has to continue to invest to bridge the two sides of IT.”
Technology analyst Kurt Marko believes organizations may remain disinclined to invest in such bridge-building exercises until the period of disruption that Mr. Dell and others believe we’re in starts producing casualty tallies.
“The innovation gap won’t be apparent to many in traditional IT — or VMware execs if that’s all they listen to,” writes Marko for Diginomica, “until we have a chasm-crossing moment when much smaller, cloud- and API-native companies exploiting asymmetrical technological advantages begin regularly killing large incumbent business.”
VMware’s future — and, in turn, Dell’s — is now staked on the success of a piece of software whose intended purpose is to make data center resources look the same to various generations of applications. The challenge today is making its own customers recognize the need for it. When we asked VMware’s customers the extent to which containerization has already impacted their business, West Bend Mutual’s Hahn responded, “Let me know when you find a commercial, off-the-shelf application that’s containerized, and then we’ll start talking.”