Whatever Happened to High Availability?
High availability has been lost in the din about cloud computing, but it is still a key part of the IT narrative, whether you hear about it or not, writes Kai Dupke of SUSE LLC.
February 18, 2013
Kai Dupke is Sr. Product Manager, SUSE LLC, a pioneer in open source software and enterprise Linux.
You don't hear a lot about high availability (HA) these days, what with all the media attention focused on cloud computing. Five years ago, high availability and clustering were a big part of the IT conversation. These days, not so much. But high availability is still a key part of the IT narrative, whether you hear about it or not.
High availability has been lost in the din about cloud computing because high availability has never been an expectation of the cloud computing story. IT shops looking at cloud computing are seeking the benefits of agility and lower cost instead.
Application development on the UNIX and Linux platforms traditionally took the stance that the infrastructure would shoulder most, if not all, of the HA responsibilities. The storage layer would include RAID arrays, the networking layer would provide redundant network paths, and the operating system would include HA features to ensure maximum uptime for the application.
There is some HA workload at the application layer, of course: support for clustering is one way application developers have been able to incorporate HA features.
High Availability Still in Infrastructure Layers
Even as enterprise customers move to a more virtualized infrastructure, such as private clouds or virtual data centers, HA is still very much centered at the infrastructure layers, not at the application layer. There may be some HA support at the virtual layers, naturally, but that's still part of the infrastructure narrative.
Listening to the public cloud story, however, you get a much different tale. In public clouds, expectations of the infrastructure layer are not as high as they were for legacy systems. It's more of a commodity, get-what-you-pay-for mentality when it comes to the infrastructure, so application developers have to take the only path open to them: build HA functionality into their applications.
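To see what that burden looks like, consider a minimal sketch of client-side failover, the kind of logic public cloud applications end up carrying themselves. The replica URLs here are hypothetical placeholders; the point is that the application, not the infrastructure, does the work of surviving a failed instance.

    import urllib.request

    # Hypothetical replicas of the same service in two cloud regions.
    REPLICAS = [
        "http://app-us-east.example.com/data",
        "http://app-us-west.example.com/data",
    ]

    def fetch_from_any(timeout=5):
        """Try each replica in turn and return the first successful response."""
        last_error = None
        for url in REPLICAS:
            try:
                with urllib.request.urlopen(url, timeout=timeout) as resp:
                    return resp.read()
            except OSError as err:   # refused connection, timeout, DNS failure
                last_error = err     # note the failure and try the next replica
        raise RuntimeError("all replicas failed: %s" % last_error)

Every application team writes some variation of this, and every variation has to be tested, maintained, and debugged separately.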
This is not to tear down the public cloud; the flexibility and cost structure of the public cloud are part of why it works for so many organizations. Plus, there is the very real logistical challenge of trying to apply HA principles to a public cloud. As Japan learned to its dismay in 2011, supporting HA across public clouds en masse is simply not possible with current technology.
Cloud Doesn't Work For Everything
But HA is still a necessary part of IT, because not every IT department needs all of its services out in the cloud.
First, there are the very real costs of migrating to the cloud. Because clouds today do not provide HA, customers are asked to rewrite their applications. Because the cloud misses a crucial feature, customers have to take action and spend money to do something the infrastructure should do anyway.
It's been cool to watch marketing departments turn this additional workload into a benefit. It's like selling a car without a steering wheel and pitching it as a feature: "Bring your own wheel, and you can be sure no one else drives your car." That, in effect, is how some companies are selling the cloud.
The fact is, the biggest inhibitor for cloud computing is the lack of the infrastructure support needed by many business-related applications. This is complicated by the fact that most of these apps are third-party applications, not built by the companies using them.
To obtain the benefits of HA in the cloud, you could argue that these third-party companies should open up the setup of, and access to, their applications. That sounds good on paper, but it means every third-party vendor would end up creating its own way of doing this, multiplying the effort of getting HA at the application level.
What's the answer?
Embedded HA - HA at the operating system level - scales much better.
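On an enterprise Linux platform, that can be as little as a short cluster resource configuration. As a rough sketch, here is what a failover pair might look like in the crm shell used by Pacemaker (the cluster stack shipped with the SUSE Linux Enterprise High Availability Extension); the IP address, netmask, and Apache config path are placeholder assumptions:

    # Two-node failover pair: a floating IP and a web server move together.
    # The address, netmask, and config path are placeholders.
    primitive virtual-ip ocf:heartbeat:IPaddr2 \
        params ip="192.168.1.100" cidr_netmask="24" \
        op monitor interval="30s"
    primitive web-server ocf:heartbeat:apache \
        params configfile="/etc/apache2/httpd.conf" \
        op monitor interval="60s"
    group web-stack virtual-ip web-server

Every application sitting behind that virtual IP inherits failover from the operating system layer, with no HA logic in the application itself.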
There's a reason I only hear discussions about HA for private clouds: customers don't trust external service providers, and they need not just HA but iron-clad service-level agreements (SLAs) as well.
Changes to the Cloud
One answer to this problem is simply to make the cloud application-aware, essentially bringing HA to the cloud layer. How would this happen?
You could make the distributable workloads distributed, and for monolithic workloads (and nearly every application has a monolithic data source at some point), provide HA either in the guest operating system (which means using the traditional setup, but in the cloud) or in the cloud infrastructure itself.
As a product manager, I obviously have a solution for this, on three levels:
Provide HA in the guest operating system.
Provide remote monitoring and management of guests, without the need to install software in every guest (sketched in the example after this list).
Provide infrastructure HA on the cloud level.
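As a sketch of the second level, monitoring can be done from the host through the hypervisor's API instead of through software inside each guest. Here is a minimal example using the libvirt Python bindings; the connection URI and the restart-on-shutoff policy are assumptions for illustration:

    # Agentless guest monitoring: query the hypervisor for each guest's state
    # rather than installing an agent inside every guest.
    import libvirt

    conn = libvirt.open("qemu:///system")   # hypervisor URI is an assumption
    for dom in conn.listAllDomains():
        state, _reason = dom.state()
        if state == libvirt.VIR_DOMAIN_SHUTOFF:
            # Simplistic remediation policy: restart any guest found shut off.
            print("guest %s is shut off; restarting" % dom.name())
            dom.create()
    conn.close()

A real deployment would hand that decision to a policy engine rather than restarting blindly, but the principle stands: the guests stay untouched.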
When workloads get moved to the cloud rather than redeveloped for it, they will run fine until the cloud has an issue, and customers will then start to request HA. Or customers will run the cloud as a backup for the data center, getting availability at a lower systems cost but a higher cost of operations.
Elastic Cloud is Not Assured
These latter customers will look into automating their systems, or discover that the elasticity of the cloud-as-backup is not a given. Ask the Japanese companies that tried to activate their cloud backups all at once, while the cloud itself was suffering downtime. All of this means customers will need a structural solution and those rigid SLAs.
However, getting HA actually into the cloud will bring HA to every application running on the cloud infrastructure, at a modest cost. Such a deployment would be easy to handle, without the need for special setups and specialist consultants, *plus* it gives customers the flexibility of the cloud and the option to think about clusters and workload optimization.
All Systems Impacted
It's not just mission-critical systems we're talking about. With HA capabilities available right inside enterprise Linux platforms, there's a very real cost benefit to using those native HA features rather than tacking on an expensive, proprietary HA toolset. With cost largely taken out of the equation, we're seeing HA being applied to servers out on the edges, like file, mail, and print servers.
That may seem like overkill, but when data and information flow is a critical part of any business process these days, who wants to lose time and money when the e-mail system crashes? The days of tolerating unavailability are moving behind us.
The real story is that no server is expendable anymore. Business and IT are so closely intertwined that no failure is acceptable now. HA may not be loud and noisy like the hype about cloud computing, but it is as important a part of the IT narrative as ever.