Application Delivery in a Software-Defined Data Center
The data center of the future promises us a more efficient, responsive, and streamlined model for delivering enterprise IT, writes Robert Haynes of F5 Networks. However, we must still solve familiar issues, such as how to best supply application services.
February 16, 2015
Robert Haynes has been in the IT industry for over 20 years and is a member of the F5 Marketing Architecture team where he spends his time taking technology products and turning them into business solutions. Follow Robert Haynes on Twitter @TekBob.
Plus ça change, plus c'est la même chose (the more things change, the more they stay the same), or in more common IT parlance: same stuff, different day. Many of us know this sentiment well. As disruptive technologies and trends transform how we do business, at the heart of it all we're still grappling with the same core issues: driving better performance, ensuring security, and managing costs.
As IT, and IT delivery, continues to evolve, the software-defined data center represents the next major advancement in app delivery. This data center of the future promises us a more efficient, responsive, and streamlined model for delivering enterprise IT. At the same time, we must still solve familiar issues, such as how to best supply application services.
Bigger, Stronger, Faster Applications
Application services are functions delivered in the data path between end users and applications that make those applications more secure, faster, or more highly available. Firewalling, load balancing, authentication, and encryption, for instance, will remain ongoing requirements, but they must now be aligned with this on-demand, highly orchestrated design.
The starting point for all design discussions, whether about technology or topology, is the set of requirements. While there is a range of application service functions to deliver, there are three universal requirements to implement:
A comprehensive API that lays the foundation for integrating with the orchestration tools that are necessary for creating the on-demand infrastructure (a sketch of what this might look like follows this list).
On-demand creation of new services using resources that are delivered from a common pool or platform to avoid delays in acquisition or provisioning.
Ubiquitous deployment of application services that may require compatibility with networking overlays, like VXLAN or NVGRE, or the ability to work across multiple virtualization platforms or public cloud offerings.
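To make the first two requirements more concrete, here is a minimal sketch of what an orchestration workflow might look like when it requests a new load-balancing service through a REST API. The controller URL, endpoint path, payload fields, and token handling are hypothetical placeholders for illustration, not any particular vendor's API.

# Hypothetical sketch: an orchestration step provisioning a load-balancing
# service from a shared platform over a REST API. The endpoint and schema are
# illustrative placeholders, not a specific product's interface.
import requests

CONTROLLER = "https://adc-controller.example.com/api/v1"  # placeholder controller URL
TOKEN = "example-token"                                   # normally injected by the orchestrator

def create_lb_service(app_name, pool_members):
    """Request a new load-balancing service from the common resource pool."""
    payload = {
        "name": f"{app_name}-vip",
        "type": "load-balancer",
        "members": pool_members,     # e.g. addresses of the app-tier virtual machines
        "overlay": "vxlan",          # attach the service to the fabric overlay
    }
    resp = requests.post(
        f"{CONTROLLER}/services",
        json=payload,
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["id"]         # ID the orchestrator keeps for later teardown

if __name__ == "__main__":
    service_id = create_lb_service("billing-app", ["10.0.12.21", "10.0.12.22"])
    print(f"Provisioned service {service_id}")

The point is less the specific calls than the pattern: the orchestrator treats the application service as just another resource it can create, reference, and later destroy programmatically.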
With these requirements in mind, there are three preferred modes of application service delivery in the software-defined data center: specialized hardware devices, virtual machines, and the virtualization platform itself.
Three Preferred Modes of Delivery
For decades, specialized hardware devices (firewalls, VPN concentrators, and application delivery controllers) have been the first choice for mission-critical production environments, because they deliver specialized processing hardware, high availability, and high capacity. Can they be integrated successfully into the software-defined data center? The answer, as ever, is that it depends. If the hardware platform is API-driven, scales seamlessly while preserving security controls, and can connect into the data center fabric (including support for overlay and tunneling protocols), then the answer is yes. This design makes the application services a function of the infrastructure rather than a specific entity within the application stack. As a result, you can greatly simplify and standardize the delivery of services and help combat virtual machine sprawl.
Like most things, however, this approach does have its drawbacks. Hardware devices might be efficient at scale, but they inevitably concentrate services into a small number of physical locations. This is likely to result in significant "tromboning," which happens when application traffic must leave a physical host to be serviced by a separate device before it returns and reaches the next virtual machine in the application stack. This is particularly pronounced for services such as east-west firewalls, which can generate a lot of extra network traffic.
In the virtual space, specifically with application services delivered as virtual machines, services can be deployed where and when they are required, and can be placed close to the application servers to potentially remove additional network hops. While virtual devices conceptually fit well with the software-defined data center, the need for orchestration extends beyond creating services to encompass the creation, licensing, and deletion of the devices themselves.
Additional integration with the chosen server virtualization platform is required, and organizations need to be sure that the licensing models offer the flexibility required to meet the demands of a more dynamic data center. In addition, it’s important to check that virtual versions from key vendors are available across a wide range of server virtualization and cloud platforms, especially if a hybrid data center model is a future goal.
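As a rough illustration of that wider orchestration scope, the sketch below walks through the extra lifecycle steps a virtual appliance introduces: the device itself must be deployed, licensed, configured, and eventually destroyed along with its license. Every name here (templates, hosts, license pools) is hypothetical, and the platform calls are simple stand-ins rather than a real hypervisor or vendor SDK.

# Hypothetical sketch of virtual-appliance lifecycle orchestration. The
# functions below are stand-ins (they only print) for calls into a hypervisor
# or vendor API; the names and identifiers are illustrative.
import uuid

def deploy_appliance(template, host):
    """Stand-in for cloning an appliance VM onto a host near the application tier."""
    vm_id = f"{template}-{uuid.uuid4().hex[:8]}"
    print(f"Deploying {template} on {host} -> {vm_id}")
    return vm_id

def license_appliance(vm_id, license_pool):
    """Stand-in for checking a license out of a shared subscription pool."""
    print(f"Assigning a license from {license_pool} to {vm_id}")

def configure_service(vm_id, app_vip):
    """Stand-in for pushing the actual application service configuration."""
    print(f"Configuring load balancing for {app_vip} on {vm_id}")

def destroy_appliance(vm_id, license_pool):
    """Teardown has to return the license and delete the VM, not just the configuration."""
    print(f"Returning license to {license_pool} and deleting {vm_id}")

if __name__ == "__main__":
    vm = deploy_appliance(template="adc-virtual-edition", host="esx-rack2-07")
    license_appliance(vm, license_pool="adc-subscription-pool")
    configure_service(vm, app_vip="10.0.30.5")
    # ... the application's lifetime ...
    destroy_appliance(vm, license_pool="adc-subscription-pool")

The licensing step is the one that most often trips up automation, which is why flexible, pool-based licensing models matter in a more dynamic data center.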
Virtualization platforms are also a very attractive way to deliver application services since many of them now include these services as part of their core functionality. Additionally, since these services are controlled and orchestrated by the core virtualization technology, they are usually bundled into the platform costs. Integrated into the hypervisor kernel, these services are embedded and available to all virtual machines. They are applied to the traffic as it passes across the hypervisor, and often no additional visible network hops are created.
Again, there are some drawbacks to this approach. In general, the range of functionality that's embedded in virtualization platforms is far smaller than with the other options. Where organizations have benefitted from the advanced functions and programmability often offered by third-party suppliers, integrated solutions can feel decidedly limited. Additionally, a virtualization platform often creates a degree of vendor lock-in, given that configurations are not easily portable between different platforms or into a different supplier's public cloud.
The Model That’s Right for You
So how do you know which model is right for your organization? It's the same decision process we've faced time and again. First, understand your current and (as much as possible) future needs before assessing the benefits and drawbacks of each model. Work with your key vendors to get a realistic picture of how their solutions will work for you, then test and pilot as much as is feasible. After all, the great benefit of the software-defined data center is the ability to rapidly deploy, test, and destroy.