Data Center SDN: Comparing VMware NSX, Cisco ACI, and Open SDN Options
What to keep in mind when evaluating SDN options for the ever-important networking layer
The data center network layer is the engine that carries some of your most important business data. Applications, users, specific services, and even entire business segments are all tied to network capabilities and delivery architectures. And with all the growth around cloud, virtualization, and the digital workspace, the network layer has become even more important.
Most of all, we’re seeing more intelligence and integration taking place at the network layer. The biggest evolution in networking includes integration with other services, the integration of cloud, and network virtualization. Let’s pause there and take a brief look at that last concept.
Software-defined networking, or the abstraction of the network’s control plane from its data (forwarding) plane, gives administrators a completely new way to manage critical networking resources. For a more in-depth explanation of SDN, see one of my recent Data Center Knowledge articles.
The business momentum behind the technology is significant. IDC recently forecast that the worldwide SDN market, comprising physical network infrastructure, virtualization/control software, SDN applications (including network and security services), and professional services, will grow at a compound annual rate of 53.9% from 2014 to 2020, reaching nearly $12.5 billion in 2020.
As IDC points out, although SDN initially found favor in hyperscale data centers or large-scale cloud service providers, it is winning adoption in a growing number of enterprise data centers across a broad range of vertical markets, especially for public and private cloud rollouts.
"Large enterprises are now realizing the value of SDN in the data center, but ultimately, they will also recognize its applicability across the WAN to branch offices and to the campus network," said Rohit Mehra, VP, Network Infrastructure, at IDC.
“While networking hardware will continue to hold a prominent place in network infrastructure, SDN is indicative of a long-term value migration from hardware to software in the networking industry. For vendors, this will portend a shift to software- and service-based business models, and for enterprise customers, it will mean a move toward a more collaborative approach to IT and a more business-oriented understanding of how the network enables application delivery," said Brad Casemore, Director of Research for Data Center Networking at IDC.
There are several vendors offering a variety of flavors of SDN and network virtualization, so how are they different? Are some more open than others? Here's a look at some of the key players in this space.
VMware NSX. VMware already virtualizes your servers, so why not virtualize the network too? NSX integrates security, management, VM control, and a host of other network functions directly into your hypervisor. From there, you can build an entire networking architecture in software, including L2, L3, and even L4-7 services, up to full distributed logical architectures spanning L2-L7. These services can then be provisioned programmatically as VMs are deployed and as services are required within those VMs. The goal of NSX is to decouple the network from the underlying hardware and deliver optimized networking services directly to the VM. From there, micro-segmentation becomes a reality, application continuity improves, and you can integrate with additional security services.
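To make the "provisioned programmatically" point concrete, here is a minimal sketch of what driving NSX from an automation script can look like. It is illustrative only: the manager address, credentials, and transport zone ID are made up, and the JSON-style path shown resembles the newer NSX REST interface, while older NSX for vSphere releases use an XML API, so check the API guide for your version.

```python
# Minimal sketch: creating a logical (virtual) switch through an NSX-style REST API.
# Manager address, credentials, and transport zone ID are hypothetical placeholders.
import requests

NSX_MANAGER = "https://nsx-manager.example.local"   # hypothetical manager address
AUTH = ("admin", "s3cr3t")                          # hypothetical credentials

payload = {
    "display_name": "web-tier-segment",    # segment the web-tier VMs will attach to
    "transport_zone_id": "tz-overlay-01",  # hypothetical overlay transport zone
    "admin_state": "UP",
}

# POST the definition; NSX programs the overlay across hypervisors, so the
# segment follows the VMs rather than being tied to a physical switch port.
resp = requests.post(
    f"{NSX_MANAGER}/api/v1/logical-switches",
    json=payload,
    auth=AUTH,
    verify=False,  # lab-only: skip certificate validation
)
resp.raise_for_status()
print("Created logical switch:", resp.json().get("id"))
```

The point is less the specific call and more that the network segment becomes an API object your provisioning tooling can create and tear down right alongside the VMs.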
Use cases and limitations. The only way you can truly leverage NSX is if you’re running the VMware hypervisor. From there, you can control East-West routing, the automation of virtual networks, routing/bridging services for VMs, and other core networking functions. If you’re a VMware shop hosting a large number of VMs and are caught up in the complexities of virtual network management, you absolutely need to look at NSX. However, there are some limitations. First of all, your levels of automation are limited to virtual networks and virtual machines. There’s no automation for physical switches. Furthermore, some of the L4-L7 advanced network services are delivered through a closed API, and might require additional licensing. Ultimately, if you’re focused on virtualization and your infrastructure of choice revolves around VMware, NSX may be a great option. With that in mind, here are two more points to be aware of: If you have a super simple VMware deployment with little complexity, you’ll probably have little need for NSX. However, if you have a sizeable VM architecture with a lot of VMware networking management points, NSX can make your life a lot easier.
Big Switch Networks. Welcome to the realm of open SDN. These architectures provide more options and even support white box (and brite box) solutions. Big Switch has a product called Big Cloud Fabric, which it built using open networking (white box or brite box) switches and SDN controller technology. Big Cloud Fabric is designed to meet the requirements of physical, virtual, cloud, and/or containerized workloads. That last part is important: Big Switch is one of the first SDN vendors to specifically design networking services for containerized microservices. Here’s another cool part: BCF supports multiple hypervisor environments, including VMware vSphere, Microsoft Hyper-V, KVM, and Citrix XenServer. Within a fabric, both virtualized servers and physical servers can be attached for complete workload flexibility. For cloud environments, BCF offers OpenStack support for the Red Hat and Mirantis distributions. You can also integrate it all with Dell Open Networking switches.
Use cases and limitations. Even though it supports other hypervisors, the biggest benefits come from the integration with VMware’s NSX. BCF interoperates with the NSX controller, providing enhanced physical network visibility to VMware network administrators. Furthermore, you can leverage the full power of your white (brite) box switches and extend those services throughout your virtualization ecosystem and into the cloud via OpenStack. That being said, it’s important to understand where this technology can and should be deployed. If you’re a service provider, cloud host, or a massively distributed organization with complex networks, working with a new kind of open SDN technology could make sense. First of all, you can invest in commodity switches with confidence, since the software controlling them is powerful. Secondly, you’re not locked in by any vendor, and your entire networking control layer is extremely agile. However, it won’t be a perfect fit for everybody. Arguably, you can create a “one throat to choke” architecture here, but it won’t be quite as clean as buying from a single networking vendor. You are potentially trading off open versus proprietary technologies, and you need to ask yourself: “What’s best for my business and for my network?” If you’re an organization focused on growth, your business, and your users, and you simply don’t have the time or desire to work with open SDN technologies, this may not be the platform for you. There will be a bit of a learning curve as you step away from traditional networking solutions.
Cumulus Linux. This has been an amazing technology to follow and watch gain traction. (Please note that there are many SDN vendors creating next-generation networking capabilities built around open and proprietary technologies. Cumulus Linux is included here as an example and to show just how far SDN systems have come.) The architecture is built around native Linux networking, giving you the full range of networking and software capabilities available in Debian, but supercharged ... of course! Switches running Cumulus Linux provide standard networking functions such as bridging, routing, VLANs, MLAG, IPv4/IPv6, OSPF/BGP, access control, VRF, and VXLAN overlays. But here’s the cool part: Cumulus Linux can run on “bare-metal” network hardware from vendors like Quanta, Accton, and Agema, so customers can purchase hardware at a far lower cost than the incumbents charge. Furthermore, hardware running Cumulus Linux can run right alongside existing systems, because it uses industry-standard switching and routing protocols. Hardware vendors like Quanta are now making a direct impact on the commodity hardware conversation. Why? They can provide vanity-free servers and networking options capable of supporting a much more commoditized data center architecture.
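Because Cumulus Linux is essentially Debian running on the switch, configuration can be automated with ordinary scripting tools rather than a vendor-specific controller. Here is a hedged sketch that drives the NCLU "net" command set from Python to stage and commit a VLAN change; the VLAN number and port name are made-up examples, and exact NCLU syntax varies by release, so treat it as an illustration only.

```python
# Minimal sketch: automating a Cumulus Linux switch with its NCLU "net" CLI.
# VLAN 20 and port swp1 are hypothetical; verify NCLU syntax for your release.
import subprocess

def net(*args):
    """Run an NCLU command on the switch and fail loudly if it errors."""
    subprocess.run(["net", *args], check=True)

# Stage the changes: add VLAN 20 to the bridge and make swp1 an access port in it.
net("add", "bridge", "bridge", "vids", "20")
net("add", "interface", "swp1", "bridge", "access", "20")

# NCLU stages changes until you commit, which also gives you a rollback point.
net("commit")
```

Because the switch is just Linux, the same job could equally be done with Ansible, Puppet, or a plain shell script, which is a large part of the appeal.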
Use cases and limitations. Today, the technology supports Dell, Mellanox, Penguin, Supermicro, EdgeCore, and even some Hewlett Packard Enterprise switches. Acting as an integration point or overlay, Cumulus gives organizations the ability to work with a powerful Linux-driven SDN architecture. There are a lot of places where this technology can make sense: integration into heavily virtualized systems (VMware), expansion into cloud environments (direct integration with OpenStack), controlling big data (zero-touch network provisioning for Hadoop environments), and a lot more. However, you absolutely need to be ready to take on this type of architecture. Get your support in order, make sure you have partners and professionals who can help you out, and ensure your business is ready to go this route. Although there are some deployments of Cumulus in the market, enterprises aren’t ripping out their current networking infrastructure to go completely open source and commodity. However, there is traction as more Linux workloads are deployed, more cloud services are utilized, and more open-source technologies are implemented.
Cisco Application Centric Infrastructure (ACI). At a very high level, ACI creates tight integration between physical and virtual elements. It uses a common policy-based operating model across ACI-ready network and security elements. Centralized management is handled by the Cisco Application Policy Infrastructure Controller, or APIC, which exposes a northbound API through XML and JSON and provides a command-line interface and GUI that use this API to manage the fabric. From there, network policies and logical topologies, which traditionally have dictated application design, are instead applied based on the application’s needs.
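Since APIC exposes that JSON/XML northbound API, the same policy objects you see in the GUI can be created from a script. Below is a hedged Python sketch that authenticates to APIC and posts a tenant object; the controller address, credentials, and tenant name are placeholders, and while the object structure follows ACI's documented REST model, you should verify the details against the APIC REST API guide for your release.

```python
# Minimal sketch: creating an ACI tenant through the APIC northbound REST API.
# APIC address, credentials, and tenant name are hypothetical placeholders.
import requests

APIC = "https://apic.example.local"          # hypothetical controller address
session = requests.Session()
session.verify = False                       # lab-only: skip certificate checks

# 1. Authenticate; APIC returns a session cookie that the Session object retains.
login = {"aaaUser": {"attributes": {"name": "admin", "pwd": "s3cr3t"}}}
session.post(f"{APIC}/api/aaaLogin.json", json=login).raise_for_status()

# 2. Post a tenant object under the policy universe ("uni"). In ACI, tenants,
#    application profiles, and EPGs express application intent; APIC renders
#    that policy onto the Nexus fabric.
tenant = {"fvTenant": {"attributes": {"name": "demo-tenant"}}}
session.post(f"{APIC}/api/mo/uni.json", json=tenant).raise_for_status()

print("Tenant 'demo-tenant' created (or already present).")
```

The same JSON objects sit underneath the GUI and CLI, which is what makes the "single point of automation" claim workable in practice.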
Use cases and limitations. This is a truly powerful model capable of abstracting the networking layer and integrating core services with your important applications and resources. With this kind of architecture, you can fully automate all virtual and physical network parameters through a single API. Furthermore, you can integrate with legacy workloads and networks to control that traffic as well. And yes, you can even connect non-Cisco physical switches and still get information about the actual device and what it’s handling. In addition, partnerships with other vendors allow for complete integrations. That said, there are some limitations. Obviously, the only way to get the full benefits from Cisco’s SDN solution is by working with the (not always inexpensive) Nexus line of switches. Furthermore, more functionality is enabled if you’re running the entire Cisco fabric in your data center. For some organizations, this can get expensive. However, if you’re leveraging Cisco technologies already and haven’t looked into ACI and the APIC architecture, you should.
See also: Why Cisco is Warming to Non-ACI Data Center SDN
As I mentioned earlier, there are many other SDN vendors that I didn’t get the chance to discuss, including:
Plexxi
Pica8
PLUMgrid
Embrane
Pluribus Networks
Anuta
And several others…
It’s clear that SDN is growing in importance as organizations continue to work with expanding networks and increasing complexity. The bottom line is this: there are evolving market trends and technologies that can deliver SDN in a way that fits your specific use case. It might simply make sense for you to work with more proprietary technologies when designing your solution. In other cases, deploying open SDN systems is what furthers your business and your use cases. Whichever way you go, always design around supporting your business and the user experience. Remember, all of these technologies are here to simplify your network, not make it more complex.