Tips for Simplifying Your Cloud Network
Any data center, cloud or otherwise, depends on its Ethernet network to allow servers, storage systems and other devices to talk to each other. No network means no data center, writes Brian Yoshinaka of Intel.
December 14, 2011
Brian Yoshinaka is a marketing programs manager in Intel’s LAN Access Division, where he works with customers and partners to promote Intel Ethernet solutions. You can learn more by reading Brian’s blog on new network technologies and products.
“Ethernet is the backbone of the Cloud.”
Bold statement? Not at all. Any data center, cloud or otherwise, depends on its Ethernet network to allow servers, storage systems and other devices to talk to each other. No network means no data center.
Today, as IT departments prepare to deploy internal cloud environments, it's important to evaluate how network infrastructure choices will impact the cloud’s ability to meet its service level agreements (SLAs). Terms commonly used to describe cloud-computing capabilities, such as agility, flexibility and scalability, should absolutely apply to the underlying network as well.
With that in mind, let's take a look at some recommendations for simplifying a private cloud network. You can consider this post a CliffsNotes version of a white paper Intel completed recently.
Consolidate Ports and Cables
Most cloud environments are heavily virtualized, and virtualization has been a major driver of growing server bandwidth needs. Today, it’s common to see virtualized servers sporting eight or more Gigabit Ethernet (GbE) ports. That, of course, means a lot of cabling, network adapters and switch ports. Consolidating the traffic of those GbE connections onto just a couple of 10 Gigabit Ethernet (10GbE) connections simplifies network connectivity while lowering equipment costs, reducing the number of possible failure points and increasing the total amount of bandwidth available to the server.
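To put rough numbers on that, here’s a quick back-of-the-envelope sketch in Python comparing an eight-port GbE design with a two-port 10GbE design. It assumes one cable and one switch port per adapter port; the figures are illustrative, not measurements from any particular deployment.

```python
# Back-of-the-envelope comparison of an 8 x 1GbE server design versus a
# consolidated 2 x 10GbE design. Counts and bandwidth figures are
# illustrative assumptions, not data from a specific deployment.

def describe(name, ports, gbps_per_port):
    cables = ports                 # assume one cable per adapter port
    switch_ports = ports           # assume one switch port per cable
    bandwidth = ports * gbps_per_port
    print(f"{name}: {ports} adapter ports, {cables} cables, "
          f"{switch_ports} switch ports, {bandwidth} Gbps aggregate")

describe("Legacy    (8 x 1GbE) ", 8, 1)
describe("Converged (2 x 10GbE)", 2, 10)
```

Fewer ports and cables to buy, connect and troubleshoot, yet more than twice the aggregate bandwidth.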
Converge Data and Storage onto Ethernet Fabrics
10GbE’s support for storage technologies, such as iSCSI and Fibre Channel over Ethernet (FCoE), takes network consolidation a step further by converging storage traffic onto Ethernet. Doing so eliminates the need for storage-specific server adapters and infrastructure equipment. IT organizations can combine LAN and SAN traffic onto a single network or maintain a separate Ethernet-based storage network. Either way, they’ve made it easier and more cost-effective to connect servers to network storage systems, reduced equipment costs and increased network simplicity.
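As one small, hypothetical illustration of how little is involved once storage rides the Ethernet fabric, the Python sketch below drives the standard open-iscsi command-line tools on a Linux host to discover and log in to an iSCSI target. The portal address is a placeholder, and the sketch assumes the open-iscsi initiator is installed; treat it as a sketch, not a production script.

```python
# Minimal sketch: discover and log in to an iSCSI target over the converged
# Ethernet fabric using the open-iscsi command-line tools on Linux.
# The portal address below is a placeholder.
import subprocess

PORTAL = "192.168.10.20"  # placeholder storage portal IP

def run(cmd):
    print("+", " ".join(cmd))
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

# Discover targets exported by the portal (sendtargets discovery).
print(run(["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", PORTAL]))

# Log in to the discovered targets on this portal.
run(["iscsiadm", "-m", "node", "-p", PORTAL, "--login"])
```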
Maximize I/O Virtualization Performance and Flexibility
Once you have a 10GbE-unified network connecting your cloud resources, you need to make sure you’re using those big pipes effectively. Physical servers can host many virtual machines (VMs), and it’s important to make sure bandwidth is allocated and balanced properly among those VMs. There are different methods for dividing a 10GbE port into smaller, virtual pipes, but they’re not all created equal. Some allow those virtual pipes to scale and use the available bandwidth of the 10GbE connection as needed, while others assign a static amount of bandwidth per virtual function, limiting elasticity and leaving capacity unused in exactly the situations where it’s needed most.
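The difference is easiest to see with a toy model. The Python sketch below divides a 10GbE port among four VMs two ways: static per-VM slices versus flexible sharing up to the port’s capacity. The demand figures are made up purely for illustration.

```python
# Toy model contrasting two ways of dividing a 10GbE port among four VMs:
# fixed per-VM slices versus flexible sharing capped only by the port total.
# Demand numbers are invented for illustration.
PORT_GBPS = 10.0
demands = {"vm1": 6.0, "vm2": 1.0, "vm3": 0.5, "vm4": 0.5}  # instantaneous demand (Gbps)

# Static partitioning: each VM gets a fixed 2.5 Gbps slice, used or not.
static_alloc = {vm: min(d, PORT_GBPS / len(demands)) for vm, d in demands.items()}

# Flexible sharing: VMs draw what they need, scaled down only if the port is oversubscribed.
scale = min(1.0, PORT_GBPS / sum(demands.values()))
flexible_alloc = {vm: d * scale for vm, d in demands.items()}

print("static  :", static_alloc, "-> delivered", sum(static_alloc.values()), "Gbps")
print("flexible:", flexible_alloc, "-> delivered", sum(flexible_alloc.values()), "Gbps")
```

In this made-up scenario the static scheme delivers 4.5 Gbps and starves the busy VM while more than half the port sits idle; the flexible scheme delivers the full 8 Gbps of demand.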
Enable a Solution That Works with Multiple Hypervisors
It’s likely that most cloud deployments will consist of hardware and software, including hypervisors, from multiple vendors. Different hypervisors take different approaches to I/O virtualization, and it’s important that network solutions optimize I/O performance for those various software platforms; inconsistent throughput in a heterogeneous environment could result in bottlenecks that impact the delivery of services. You can avoid this and improve network performance across most major hypervisors by using Ethernet adapters that support both Virtual Machine Device Queues (VMDq) and Single Root I/O Virtualization (SR-IOV).
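On Linux hosts, one quick way to see whether an adapter exposes SR-IOV virtual functions is to look at sysfs. The Python sketch below does just that; it assumes a reasonably recent Linux kernel, and the interface name is a placeholder.

```python
# Minimal sketch: check whether a Linux network interface exposes SR-IOV
# virtual functions via sysfs, and report how many are supported and enabled.
# Assumes a reasonably recent Linux kernel; "eth0" is a placeholder name.
from pathlib import Path

IFACE = "eth0"  # placeholder interface name
dev = Path(f"/sys/class/net/{IFACE}/device")

def read_int(name):
    p = dev / name
    return int(p.read_text()) if p.exists() else None

total = read_int("sriov_totalvfs")   # VFs the adapter can expose
current = read_int("sriov_numvfs")   # VFs currently enabled

if total is None:
    print(f"{IFACE}: no SR-IOV capability reported")
else:
    print(f"{IFACE}: {current} of {total} virtual functions enabled")
```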
Utilize Quality of Service for Multi-Tenant Networking
Like a public cloud, a private cloud provides services to many different clients, ranging from internal business units and departments to customers, and they all have performance expectations of the cloud. Quality of Service (QoS) helps ensure that those requirements are met. Technologies are available that provide QoS both on the network and within a physical server. QoS between devices on the network is delivered by Data Center Bridging (DCB), a set of standards that defines how bandwidth is allocated to specific traffic classes and how those policies are enforced. For traffic between virtual machines within a server, QoS can be controlled in either hardware or software, depending on the hypervisor. When choosing a network adapter for your server, consider whether it supports both types of QoS.
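To make the DCB idea concrete, here’s a toy Python sketch of the kind of bandwidth-allocation policy DCB’s Enhanced Transmission Selection applies: each traffic class is guaranteed a share of the 10GbE link under contention. The class names and percentages are illustrative assumptions, not a recommended policy.

```python
# Toy illustration of a DCB-style bandwidth-allocation policy: each traffic
# class is guaranteed a share of the 10GbE link when the link is congested.
# Class names and percentages are illustrative assumptions only.
LINK_GBPS = 10

ets_policy = {
    "lan":        50,   # general VM/tenant traffic
    "fcoe_iscsi": 40,   # converged storage traffic
    "management": 10,   # hypervisor and control traffic
}

assert sum(ets_policy.values()) == 100, "shares should account for the full link"

for traffic_class, pct in ets_policy.items():
    guaranteed = LINK_GBPS * pct / 100
    print(f"{traffic_class:<11} {pct:>3}%  -> guaranteed {guaranteed:.1f} Gbps under contention")
```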
Again, keep in mind that these are high-level summaries of the recommendations. The white paper I summarized goes into much greater detail on the how and why behind each one.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.