Solving the Data Center Paradox: Enabling Higher Performance at a Lower Cost
Mobile users expect to be able to access their required information at any time and from anywhere, which means that massive amounts of data must be analyzed in real-time, writes Motti Beck of Mellanox Technologies Inc.
March 24, 2015
Motti Beck is Director of Enterprise Market Development at Mellanox Technologies Inc. Follow Motti on Twitter: @MottiBeck.
For the last decade, virtualization technology has proven itself and has become the most effective way to increase data center efficiency. While initially most of the effort went into server virtualization, the recent focus has been on developing advanced interconnect technologies that enable more efficient data communication between servers and storage. This trend is a direct outcome of the adoption of innovative data center architectures, such as scale-out computing, in-memory compute, and Solid State Drive-based storage systems, that depend heavily on the functionality and performance the interconnect can deliver.
Scale-out architectures actually aren't new. They appeared more than a decade ago in High Performance Computing (HPC), when the industry realized that distributed systems built from standard off-the-shelf servers delivered much higher performance and enabled a reduction in Total Cost of Ownership (TCO). Later, scale-out architectures moved into data centers, first in distributed database systems and then in cloud infrastructures, where faster east-west traffic is required for better communication between virtual machines (VMs).
On the storage side, the amount of data that must be processed in real-time and stored continues to grow, and scaling traditional SAN-based systems has become almost impossible because of the cost and the rapidly growing complexity involved. Thus, as on the compute side, scale-out storage architecture has proven to be the right answer. Furthermore, to support mobile users and give them real-time access to the cloud, in-memory processing has become more popular, allowing organizations to leverage the growing capacity and falling cost of Solid State Drives (SSDs).
Innovative networking technology providers have developed interconnect products that address these emerging market trends and enable maximum efficiency of virtualized data centers at lower cost. Developed in close cooperation with hyperscale cloud and Web 2.0 partners over the last several years, these solutions have delivered significant improvements in Return on Investment (ROI). Networking innovation has not only increased data communication speeds from 10GbE to 100GbE and reduced latency to a few hundred nanoseconds, but has also added "offload" engines that execute complex functions directly on the input/output (IO) controller. Offloads minimize Central Processing Unit (CPU) overhead, leaving far more CPU time available to applications and improving overall system efficiency.
Vendors have also recently introduced a programmable NIC (Network Interface Card) with an application acceleration engine based on a high-performance IO controller, which gives users maximum flexibility to bring their own customized protocols. All of these innovations improve reliability, data processing speed, and real-time responsiveness, and lower the TCO of virtualized data centers.
Virtual Desktop Infrastructure (VDI) is a good example of the efficiency and cost savings that can be gained from high-performance end-to-end interconnects. VDI efficiency is measured by the maximum number of virtual desktop users the infrastructure can support; the more users each server hosts, the lower the cost per user. A good example is the record number of virtual desktops per server achieved using a 40GbE NIC.
As previously mentioned, interconnect performance is just one dimension that continues to improve. The other is embedding engines that offload complex jobs from the expensive CPU to the IO controller, resulting in higher reliability, faster job completion, and predictable real-time response, which is extremely important to mobile users. A good example is the Remote Direct Memory Access (RDMA) engine, which offloads communication tasks from the CPU to the IO controller. InfiniBand was the first transport to support RDMA, and the industry later standardized the capability over converged Ethernet as RoCE (RDMA over Converged Ethernet). RDMA has also been adopted by the storage industry, in Microsoft's SMB Direct protocol and in iSER (iSCSI Extensions for RDMA).
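To make the offload concrete, here is a minimal sketch of a one-sided RDMA write using the standard Linux verbs API (libibverbs). It is illustrative only: the helper name rdma_write_example is hypothetical, and a real application would also need queue-pair setup (for example via rdma_cm), completion polling, and error handling.

```c
/* Minimal sketch: post a one-sided RDMA write with libibverbs.
 * Assumes a connected queue pair (qp) and protection domain (pd)
 * already exist, and that the peer's buffer address and rkey were
 * exchanged out of band. */
#include <infiniband/verbs.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical helper; returns 0 on success. */
int rdma_write_example(struct ibv_pd *pd, struct ibv_qp *qp,
                       void *buf, size_t len,
                       uint64_t remote_addr, uint32_t rkey)
{
    /* Register the local buffer so the NIC can DMA it directly. */
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, len, IBV_ACCESS_LOCAL_WRITE);
    if (!mr)
        return -1;

    struct ibv_sge sge = {
        .addr   = (uint64_t)(uintptr_t)buf,
        .length = (uint32_t)len,
        .lkey   = mr->lkey,
    };

    struct ibv_send_wr wr, *bad_wr = NULL;
    memset(&wr, 0, sizeof(wr));
    wr.opcode              = IBV_WR_RDMA_WRITE; /* one-sided: no remote CPU */
    wr.sg_list             = &sge;
    wr.num_sge             = 1;
    wr.send_flags          = IBV_SEND_SIGNALED;
    wr.wr.rdma.remote_addr = remote_addr;       /* peer's registered buffer */
    wr.wr.rdma.rkey        = rkey;              /* peer's memory key */

    /* The NIC's RDMA engine moves the data; the host CPU posts one
     * descriptor and is then free to run application code. */
    return ibv_post_send(qp, &wr, &bad_wr);
}
```

Note that the data path never enters the kernel or touches the remote CPU, which is exactly where the offload savings come from.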
The performance and efficiency gains are significant when hypervisors access storage over RoCE, which outperforms traditional Ethernet communication. One of the best examples is Fluid Cache for SAN, which cuts storage access latency by 99 percent and enables four times more transactions per second and six times more concurrent users. Furthermore, VDI running over 10GbE with RDMA can handle 140 concurrent virtual desktops, compared with only 60 over 10GbE without RDMA. This, of course, translates into significant CapEx and OpEx savings. One of the most notable deployments of RDMA is in Microsoft Azure, where 40GbE with RoCE is used to access storage at record speed while consuming zero CPU cycles, achieving massive cost of goods sold (COGS) savings.
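As a back-of-the-envelope check on those VDI numbers, the small program below computes the per-seat server cost implied by 140 versus 60 desktops per server. The $10,000 server price is a hypothetical placeholder, not a figure from the article.

```c
/* Per-seat CapEx implied by VDI density, under an assumed server price. */
#include <stdio.h>

int main(void)
{
    const double server_cost = 10000.0;  /* assumed, for illustration only */
    const int desktops_rdma  = 140;      /* 10GbE with RDMA (from article) */
    const int desktops_plain = 60;       /* 10GbE without RDMA (from article) */

    printf("Cost per desktop, plain 10GbE: $%.2f\n", server_cost / desktops_plain);
    printf("Cost per desktop, 10GbE+RDMA:  $%.2f\n", server_cost / desktops_rdma);
    /* Roughly $166.67 vs. $71.43: about 57 percent lower server CapEx
     * per seat, before counting power, cooling, and licensing. */
    return 0;
}
```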
Network virtualization is a relatively new market trend that requires support for overlay network standards such as VXLAN, NVGRE and the emerging Geneve. These standards are a must in any modern cloud deployment, but they put a heavy load on server resources, which directly affects performance and prevents the system from exchanging data at full speed. Advanced NICs that include VXLAN and NVGRE offload engines remove these data communication bottlenecks and maximize CPU utilization, so expensive resources in the system no longer sit idle, which improves ROI.
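To see why overlay encapsulation taxes the CPU, consider what VXLAN adds in front of every guest packet. The sketch below shows the 8-byte VXLAN header defined in RFC 7348; together with the outer Ethernet, IPv4, and UDP headers it adds 50 bytes per packet, and without a tunnel-aware NIC the host also loses inner-packet checksum and segmentation offloads.

```c
/* VXLAN encapsulation overhead (field layout per RFC 7348).
 * Every guest frame gains an outer Ethernet + IPv4 + UDP
 * (destination port 4789) + VXLAN header on the wire. */
#include <stdint.h>

struct vxlan_header {
    uint8_t flags;         /* 0x08: "I" bit set, VNI field is valid */
    uint8_t reserved1[3];
    uint8_t vni[3];        /* 24-bit VXLAN Network Identifier */
    uint8_t reserved2;
};                          /* 8 bytes total */

/* Added per packet with IPv4 outer headers:
 * 14 (Ethernet) + 20 (IPv4) + 8 (UDP) + 8 (VXLAN) = 50 bytes,
 * all of which the CPU must build and parse unless the NIC offloads it. */
```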
Mobile users expect to be able to access their required information at any time and from anywhere, which means that massive amounts of data must be analyzed in real-time. This requires adopting new scale-out architectures in which the system efficiency heavily depends on interconnect technologies that pave the way toward achieving these goals.