Is CXL the Answer to Data Center Performance Issues?
Data centers face numerous challenges, including issues surrounding memory. Can CXL solve these problems?
Undoubtedly, data centers — the facilities that house the IT infrastructure for storing and managing the data associated with an organization's products and services — are pivotal to the digital transformation wave currently sweeping across enterprises. The data centers of the past were mostly on-premises facilities that organizations ran themselves. Today, however, many organizations are shifting to the cloud, with Gartner estimating that 80% of enterprises will shut down their traditional data centers by 2025.
Nonetheless, the research firm also admits that "not everything is moving to the cloud," with some organizations preferring to use a mix of traditional data centers and the more modern hyperscale data centers run by popular cloud services providers like Google Cloud Platform, Amazon Web Services, IBM Cloud, Microsoft Azure, and Oracle Cloud Infrastructure.
But whether on-premises or in the cloud, today's data centers are rife with challenges, including high management costs, high latency, a sizable environmental impact from carbon emissions, and speed and memory allocation issues.
Could CXL be the answer to the challenges that come with large-scale deployments in data centers? Ronen Hyatt, founder and CEO at UnifabriX — the Israel-based company that claims it's "enabling data center operators to fully unlock their infrastructure's performance, density, and scale" — believes so. Hyatt, who helped create Intel's infrastructure processing unit (IPU), now being rolled out at Google's data centers, told DataCenterKnowledge that the entire data center market sees memory as operators' major pain point today: its cost, shortages, bandwidth challenges, memory stranding, and more.
"When you look critically into the cost, latency, speed, performance, and environmental challenges associated with data centers today, you'll discover that memory is really at the heart of these problems," he said.
To that end, Hyatt noted, UnifabriX developed its CXL-powered "smart memory node" solution to address those challenges head-on. But what exactly is CXL, and how does it work?
What Is CXL?
Compute Express Link (CXL) is an open interconnect standard for enabling efficient and coherent memory access between a host, such as a processor, and a device, such as a hardware accelerator or smart network interface card (NIC). The CXL standard aims to tackle what is known as the "von Neumann bottleneck," in which compute speed is limited by the rate at which a CPU can retrieve instructions and data from memory.
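To make the bottleneck concrete, here is a minimal roofline-style sketch in Python. The peak-compute and bandwidth figures are assumed purely for illustration, not drawn from any particular CPU:

```python
# Toy roofline model of the von Neumann bottleneck.
# All figures are assumed for illustration, not taken from any real CPU.

PEAK_FLOPS = 2.0e12     # assumed raw compute: 2 TFLOP/s
MEM_BANDWIDTH = 200e9   # assumed memory bandwidth: 200 GB/s

def attainable_flops(flops_per_byte: float) -> float:
    """Throughput for a kernel doing `flops_per_byte` of work per byte
    fetched: capped by compute or by memory, whichever is lower."""
    return min(PEAK_FLOPS, MEM_BANDWIDTH * flops_per_byte)

for intensity in (0.25, 1, 4, 16, 64):
    share = attainable_flops(intensity) / PEAK_FLOPS
    print(f"{intensity:>5} FLOP/byte -> {share:6.1%} of peak compute")

# Low-intensity (memory-bound) workloads leave most of the CPU idle;
# that is the gap extra memory bandwidth, such as CXL's, is meant to close.
```

In this toy model, a kernel doing 1 FLOP per byte fetched reaches only 10% of peak compute: the processor spends the rest of its time waiting on memory.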
CXL addresses this problem in several ways. It rethinks how memory is accessed and how data is shared among multiple computing nodes, and it helps disaggregate memory and accelerators, enabling data centers to become fully software-defined. "CXL technology could significantly influence future server architectures," said Aaron Lewis, an analyst at Omdia's Cloud and Data Center Research Practice. Specifically, it can reduce the memory cost in servers while still meeting capacity and bandwidth requirements.
Lewis also noted that a significant share of motherboard area currently goes to memory. With CXL memory disaggregation, memory can take on physical form factors similar to storage drives or PCIe cards. That could make server designs more compute-dense and limited primarily by thermals rather than by a lack of motherboard real estate.
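This disaggregation is already visible at the operating-system level: on Linux, CXL-attached memory typically surfaces as an additional, CPU-less NUMA node. The sketch below walks the standard sysfs node directory to list each node's capacity; whether a CXL node actually appears depends entirely on the hardware in place:

```python
# Sketch: enumerate NUMA nodes via standard Linux sysfs. On a CXL-equipped
# host, CXL-attached memory typically appears as an extra, CPU-less NUMA
# node; whether such a node exists depends on the hardware present.
from pathlib import Path

nodes = sorted(Path("/sys/devices/system/node").glob("node[0-9]*"),
               key=lambda p: int(p.name[4:]))
for node in nodes:
    # meminfo lines look like: "Node 0 MemTotal:  131072 kB"
    total_kb = next(int(line.split()[3])
                    for line in (node / "meminfo").read_text().splitlines()
                    if "MemTotal" in line)
    cpus = (node / "cpulist").read_text().strip()
    kind = f"CPUs {cpus}" if cpus else "CPU-less (memory expander, e.g., CXL)"
    print(f"{node.name}: {total_kb / 1024:,.0f} MiB, {kind}")
```

Because the expanded memory shows up through the same NUMA interfaces the OS already understands, existing placement tooling can use it without application changes.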
The Promises of CXL
Because CXL enables the creation of memory pools and the disaggregation of those pools from the processor, it allows "applications that are typically bound by memory limitations — like an in-memory database — to add a big chunk of memory, without having to also scale out by adding additional processors," Matt Bryson, analyst and senior vice president at Wedbush Securities, told DataCenterKnowledge. Further, by having a shared pool of memory, instead of larger local memory repositories, "organizations can potentially better allocate memory in line with typical application requirements, while using access to the pool when memory needs spike, thereby increasing utilization rates," he said.
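A toy calculation helps show why pooling lifts utilization. The server names and gigabyte figures below are invented purely to illustrate the stranding-versus-pooling trade-off Bryson describes:

```python
# Hypothetical stranding-vs-pooling arithmetic; server names and GB
# figures are invented solely for illustration.

LOCAL_GB = 256                                      # assumed DIMM capacity per server
peak_demand = {"db": 300, "web": 90, "batch": 140}  # assumed peak needs, GB

# Traditional design: each server carries its own fixed DIMMs.
stranded = sum(max(LOCAL_GB - need, 0) for need in peak_demand.values())
unmet = sum(max(need - LOCAL_GB, 0) for need in peak_demand.values())
fixed_total = LOCAL_GB * len(peak_demand)
print(f"Fixed DIMMs: {fixed_total} GB bought, "
      f"{stranded} GB stranded, {unmet} GB unmet at peak")

# Pooled design: smaller local footprint plus one shared pool, sized so
# local + pool covers even simultaneous peaks.
local = 128
pool = sum(peak_demand.values()) - local * len(peak_demand)
pooled_total = local * len(peak_demand) + pool
print(f"Pooled: {pooled_total} GB bought "
      f"({local} GB/server + {pool} GB shared pool), 0 GB stranded")
```

In this example, the fixed-DIMM design buys 768 GB yet strands 282 GB while still falling 44 GB short of one server's peak; the pooled design covers every peak with 530 GB total.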
Hyatt believes that memory isn't just the No. 1 contributor to the cost of running the data center infrastructure but is also the biggest contributor to the power it uses. "If you take server power consumption and you do a breakdown, you will find out in many cases — even in typical servers in a data center — that the memory within the server consumes more power than the CPU," he said.
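A rough back-of-the-envelope calculation shows how that breakdown can come out the way Hyatt describes. Every figure below is an assumption in a plausible range, not a measurement from any specific server:

```python
# Back-of-the-envelope check of the memory-vs-CPU power claim.
# All figures are assumptions in plausible ranges, not measurements.

CPU_TDP_W = 185       # assumed per-socket CPU TDP
SOCKETS = 2
DIMM_W = 12           # assumed draw of one loaded DDR5 RDIMM
DIMMS = 32            # assumed DIMM count in a memory-heavy server

cpu_power = CPU_TDP_W * SOCKETS
mem_power = DIMM_W * DIMMS
print(f"CPUs: {cpu_power} W, memory: {mem_power} W")
# -> 370 W vs 384 W: with enough DIMMs, the memory subsystem can
#    plausibly out-draw the CPUs, consistent with Hyatt's breakdown.
```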
Another promise of CXL is carbon footprint reduction in data centers across the globe — a promise that Hyatt claims UnifabriX's CXL-powered smart memory node solution is already bringing to reality.
UnifabriX's solution already enables shared memory pools, which isn't yet true of all CXL offerings, Bryson said. A chief differentiator, he noted, is that the company provides a full rack-level CXL solution rather than just one piece of what's needed to implement CXL. "Because they own everything within the rack, they should be better able to address some of the compatibility challenges in implementing new solutions versus certain competitors who might just offer a CXL network interface for instance," he explained.
The Future of CXL
While Hyatt agrees that CXL is a young technology that will need time to be fully adopted in the market, he said it will become a staple in the near future because it creates the opportunity for new architectures. Some market segments will be faster to adopt it, and others will follow, he said.
"I think it certainly becomes at least a standard for memory connectivity and that shared pools of memory become the norm — at least in hyperscale/cloud environments. It's just a question of timing," Bryson said. "Potentially, though, you could see CXL become the general standard for component connectivity within the rack (e.g., connecting processors as well as DRAM, NAND, etc.)."