Nvidia Pitches DPUs, Yet Another Way to Free Up the Data Center CPU
Its Data Processing Unit is designed to offload the work of managing virtualized data center infrastructure from the CPU.
Nvidia’s specialty has been relieving the burden on a computer’s CPU, be it a PC under a desk in a gamer’s bedroom or a server inside a data center used for training machine-learning models.
Monday at GTC, Nvidia’s annual technology conference (held virtually this year), the company fleshed out a strategy for offloading more computing burden from the server CPU, leaving more of the CPU’s horsepower for its primary application workload.
On the hardware side of the strategy are what Nvidia calls DPUs, or Data Processing Units, which are PCIe cards with networking and processing capacity – a lot of it. On the software side is DOCA, a software development kit programmers can use to write applications that take advantage of the DPU’s capabilities.
With much of the data center infrastructure virtualized, server CPUs are tasked with handling server virtualization, software-defined networking, storage management, and security. Most of these functions used to be handled by dedicated hardware appliances.
All the virtualization means the “data center is the new unit of computing,” Nvidia CEO Jensen Huang said in a pre-recorded GTC keynote video. “But all the data center infrastructure processing in software is a huge tax on CPUs.”
The DPU is meant for offloading all the infrastructure management tasks from the CPU, tasks that can consume as much as 30 percent of CPU cores in a typical data center, he said. “The DPU is a data center infrastructure processing chip.”
Nvidia’s DPUs are based on SmartNIC cards by Mellanox, the high-performance networking specialist it acquired for $7 billion last year; CPU cores designed by Arm, the British chip designer that agreed just last month to be acquired by Nvidia for $40 billion; and Nvidia’s own GPU accelerators.
Nvidia rolled out the first two DPUs on the product roadmap of this new data center strategy Monday: BlueField-2 and BlueField-2X.
When done in software, IPsec encryption, regular-expression matching, packet pacing, and elastic storage combined consume some 125 x86 CPU cores to run at 100 Gigabits per second, Huang said. All of that work can be offloaded to BlueField-2, which some Nvidia customers are already sampling, he said.
Some basic Nvidia BlueField-2 DPU specs:
6.9 billion transistors
Eight 64-bit Arm CPU cores
Dual 16-way VLIW engine
BlueField-2 DPU performance, according to Nvidia:
100 Gbps IPsec
50 Gbps regular-expression matching
100 Gbps video streaming
5 million NVMe IOPS
BlueField-2X has everything that’s in BlueField-2, plus an Nvidia Ampere GPU, enabling AI functionality that can be applied to security, network, and storage management.
For example, machine learning could be used to identify abnormal traffic on the network, which could signal a breach or a breach attempt. But that compute-heavy AI workload would be handled entirely on the DPU.
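As a rough sketch of that idea, the Python snippet below trains a simple anomaly detector on flow-level traffic features using scikit-learn’s IsolationForest. The feature names, numbers, and model choice are invented for illustration – this is not Nvidia’s software, and on a BlueField-2X the equivalent inference would run on the card’s own GPU rather than on the host CPU.

    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)

    # Synthetic "normal" flow records: bytes/sec, packets/sec, distinct dest ports.
    normal = rng.normal(loc=[5e5, 400.0, 3.0],
                        scale=[1e5, 50.0, 1.0],
                        size=(10_000, 3))

    # A few hypothetical suspicious flows: low volume, high packet rate,
    # fanning out to many ports (the shape of a port scan).
    suspect = rng.normal(loc=[5e4, 5_000.0, 200.0],
                         scale=[1e4, 500.0, 20.0],
                         size=(20, 3))

    # Fit on normal traffic only, so the forest learns what "typical" looks like.
    detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)

    # predict() returns -1 for outliers and 1 for inliers.
    print(detector.predict(suspect))  # expected: mostly, if not all, -1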
BlueField-2X supports CUDA, Nvidia’s collection of software libraries for building applications that take advantage of its GPU accelerators.
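For a sense of what that programming model looks like, below is a minimal, generic GPU kernel written in Python with the third-party numba library’s CUDA bindings. It is ordinary CUDA-style data parallelism, not DPU-specific code, and it assumes a machine with a CUDA-capable GPU and numba installed.

    import numpy as np
    from numba import cuda

    @cuda.jit
    def vector_add(a, b, out):
        # Each GPU thread computes one element of the result.
        i = cuda.grid(1)
        if i < out.size:
            out[i] = a[i] + b[i]

    n = 1 << 20
    a = np.random.rand(n).astype(np.float32)
    b = np.random.rand(n).astype(np.float32)
    out = np.zeros_like(a)

    threads_per_block = 256
    blocks = (n + threads_per_block - 1) // threads_per_block

    # numba transparently copies the NumPy arrays to and from the GPU.
    vector_add[blocks, threads_per_block](a, b, out)
    assert np.allclose(out, a + b)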
BlueField-2X is still under development, but Nvidia expects servers with both BlueField-2 and BlueField-2X DPUs to come to market next year. The company said it will launch BlueField-3 in 2022 and BlueField-4 in 2023.
Asus, Atos, Dell Technologies, Fujitsu, Gigabyte, H3C, Inspur, Lenovo, Quanta/QCT, and Supermicro all plan to integrate the DPUs into their servers, according to Nvidia.
Major enterprise Linux distributions expected to support Nvidia DPUs are Canonical’s Ubuntu Linux and Red Hat Enterprise Linux. Red Hat also plans to support BlueField-2 on OpenShift, its Kubernetes-based container platform.
Check Point Software Technologies is integrating the new hardware into its cybersecurity technologies, Nvidia said.
VMware and Nvidia Rethink the Data Center
One Nvidia partnership that’s sure to put BlueField on many enterprise data center operators’ radars is with VMware. As part of its Project Monterey, announced last week, VMware is working with Nvidia to offload not just networking, security, and storage tasks to the DPU but also the hypervisor itself.
While VMware and Nvidia have worked together for 14 years, the partnership has traditionally been around virtual desktop infrastructure (VDI) use cases for Nvidia graphics cards, Kit Colbert, VP and CTO of VMware’s cloud business unit, told DCK in an interview.
Now the partnership is expanding to go after a much bigger opportunity: devising a whole new data center architecture. That’s ultimately the ambition behind both Nvidia’s DPU strategy and VMware’s Project Monterey.
That new architecture is needed because of AI, whose computing requirements are vastly different from what traditional architectures were designed to deliver.
“Now software can write software,” Huang said. “So, AI is the automation of automation.” But “AI requires a whole reinvention of computing, full stack rethinking.”
VMware’s role here is to make AI infrastructure more palatable for enterprise IT shops, where today it’s siloed, managed with its own dedicated tools, Colbert explained. VMware wants to expose the capabilities of Nvidia hardware in vSphere, its server virtualization software that acts as the data center management plane for most of the world’s enterprises.