Dell's Latest PowerEdge Servers Offer Muscle for High-Bandwidth Workloads
Enlists SAP to help make the case for new server models as faster processing engines for high-bandwidth data workloads
With processors no longer the reliable wellsprings of periodic performance boosts they were in the past, server manufacturers are looking to more specialized use cases. Today, Dell is announcing a revamped PowerEdge R930 with Intel’s newly announced Xeon E7 v4 series processors and a new PowerEdge R830 with recently announced Xeon E5 v4 processors.
In another era, new servers with new processors would have been the whole story. Today, a new product needs a use case to give it a boost. So Dell is positioning both new PowerEdge models as faster processing engines for high-bandwidth data workloads, and enlisting SAP to help make the case.
PowerEdge R930
The newly revised, top-of-the-line PowerEdge R930 runs Intel’s E7-8800 v4 series processors, said Brian Payne, executive director for PowerEdge marketing at Dell. The server “is the product we position as having the most demanding, data-intensive applications. It’s great for mission-critical databases that need that high performance and can also be an excellent platform for consolidating a lot of virtualized data-oriented workloads.”
The R930 maintains its 4U chassis with 10 PCIe slots, one RAID slot, and one network daughter card (NDC) slot. Support for up to 96 DIMMs enables as much as 12 TB of memory, as before. Models will also be available with E7-4800 v4 processors. But swapping out the E7-8800 v3 series for an E7-8800 v4 processor, Payne promises, will generate “world record performance” on SAP database workload benchmarks.
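The 12 TB maximum follows directly from the DIMM count. A back-of-the-envelope check, assuming 128 GB modules (the size implied by the article's 96-DIMM / 12 TB figures, not stated there directly):

```python
# Sanity-check the R930's maximum memory capacity.
# Assumption: 128 GB is the largest supported DIMM size (implied by
# the 96-DIMM / 12 TB figures, not stated in the article).
dimm_slots = 96
dimm_size_gb = 128

total_gb = dimm_slots * dimm_size_gb
total_tb = total_gb / 1024

print(f"{dimm_slots} DIMMs x {dimm_size_gb} GB = {total_gb} GB ({total_tb:.0f} TB)")
# 96 x 128 GB = 12288 GB (12 TB)
```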
It’s SAP that keeps track of benchmarks involving its own NetWeaver 7.31 solution stack and SAP HANA in-memory database, and it will be SAP that makes public the final results of the tests to which Payne refers. In May 2015, a PowerEdge R930 with four Xeon E7-8890 v3 processors scored 320,940 ad hoc navigation steps per hour on SAP’s business warehouse simulation, Enhanced Mixed Load (BW-EML), with a 1-billion-record test battery. The following September, a similar four-processor R930 scored 191,170 steps per hour, using the NetWeaver 7.40 solution stack and SAP HANA 1.0, with the heavier 2-billion-record battery. According to SAP, these were indeed the fastest scores reported until the E7 v4 processors arrived on the scene.
Though the final numbers had yet to be made public at the time of this writing, Dell’s estimate of the revamped R930’s improvement on the 2-billion-record battery would give it a score of 238,523 steps per hour, roughly a 25 percent gain over its v3 predecessor.
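The generation-over-generation uplift implied by those two figures works out as follows (both numbers are from the article; the math is just the relative difference):

```python
# Relative improvement implied by Dell's estimate on the 2-billion-record
# BW-EML battery, using the figures reported in the article.
v3_steps_per_hour = 191_170   # Xeon E7-8890 v3 result, September 2015
v4_estimate = 238_523         # Dell's estimate for the E7 v4 refresh

gain = (v4_estimate - v3_steps_per_hour) / v3_steps_per_hour
print(f"Estimated uplift: {gain:.1%}")
# roughly a 24.8% generation-over-generation improvement
```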
In the BW-EML test, simulated users generate synthetic queries by logging onto the Web client, performing about 40 ad hoc navigation steps, and logging back off. A new user gets added to the workload every second until the benchmark reaches a “high load phase,” which then continues running for at least one hour. Then the workloads are powered down slowly, and a steps-per-hour figure is calculated for the one-hour high-load phase.
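The scoring step described above can be sketched in a few lines: count the navigation steps completed during the measured high-load window and normalize to an hourly rate. This is an illustrative sketch of that calculation, not SAP's actual benchmark harness, and the user count below is invented for the example:

```python
# Illustrative sketch of a BW-EML-style score: navigation steps completed
# during the one-hour high-load phase, reported as steps per hour.
# Not SAP's actual harness; the 5,000-user figure is a made-up example.
STEPS_PER_SESSION = 40     # ad hoc navigation steps per simulated user
HIGH_LOAD_SECONDS = 3600   # the measured high-load phase lasts one hour

def steps_per_hour(completed_steps: int, window_seconds: int) -> float:
    """Normalize steps completed during the window to an hourly rate."""
    return completed_steps * 3600 / window_seconds

# Suppose 5,000 simulated users each finish a 40-step session during
# the high-load hour:
score = steps_per_hour(5_000 * STEPS_PER_SESSION, HIGH_LOAD_SECONDS)
print(f"{score:,.0f} steps per hour")  # 200,000 steps per hour
```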
Why is this important? With the laws of physics catching up to Intel’s once-steady cadence of processor performance gains, Moore’s Law appears to be facing a dead end after the Broadwell generation. Both Intel and server makers need all the help they can get in keeping up appearances, so they’ve resolved to define performance in more real-world terms and to demonstrate performance increases in contexts that real-world users can more readily appreciate.
“Larger core counts drive total cost of ownership down, and improving the per-core performance improves your response times,” said Lisa Spelman, Intel’s VP and general manager for Xeon and data center products, during a company presentation last March. “In 2015, we saw more than 80 percent of our top cloud service provider volume upgrading to higher-performing SKUs in our lineup. They move to higher core-count CPUs to get that better response time and greater TCO efficiency.”
PowerEdge R830
Spelman made that statement while unveiling her company’s Xeon E5 v4 series processors for two-socket systems. New four-socket E5 v4 SKUs are being put to use in Dell’s all-new PowerEdge R830. The R830 takes over the top of Dell’s 2U rack-mount, higher-performance line with a four-socket system built on Xeon E5-4600 v4 series processors, supporting up to 48 DIMMs.
“This product is positioned as an ideal combination of density and performance capability,” said Payne about the R830. “When you’re looking at somebody who doesn’t necessarily have a need for that top-end performance and scalability, or who has a space constraint, in some cases, they’re looking at a more dense solution. This is a product category that Dell created: a 2U four-socket delivering this level of density. We’ve had a tremendous amount of success with this product over time.”
“One of the greatest strengths of the Xeon E5 product line is that versatile workload performance across the widest range of workloads,” said Intel’s Spelman. “We’re delivering these increases in performance at the same power envelope from our previous generation, so you’re getting increases in your compute, your storage, and your networking of up to 44 percent.”
Scaling Up in a Scale-Out Era
Dell will be selling its new PowerEdge models into a data center market that is simultaneously being sold on the idea of more highly distributed workloads — specifically, making database operations more “liquid” and spreading them out across server nodes and cores to increase efficiency. That message — which is coming from the database and cloud communities — runs almost counter to Dell’s message, which paints a picture of huge bundles being managed adroitly by dense processor packages.
So whose picture of the data center is more realistic?
“We are absolutely, one hundred percent, behind and investing in scale-out architectures,” said Payne (whose employer is in the midst of purchasing EMC), “and third-platform, or new approaches to developing applications that are designed for scale-out. In fact, our legacy in the data center solutions phase of building out the largest, most efficient data centers in the world, where folks like Microsoft, Amazon, Facebook, etc., are building applications and database tiers the way you’ve described. We’ve optimized infrastructure in those environments, and will absolutely continue that in the future.
“That being said,” he continued, “there’s still some traditional applications which are going in a different direction, which still can benefit from scale-up or consolidation, based on the way that they’re operated.” Payne counted the style of workloads the SAP benchmark best simulates among the category best served by a scale-up architecture.
For this reason, he explained, external NAS arrays remain relevant; and performance boosts for workloads on these platforms should still be considered from a scale-up perspective — which has been the side of the proverbial bread that Intel has traditionally buttered.
But it’s those watchwords — “tradition,” “legacy,” and “data warehouse” — that are used more and more frequently to decorate the marketing messages for scale-up product cycles. While that strategy may work for now, the fundamental changes that are still taking place at the software platform level will inevitably compel both Dell and Intel to look for new avenues for Moore’s Law, or some other reliable “law” of performance boosting, to be exploited.