AMD Overhauls Its Epyc Server Processors, Aiming Once Again for the Sweet Spot

The Milan launch focused on supercomputing, but a chat with engineers shows that the company really wants the mid-range market back.

Scott Fulton III, Contributor

March 16, 2021

6 Min Read
Lisa Su, AMD president and CEO, speaking at the third-gen Epyc (Milan) server processor launch.

Rather than an upgrade to the previous generation, a completely new design lies behind what AMD is calling the third generation of its Zen server architecture. Reinvigorated by having doubled its revenue over the previous fiscal year, and having rethought its server processor design from the ground up, the company officially launched the third-generation AMD Epyc line (the Epyc 7003 series), codenamed “Milan,” on Monday.

“Our new Epyc processors extend our leadership in server CPUs, and not by a small amount,” declared Lisa Su, AMD’s CEO and the face of the company’s resurrection after its downward slide toward obscurity in 2012.

“Third-gen Epyc is simply the best server processor available,” she continued. “We deliver more performance, best-in-class scalability, and differentiated security. It actually increases our lead in overall performance-per-socket, enabling maximum compute density, and it also now delivers per-core performance leadership.”

Though the event was called a “launch,” AMD had already sold plenty of the highest-performing third-gen AMD Epyc SKUs in its new line to cloud service providers, and cloud platforms announced that they are already serving all-new VM instances based on the 7003 series to their own customers.

Azure VMs hosted on the AMD 7003 series are available in Microsoft’s new HBv3 instances; Google Cloud Platform announced similar support for its GKE Kubernetes-orchestrated workloads, running on C2D and N2D instances; and Oracle is making 7003-based hosts available for its new E4 compute instances.

On-Chip Key Management Expanded

Among the most anticipated new features of third-generation AMD Epyc is a set of security functions that, when combined, enable cryptographic isolation: the ability to encrypt and secure data on the bus, in memory, and in CPU caches.

AMD Milan’s on-chip key management, explained Noah Beck, AMD Fellow and senior SoC architect, will be used to encrypt the contents of DRAM as data moves to and from the DIMM memory modules. For the Secure Memory Encryption (SME) feature, “you can encrypt all the system memory with a single key, that’s generated on any reset,” he said. “So, from one reset to the next, you cannot see what the previous contents of DRAM were.”

At least in theory, this eliminates a common vector for server exploitation: triggering a fast soft reset, then using infiltrated startup code to read the not-yet-decayed contents of the previous session’s memory, without needing any special privileges.
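For readers curious whether a particular Linux host exposes these features, the short sketch below reads the CPU feature flags that the mainline Linux kernel publishes in /proc/cpuinfo. It is an illustrative assumption based on kernel conventions, not something drawn from AMD’s launch materials.

```python
# Minimal illustrative sketch, not from AMD's launch materials: check whether
# a Linux host's CPU advertises the memory-encryption features described
# above. The "sme", "sev", and "sev_es" flag names are those exposed by the
# mainline Linux kernel in /proc/cpuinfo and are assumed here.

def read_cpu_flags(path="/proc/cpuinfo"):
    """Return the set of feature flags reported for the first CPU."""
    with open(path) as f:
        for line in f:
            if line.startswith("flags"):
                # Flags are identical across cores, so the first line suffices.
                return set(line.split(":", 1)[1].split())
    return set()

if __name__ == "__main__":
    flags = read_cpu_flags()
    for feature in ("sme", "sev", "sev_es"):
        print(f"{feature}: {'reported' if feature in flags else 'not reported'}")
```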

AMD third-gen Epyc launch presentation, slide 1.

For Secure Encrypted Virtualization (SEV), Beck continued, a single key is used for the hypervisor and a separate key is delegated to each guest VM. Keys are managed by the AMD Secure Processor, he said, a distinct operating unit physically separate from the x86 CPU cores. That unit will continue to manage and maintain up to 509 simultaneous in-memory keys, as did second-generation AMD Epyc.
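On the hypervisor side, a host must also have SEV switched on in KVM’s AMD module before it can hand out per-VM keys. The sketch below is a minimal, assumption-laden check using the sysfs parameter paths of the mainline kvm_amd module; again, it is illustrative rather than anything AMD published.

```python
# Illustrative host-side check, assuming the sysfs parameter paths used by the
# mainline Linux kvm_amd module; these paths are an assumption, not part of
# AMD's launch materials. A hypervisor needs SEV enabled in kvm_amd before it
# can assign per-VM encryption keys as described above.

from pathlib import Path

PARAM_DIR = Path("/sys/module/kvm_amd/parameters")

def kvm_amd_param(name):
    """Return the value of a kvm_amd module parameter, or None if absent."""
    p = PARAM_DIR / name
    return p.read_text().strip() if p.exists() else None

if __name__ == "__main__":
    for param in ("sev", "sev_es"):
        value = kvm_amd_param(param)
        if value is None:
            print(f"{param}: not present (module not loaded or older kernel)")
        else:
            # "1" or "Y" means the feature is switched on for KVM guests.
            print(f"{param}: {value}")
```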

With SEV going live, cloud service providers that host their VMs on the Epyc 7003 series can offer what they’re calling “Confidential Computing,” and some began doing so Monday in public preview. These are instances whose contents are encrypted both in transit over the network and inside the host.

During Monday’s presentation, Microsoft Azure executive VP Jason Zander explained, “Confidential Computing builds on the strong encryption-at-rest and in-transit capabilities to keep your data encrypted all the way to the CPU. Customers can easily take advantage of this added protection for their most sensitive workloads without the need to rewrite or recompile them. With Confidential Computing, the runtime state of your VMs is fully encrypted by CPU-generated keys, so the contents of the virtual machine are opaque to administrators.”

Third-Gen AMD Epyc Performance – Tale of the Tape

AMD’s claims about the performance of its new line of server chips are based on benchmark scores, some of which have yet to be verified by SPEC.org, an organization that maintains a public registry of posted scores it has independently substantiated. According to performance figures SPEC did publish earlier this year, a Dell PowerEdge R7525 equipped with dual 64-core, dual-threaded, second-generation Epyc 7H12 processors clocked at 2.6 GHz scored 543 on SPEC’s most recent battery of floating-point tests, SPECrate2017_fp_base [PDF].

As of late February, AMD claims, a pair of 64-core third-generation Epyc 7763 CPUs – at the very top of the new line – running in a Lenovo ThinkSystem SR665 scored 636 on this same floating-point test battery.

Browsing through SPEC’s posted scores, DCK discovered that just last December an Intel Xeon-based Fujitsu Primergy RX4770 M6 posted a score of 609 on SPECrate2017_fp_base, using four 28-core Xeon Platinum 8376HL CPUs to reach that result [PDF].

That’s 128 AMD Epyc cores versus 112 Intel Xeon cores, suggesting the benchmark race could be tighter if small clusters of AMD’s top-of-the-line chips were pitted against larger clusters of lower-core-count, middle-of-the-road Intel parts.
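To make that core-count caveat concrete, here is a quick back-of-the-envelope comparison using the scores cited above; the per-core framing is ours, not SPEC’s or AMD’s.

```python
# Back-of-the-envelope arithmetic using the SPECrate2017_fp_base results cited
# above; the raw figures come from the article, only the per-core framing is
# added here for illustration.

systems = {
    "2x Epyc 7763 (Lenovo ThinkSystem SR665)":     {"score": 636, "cores": 128},
    "4x Xeon Platinum 8376HL (Fujitsu RX4770 M6)": {"score": 609, "cores": 112},
    "2x Epyc 7H12 (Dell PowerEdge R7525)":         {"score": 543, "cores": 128},
}

for name, s in systems.items():
    print(f"{name}: {s['score']} total, {s['score'] / s['cores']:.2f} per core")
```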

While the launch event was all about the top of the heap, in an earlier briefing with reporters AMD engineers touted the new series’ performance in the middle of the pack. In an earlier era of AMD dominance over Intel, a few years after the dawn of multicore, that middle of the market is where AMD grabbed a beachhead and wouldn’t let go.

AMD third-gen Epyc launch presentation, slide 2.

Integer arithmetic generally runs faster than floating point, and integer test scores often reveal how well CPUs will perform with AI and neural-network workloads, which frequently use integer math to boost speed. A server equipped with Intel’s 16-core Xeon Silver 4216 CPUs is comparable to one with second-generation AMD Epyc 7282 chips when scored on SPECrate2017_int_base, explained Ram Peddibhotla, AMD’s director of Epyc product management, who was previously with Qualcomm.

DCK located a score of 193 for a Tyrone Systems DS400TE1-224R with two dual-threaded 16-core CPUs at 2.1 GHz, which appears to correlate with this chart [PDF]. AMD now claims its 7352, which replaces the 7282, outperforms the Xeon 4216 by about 40 percent. AMD did not provide raw score numbers for the 7352, nor were we able to locate posted scores with SPEC at press time.

However, AMD does claim that a server based on the AMD Epyc 7532 scores 434 on the same integer test. SPEC shows it obtained this score from Lenovo, which used a ThinkSystem SR645 with two 32-core 7532s clocked at 2.4 GHz [PDF]. AMD cited a similarly equipped Intel Xeon Gold 6258R-based server posting a SPECrate2017_int_base score of 309 [PDF].
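For what it’s worth, the two scores AMD cited here also work out to roughly a 40 percent gap, as the quick calculation below shows; the figures are AMD’s citations, the arithmetic is ours.

```python
# Quick check of the integer-benchmark gap implied by the scores quoted above
# (434 for the Epyc 7532-based Lenovo SR645 versus 309 for the Xeon Gold
# 6258R-based server); figures are from the article, the arithmetic is ours.

epyc_7532_score = 434
xeon_6258r_score = 309

gap = (epyc_7532_score / xeon_6258r_score - 1) * 100
print(f"Epyc advantage on SPECrate2017_int_base: {gap:.1f}%")  # roughly 40%
```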

The middle-tier scores may be what counts in the end for third-generation AMD Epyc’s long-term market success. These are the SKUs that are chosen for inclusion in the general-performance servers used by hyperscale platforms and smaller cloud service providers. It’s models like the 7532 that you’re more likely to find sold in “trays” for service shops, rather than the 7763.

Back in the 2000s, when AMD was pitting Opteron against Xeon, it would intentionally mark down prices for its “most optimized” or “most balanced” processors, creating savings of hundreds of dollars per unit versus similarly equipped Intel models. Note the four curious orange stars on AMD’s chart above: although pricing data has yet to be released, those may be the SKUs most likely to be discounted, should AMD be planning a similar market strategy now.

AMD third-gen Epyc launch presentation, slide 3.

“No matter how you look at it,” said Peddibhotla, “third-gen Epyc delivers leadership performance across the stack, across all core-count boundaries.”

About the Author

Scott Fulton III

Contributor

Scott M. Fulton, III is a 39-year veteran technology journalist, author, analyst, and content strategist, the latter of which means he thought almost too carefully about the order in which those roles should appear. Decisions like these, he’ll tell you, should be data-driven. His work has appeared in The New Stack since 2014, and in various receptacles and bins since the 1980s.
