Intel and Google Cloud Launch New Chip for Data Centers
Intel and Google have teamed up on a new chip designed to make data centers more secure and efficient.
October 13, 2022
Intel Corp and Alphabet Inc’s Google Cloud on Tuesday launched a co-designed chip that is meant to make data centers more secure and efficient.
The launch continues Google's push into custom silicon for its data centers, and analysts say such custom-designed chips are now an established trend among hyperscalers, one that is speeding up the adoption of bare metal.
The E2000 chip, code-named Mount Evans, takes over the work of packaging data for networking, a task usually handled by the expensive central processing units that do the primary computing, Reuters reports.
The E2000 also offers better security between different customers that may be sharing CPUs in the cloud, said Google's vice president of engineering, Amin Vahdat, in the report.
The deal demonstrates the insatiable need for more and more power, said Mark Granahan, co-founder and CEO of iDEAL Semiconductor.
"Of course, the form factor must stay fixed in order to be compatible with existing enterprise infrastructure," he said. "So power density increases which in turn generates more demand for higher efficiency power semiconductors."
Custom-designed data center chips for hyperscalers are the new normal, said P.S. Subramaniam, a partner in the operations and performance practice at global strategy and management consultancy Kearney. The move also accelerates the trend toward bare metal, he said.
"Modified chip designs is a continued migration of what has already been done in transceivers, network gear and other electronics," he shared. "If you are an OEM supplying to data centers, really think about scale, margins and competitive positioning if the data center company decides to design your component."
For example, specialized semiconductor chips can be designed to optimize for logic efficiency, for compute power, for heat dissipation, or for utility consumption, he said.
"In many ways, this could be seen as the advent of the age of hyperscaler chip design," he added.
Google Cloud is offering the E2000 in a product called C3 VM, which will be powered by Intel’s fourth-generation Xeon processors, and Intel will be able to sell the chip to other customers, said Nick McKeown, senior vice president and general manager of Intel’s Network and Edge Group, in a statement.
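For readers who want a sense of what consuming the new offering might look like, below is a minimal, hypothetical sketch of provisioning a C3-series VM with Google Cloud's Python client library. The machine type name (c3-standard-4), image family, and project/zone values are illustrative assumptions, not details from the announcement.

```python
# Hypothetical sketch: creating a C3-series VM (4th Gen Xeon + E2000 IPU)
# with the google-cloud-compute Python client. Machine type, image family,
# and project/zone values are assumptions for illustration only.
from google.cloud import compute_v1


def create_c3_instance(project_id: str, zone: str, instance_name: str) -> None:
    """Create a C3-series VM in the given project and zone."""
    client = compute_v1.InstancesClient()

    # Boot disk backed by a public Debian image family (assumed).
    boot_disk = compute_v1.AttachedDisk(
        boot=True,
        auto_delete=True,
        initialize_params=compute_v1.AttachedDiskInitializeParams(
            source_image="projects/debian-cloud/global/images/family/debian-11",
            disk_size_gb=10,
        ),
    )

    # Attach to the project's default VPC network.
    nic = compute_v1.NetworkInterface(network="global/networks/default")

    instance = compute_v1.Instance(
        name=instance_name,
        machine_type=f"zones/{zone}/machineTypes/c3-standard-4",  # assumed C3 shape
        disks=[boot_disk],
        network_interfaces=[nic],
    )

    # Insert the instance and wait for the long-running operation to finish.
    operation = client.insert(
        project=project_id, zone=zone, instance_resource=instance
    )
    operation.result()
    print(f"Created instance {instance_name} in {zone}")
```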
The chip collaboration with Intel is new, but Google already has a history of designing its own chips for specific use cases in its data centers.
“Application-specific chip designs have been the backbone of many unique industry needs for some time,” said John Waite, vice president of global supply chain at consulting firm Genpact.
Google has already designed the Video Coding Unit, or VCU, for compressing video content, the Tensor Processing Unit, or TPU, which is an ASIC for AI workloads, and the Titan chip for security.
“Whether in the financial sector, climate analysis, global supply chain optimization, risk mitigation, or the cadre of big issues facing C-suite leaders, customized, application-specific compute-based chips will be in high demand,” Waite told Data Center Knowledge.
Custom Arm Data Center Chips Already Available
Intel and Google aren’t the only companies manufacturing chips specifically meant to improve data center performance.
Several companies already make Arm-based chips for the server market, including Amazon, with its Graviton processors, and Huawei, whose Kunpeng CPUs power its cloud business.
Arm itself is actively promoting its architecture for the server market with last year's release of the Neoverse V1 and Neoverse N2 designs, which are meant to help the company's chipmaking partners compete against Intel's third-generation Xeon processors.
The strategy looks like it's paying off, as Arm's server market share has risen from 5.4% at the end of 2021 to 7.1% in the second quarter of this year, according to Omdia.
The co-designed chip might be a first for Intel and Google, but we’re likely to see more of the same in the future, according to Waite.
“With the insatiable compute demand, and advent of cloud-based scaling, we will begin to see even more collaboration of high-performance chip designs and the hyperscale providers in the future,” he said.