Intel: World Will Switch to “Scale” Data Centers by 2025

Computing infrastructure is changing, and Intel is fighting to remain at the forefront of the change

Yevgeniy Sverdlik, Former Editor-in-Chief

April 22, 2016

The Intel logo displayed outside the company's headquarters in Santa Clara, California, in 2014. (Photo: Justin Sullivan/Getty Images)


The set of factors companies have to consider when devising their data center strategy has changed drastically. They now have to take into account cloud services, a mobile workforce, and delivery of services at the network edge, on top of the traditional requirements to maintain uptime and anticipate growth in capacity needs. This April, we focus on what it means to own an enterprise data center strategy in this day and age.

One Intel forecast about the future of computing and data centers helps put the company’s restructuring announcement this week in perspective. Between 70 and 80 percent of systems going into data centers ten years from now will be what the processor giant calls “scale computing” systems.

“We see that the world is moving to scale computing in data centers,” Jason Waxman, an Intel VP and general manager of the company’s Cloud Platforms Group, said in an interview. “Our projection is between 70 and 80 percent of the compute, network, and storage will be going into what we call scale data centers by 2025.”

Scale data centers are facilities designed the same way web giants like Google, Microsoft, and Facebook design their facilities and IT systems today. Intel isn’t saying most data centers will be the size of Google or Facebook data centers, but it is saying that most of them will be designed using the same principles, to deliver computing at scale.

Read more: Google to Build and Lease Data Centers in Big Cloud Expansion

The three major forms of cloud computing (infrastructure, platform, and software delivered as subscription services), connected cars, personalized healthcare, and similar services all require large scale. “If you’re doing a connected-car type of solution, that’s not a small-scale type of deployment,” Waxman said. “If you’re doing healthcare and you’re trying to do personalized medicine, that’s a large-scale deployment.”

These solutions, which require an approach to infrastructure that’s different from what most companies are used to, are on the rise, and a substantial portion of the world’s IT capacity already sits in scale data centers. “Right now, about 40 percent is already there, so you’re talking about a continued move toward deploying technology at scale,” Waxman said.


Intel Restructures to Focus on Cloud, IoT

This shift will affect virtually every industry, and it is a big opportunity for Intel, which sees the data center market as its best bet going forward as it faces slowly but steadily dwindling revenue from PC parts and a weak position in the mobile chip market.

The company’s execs were upfront about this on this week’s first-quarter earnings call with analysts, when they announced the restructuring plan.

Intel is shifting from being primarily a PC company to a company that powers the cloud and connected devices, the so-called Internet of Things, CEO Brian Krzanich said on the call. Already, “40 percent of our revenue and 60 percent of our margin comes from areas other than PC,” so it’s time to push the company all the way toward pursuing a non-PC-focused strategy, he said.

The restructuring plan, which includes letting go of 12,000 people, or about 11 percent of Intel’s workforce, is meant to free up resources to invest in data center, IoT, and memory segments, the three fastest-growing businesses within Intel, Krzanich said. The company expects the move to free up $750 million this year and to reach annual savings of $1.4 billion by mid-2017.

FPGAs Expected to Accelerate More than Servers

The strength of Intel’s position in the cloud infrastructure market is undeniable. Its chips power virtually every cloud server in the world, and it has collaborated with the big scale data center operators on chip and server design for years, customizing products for their requirements, so it has a lot of unique insight into the needs of scale infrastructure.

One of the key technologies Intel expects to accelerate growth in its data center business is the Field Programmable Gate Array (FPGA), which became a focus for the company during its collaboration with big cloud providers, namely Microsoft, which about two years ago started looking into combining CPUs with FPGAs. FPGAs are programmable chips that enable companies to reconfigure servers quickly, optimizing them for different types of workloads and offloading some processing work from CPUs, a method known as workload acceleration that is used widely in supercomputer design.
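To make the offload idea concrete, here is a minimal Python sketch of the pattern: a workload is dispatched to an accelerator when one is available and falls back to the CPU otherwise. The FpgaAccelerator class and its methods are hypothetical placeholders for illustration, not an Intel or Altera API.

```python
"""Minimal sketch of the CPU/FPGA offload pattern ("workload acceleration").

FpgaAccelerator is a hypothetical placeholder, not an Intel or Altera API: it
stands in for a card programmed with a bitstream optimized for one workload
type (here, compression).
"""
import zlib
from typing import Optional


class FpgaAccelerator:
    """Stand-in for a reconfigurable accelerator card."""

    def __init__(self, bitstream: str):
        # In a real system, loading a bitstream reconfigures the device
        # for a specific workload; here we just record its name.
        self.bitstream = bitstream

    def compress(self, data: bytes) -> bytes:
        # Placeholder: a real FPGA would run the algorithm in hardware.
        return zlib.compress(data, 9)


def compress_block(data: bytes, accelerator: Optional[FpgaAccelerator]) -> bytes:
    """Offload to the accelerator when present; otherwise run on the CPU."""
    if accelerator is not None:
        return accelerator.compress(data)  # offloaded (accelerated) path
    return zlib.compress(data, 9)          # CPU fallback path


if __name__ == "__main__":
    payload = b"scale data center " * 1000
    fpga = FpgaAccelerator(bitstream="compression_v1")
    print(len(compress_block(payload, fpga)))  # accelerated path
    print(len(compress_block(payload, None)))  # CPU-only path
```

The appeal for scale operators is that the same card can be reprogrammed for a different workload later, rather than being replaced.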

In 2014, Diane Bryant, general manager of Intel’s Data Center Group, talked about a hybrid chip that would combine the Xeon E5, Intel’s flagship server CPU, with an FPGA. The company doubled down on FPGA investment last year with a $16.7 billion acquisition of FPGA specialist Altera. Just recently, it started shipping the first samples of single-socket Xeon/FPGA packages to customers, Krzanich said this week.

Intel developed the product together with multiple large cloud providers, and “Microsoft was a definitional customer for them,” according to Waxman.

Beyond the Server Chassis

From a technology standpoint, Intel is looking at what the fundamental building blocks of scale data center infrastructure will look like, Waxman said. The days of that fundamental building block being an entire server, complete with motherboard, memory, storage, cooling, power supply, network cards, and so on, are coming to an end.

Scale computing operators think in terms of entire racks, where resources are optimized for specific applications, and where individual components, not entire servers, can be swapped out when needed. “We think the need to design rack-level solutions that allow you to create pools of compute, network, and storage that can be provisioned by a cloud is more important than it has ever been,” Waxman said.
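As a rough illustration of what those pooled resources could look like to provisioning software, here is a hypothetical Python sketch that carves logical nodes out of a rack’s shared compute, memory, and storage. The data structures are invented for illustration and are not drawn from Intel’s rack-scale products.

```python
"""Hypothetical sketch of provisioning from rack-level resource pools.

The Rack class and its provision() method are invented for illustration and
are not Intel's rack-scale API. The idea: instead of allocating whole servers,
an orchestrator carves logical nodes out of pooled compute, memory, and
storage within a rack, and returns components to the pool when done.
"""
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class Rack:
    cpu_cores: int                       # pooled compute
    memory_gb: int                       # pooled memory
    storage_tb: int                      # pooled storage
    allocations: List[Dict] = field(default_factory=list)

    def provision(self, name: str, cores: int, mem_gb: int, store_tb: int) -> Dict:
        """Carve a logical node out of the rack's remaining resources."""
        if cores > self.cpu_cores or mem_gb > self.memory_gb or store_tb > self.storage_tb:
            raise RuntimeError(f"rack cannot satisfy request for {name}")
        self.cpu_cores -= cores
        self.memory_gb -= mem_gb
        self.storage_tb -= store_tb
        node = {"name": name, "cores": cores, "memory_gb": mem_gb, "storage_tb": store_tb}
        self.allocations.append(node)
        return node


if __name__ == "__main__":
    rack = Rack(cpu_cores=1024, memory_gb=8192, storage_tb=400)
    print(rack.provision("web-tier", cores=64, mem_gb=256, store_tb=2))
    print(rack.provision("analytics", cores=256, mem_gb=2048, store_tb=100))
    print("cores left in pool:", rack.cpu_cores)
```

The point of the sketch is the unit of allocation: applications get slices of rack-wide pools, and failed or outdated components can be swapped without retiring a whole server.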

Much of Intel’s work on rack-scale architecture has been done in collaboration with Facebook and the Open Compute Project, the open source hardware and data center design initiative Facebook founded in 2011. Waxman has been on the OCP board since its inception, and the vision Intel had when it joined the project back then has largely come true, he said. That vision was that the world would eventually shift to scale computing, and Facebook and other OCP members were at the forefront of that shift.

Toward a 100 Percent Scale IT World

So how will this shift happen exactly, and what will it look like for smaller enterprise IT shops?

It doesn’t mean every IT team will start deploying web-scale infrastructure in their data centers. What they do will depend on the nature of their business. Companies whose infrastructure is at the core of the business and who need full control of it will go the scale data center route, no matter how small. These are companies like engineering services firms, Software-as-a-Service providers, and companies in healthcare or other compliance-sensitive verticals, Waxman said. And companies whose business doesn’t revolve around IT infrastructure will eventually replace on-premises IT with various cloud services, he said. Either way, most of the world’s software will end up running in scale data centers.
