Experts Dispute VC’s Forecast that Caused Data Center Stocks to Slump

Little evidence behind prediction of data center providers’ demise brought about by progress in chip tech

Yevgeniy Sverdlik, Former Editor-in-Chief

September 15, 2017

Racks of servers powered by Tensor Processing Units (TPUs), Google's custom processors for machine learning (Image: Alphabet)

The stocks of all seven US data center REITs (there are now six, following a merger that closed Thursday) slid simultaneously this week, after a well-known venture capitalist and hedge-fund owner said at an investor conference that advances in processor technology will eventually lead to the demise of the data center provider industry.

But industry insiders say his views are overly simplistic, and that history has shown that advances in computing technology only create more hunger for data center capacity, not less.

Because server chips are getting smaller and more powerful than ever, companies in the future will not need anywhere near the amount of data center space they need today, said Chamath Palihapitiya, founder and CEO of the VC firm Social Capital, who last year also launched a hedge fund. He made the remarks Tuesday afternoon, according to Seeking Alpha, which cited Bloomberg as the source:

Word that Google may have developed its own chip that can run 50% of its computing on 10% of the silicon has him reasoning that "We can literally take a rack of servers that can basically replace seven or eight data centers and park it, drive it in an RV and park it beside a data center. Plug it into some air conditioning and power and it will take those data centers out of business."


Following the event, called Delivering Alpha and produced by CNBC and Institutional Investor, stocks of data center providers Digital Realty Trust, Equinix, QTS, CyrusOne, CoreSite, DuPont Fabros Technology, and Iron Mountain were down, some just over 2 percent and others over 3 percent.

Alphabet subsidiary Google did release a paper this past April that said its custom Tensor Processing Unit chips, developed in-house, allowed it to avoid building additional data centers specifically for executing neural networks (the dominant type of computing system for AI), but the company said nothing about the implications of TPUs for other types of workloads, which collectively far outstrip neural nets in terms of total computing capacity they require.

But Google also revealed in April that it has been using TPUs to run machine learning workloads in its data centers since 2015. Meanwhile, cloud companies as a group (which includes Google) are spending more on Intel chips. The arrival of the TPU has not slowed Google's investment in data centers; quite the opposite. Since the release of the paper, Google has announced new cloud data centers in Northern Virginia, Oregon, Singapore, Australia, England, and, just earlier this week (on the same day Palihapitiya made his remarks), in Germany. The company uses a mixed data center strategy, building some of its data centers on its own and leasing the rest from the types of companies whose stocks Palihapitiya's remarks sent sliding.

One of those companies is San Francisco-based Digital Realty, whose shares were down 3.6 percent at one point Wednesday. John Stewart, the company's senior VP of investor relations, said that nearly every phone call and meeting with institutional investors Wednesday and Thursday started with the investor asking what the VC had said.

“Andy [Power, the company’s CFO] and I are in New York, meeting with our largest institutional investors, and this topic has come up as basically the first question every single meeting,” Stewart said in a phone interview Thursday.

Worries about advances in computing technology driving down demand for data center space aren't new; it's a concern data center company executives have had to address periodically for many years. The computer chips powering data centers built in the last several years are denser (in terms of cores per square centimeter) and more powerful than ever. Yet over the same period, data center providers have seen a demand boom unprecedented in scale, as companies like Google, Microsoft, Amazon, Oracle, and Uber have ramped up investment in new data center capacity, some to support their quickly growing enterprise cloud businesses and some to support growth in the number of consumers using their apps.

Customers including IBM, Google, Apple, Microsoft, Oracle, and Amazon “are spending billions of dollars on incremental new data center CapEx, and they are doing that and signing leases with us for 10 to 15 years,” Power said. “They don’t think their data center’s going to go away.”

Bill Stoller, a financial writer and analyst and regular DCK contributor, said people who run data centers for these large companies are in the best position to know their companies' future demand for data center capacity. "They are entering into long-term contracts for facilities built with today's technology for cooling and electrical capacity," he said. "Why would they be entering into 10-plus-year leases if this technology were obsolete? They are on the cutting edge."

Technological progress has produced numerous massive leaps in computing efficiency, even outside of the semiconductor progress described by Moore's Law (a growth curve that is in fact flattening). The most recent of those leaps were server virtualization and cloud computing. Neither caused a drop in demand for data center space. The outcome of such leaps has been the opposite: more efficient computing has opened up possibilities for new applications that take advantage of the improvements, driving demand further.

Recent advances in AI, driven to a great extent by the lower cost of processors that can run neural networks, are creating more demand for computing capacity. Servers filled with specialized chips used specifically to train and/or execute neural networks, such as Google's TPUs or Nvidia's GPUs (the most widely used processors for training workloads), require more power per square foot in a data center than the CPUs that run most of the world's software. They are not replacing regular servers in data centers; they are being installed in addition to them.

“Those higher-density racks generate more heat; they require more cooling; and these are special applications for high-performance computing,” Stoller said. In the vast majority of cases, rack densities are much lower.

Rack density indicates the amount of computing power that can be housed in a single rack and has direct implications for the amount of real estate required to host software. The data center provider business isn’t just about selling space, however; it’s also about selling power, the ability to cool equipment (the higher the density, the more cooling capacity is required for a single rack), and access to networks.
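To make the relationship between rack density, floor space, power, and cooling concrete, here is a minimal back-of-envelope sketch in Python. Every figure in it (a hypothetical 1 MW IT load, roughly 5 kW for a conventional rack versus roughly 20 kW for a dense GPU/TPU rack, and about 25 square feet of floor area per rack) is an illustrative assumption, not a number from the article or from any provider.

```python
# Back-of-envelope estimate: how rack density affects the floor space and
# cooling a given IT load requires. All numbers are illustrative assumptions.

def space_and_cooling(it_load_kw: float, kw_per_rack: float, sqft_per_rack: float = 25.0):
    """Return (racks, floor_sqft, cooling_tons) needed for a given IT load.

    cooling_tons uses the common approximation that one ton of cooling
    removes about 3.517 kW of heat, and that essentially all IT power
    ends up as heat that must be removed.
    """
    racks = it_load_kw / kw_per_rack
    floor_sqft = racks * sqft_per_rack   # white space only; ignores aisles, UPS rooms, etc.
    cooling_tons = it_load_kw / 3.517    # heat rejected roughly equals power drawn
    return racks, floor_sqft, cooling_tons


if __name__ == "__main__":
    IT_LOAD_KW = 1_000  # a hypothetical 1 MW deployment

    for label, density in [("conventional CPU racks (~5 kW)", 5.0),
                           ("dense GPU/TPU racks (~20 kW)", 20.0)]:
        racks, sqft, tons = space_and_cooling(IT_LOAD_KW, density)
        print(f"{label}: {racks:.0f} racks, ~{sqft:,.0f} sq ft, ~{tons:.0f} tons of cooling")
```

Under these assumed figures, the denser configuration cuts the footprint by a factor of four, but the cooling requirement, which tracks power rather than area, does not change. That is the point the providers are making: the business is as much about selling power and cooling capacity as it is about selling square footage.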

Steven Rubis, VP of investor relations at DuPont Fabros Technology, the data center REIT that specializes in providing wholesale data center space to hyper-scale giants like Facebook, Microsoft, and others, said Palihapitiya’s statements were “an oversimplification. There’s probably more nuance to it; we get this argument from investors all the time.”
