Google Brings Liquid Cooling to Data Centers to Cool Latest AI Chips

TPU 3.0 is 8x more powerful than last year's chip

Yevgeniy Sverdlik, Former Editor-in-Chief

May 8, 2018

Google CEO Sundar Pichai speaking at Google I/O 2018 (Image: Google I/O live stream)

Alphabet’s Google has introduced liquid cooling in its data centers for the first time, using it to cool the latest processors that underpin AI features in everything from recent Gmail updates to upcoming capabilities in Google Photos.

Google CEO Sundar Pichai announced the next-generation TPU 3.0 chip in his keynote at the company’s annual I/O conference in Mountain View, California, Tuesday.

TPUs, or Tensor Processing Units, “are driving all the product improvements you’re seeing today,” Pichai said. “These chips are so powerful that for the first time we’ve had to introduce liquid cooling in our data centers.”

The new chips are installed in “giant pods,” he said. “Each of these pods is now 8x more powerful than last year – it’s well over 100 petaflops, and this is what allows us to develop better [machine learning] models, larger models, more accurate models, and helps us tackle even bigger problems.”


Google CEO Sundar Pichai shows a photo of a liquid-cooled TPU 3.0 pod inside a Google data center at I/O 2018

Pichai announced first-generation custom TPUs at I/O in 2016. The company has since started offering access to TPUs as a cloud service for external customers, in addition to GPUs, which are commonly used to train neural networks for AI applications.
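For context on what that cloud access looks like from a developer's seat, here is a minimal sketch of attaching TensorFlow to a Cloud TPU. It uses today's TensorFlow 2.x distribution API rather than the TPUEstimator interface Google offered in 2018, and the small Keras model is a hypothetical placeholder:

```python
import tensorflow as tf

# Locate the Cloud TPU attached to this runtime. With no argument the
# resolver reads the TPU address from the environment; on Compute Engine
# you would pass tpu="your-tpu-name" (hypothetical name).
resolver = tf.distribute.cluster_resolver.TPUClusterResolver()
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)

# TPUStrategy replicates computation across all TPU cores in the slice.
strategy = tf.distribute.TPUStrategy(resolver)

with strategy.scope():
    # Hypothetical toy model; anything built in this scope is compiled
    # for and sharded across the TPU cores.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(10),
    ])
    model.compile(
        optimizer="adam",
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    )
```

Everything defined inside strategy.scope() runs replicated on the TPU cores; the training code itself is unchanged from the GPU case.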

Google’s chief executive didn’t reveal much detail about the latest-generation TPUs or how the cooling systems for them are designed, but judging by the photo of TPU 3.0 he displayed during the keynote, the system brings chilled liquid directly to the chip via thin tubes.

Neural networks are trained using highly dense clusters of GPUs or, in Google’s case, TPUs. These processors are extremely powerful and draw a correspondingly large amount of electricity. Because the clusters are so power-dense, they require cooling approaches similar to those used in the supercomputer industry, such as bringing liquid directly to the chip, instead of the more traditional approach of pushing cold air through the servers.
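To see why liquid wins at these densities, here is a rough back-of-the-envelope sketch in Python. The fluid properties are standard textbook values at room temperature, and the 10 K coolant temperature rise is an assumption for illustration, not a figure Google has published:

```python
# Back-of-the-envelope: heat carried per unit volume of coolant,
# q = density * specific_heat * delta_T, for an assumed 10 K rise.

delta_t = 10.0  # K, assumed coolant temperature rise across the cold plate

# Air at ~25 C and 1 atm (textbook values)
rho_air, cp_air = 1.2, 1005.0        # kg/m^3, J/(kg*K)
# Water at ~25 C (textbook values)
rho_water, cp_water = 997.0, 4186.0  # kg/m^3, J/(kg*K)

q_air = rho_air * cp_air * delta_t        # J per m^3 of air moved
q_water = rho_water * cp_water * delta_t  # J per m^3 of water pumped

print(f"Air:   {q_air / 1e3:.1f} kJ per cubic meter")   # ~12 kJ/m^3
print(f"Water: {q_water / 1e3:.0f} kJ per cubic meter") # ~41,700 kJ/m^3
print(f"Ratio: ~{q_water / q_air:,.0f}x")               # roughly 3,500x
```

For the same temperature rise, a given volume of water carries away on the order of 3,500 times as much heat as the same volume of air, which is why direct-to-chip liquid loops can handle rack densities that air cooling cannot.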


Row of liquid-cooled TPU 3.0 pods inside a Google data center
