Google: No 'Secret Sauce' in Recipe for Efficiency

Google's PUE ratings keep getting better, with one of its data centers hitting 1.09 in the first quarter. Google's Chris Malone says PUEs of 1.5 to 1.6 can now be achieved by data center operators using off-the-shelf equipment. "There's no magic or secret sauce to get to these levels," said Malone.

Rich Miller

May 11, 2011


A chart showing ongoing improvement in Power Usage Effectiveness in data centers at Google.

Chris Malone didn't come to the Uptime Symposium to reveal Google's stealthy strategies to make its data centers super-efficient. Instead, Malone's message was that no state secrets are required to make your facilities much more efficient - although perhaps not quite as efficient as Google's highly-customized infrastructure.

How efficient are Google's facilities? In the first quarter of 2011, Google's data center fleet had a collective Power Usage Effectiveness (PUE) of 1.16, with one data center recording a PUE of 1.09. Those are the best showings in the four years in which Google has tracked its data centers using PUE, a metric that compares a facility's total power usage to the amount of power used by the IT equipment, revealing how much is lost in distribution and conversion.
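As a quick illustration of the arithmetic behind the metric, here is a minimal sketch of the PUE calculation; the kilowatt figures are hypothetical and chosen only to show how a 1.16 ratio arises, not taken from Google's reports.

```python
# Minimal sketch of the PUE calculation. The kilowatt figures are
# hypothetical and chosen only to illustrate how a 1.16 ratio arises.
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power divided by IT power."""
    return total_facility_kw / it_equipment_kw

# A facility drawing 1,160 kW in total to run 1,000 kW of IT equipment:
print(pue(1160, 1000))  # 1.16 -- the extra 160 kW goes to cooling and power-conversion losses
```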

Best Practices Are A Crucial Tool

Malone said that broader sharing of industry best practices and improvements in vendor technology have made it easier for companies to be much more efficient.

"A PUE of less than 1.6 should be achievable with off-the-shelf technology," Malone said Tuesday in one of the afternoon keynotes at The Uptime Symposium 2011 in Santa Clara, Calif. "There's no magic or secret sauce to get to these levels. It depends on your motivation."

The low PUE numbers reported by Google, Facebook, Yahoo and Microsoft have prompted mixed reactions in the data center industry. Some critics have noted that these companies build data centers that are highly customized, making their efficiency strategies less relevant for smaller companies.

In a series of presentations over the past several years, Malone and other Google engineers have argued that much of their efficiency comes from implementing industry best practices. Not every data center can achieve a PUE of 1.1, Malone said, but a PUE of 1.5 is within reach for most data center operators.

Cooling, Power Distribution Are Top Targets

A focused strategy to improve PUE should target cooling and power distribution, two areas that offer significant potential efficiency gains. On the cooling front, Malone recommended an analysis of data center airflow, taking steps to address airflow oversupply, using economizers to reduce dependence on power-hungry chillers, and raising the temperature in the data center. On the power distribution front, Malone said data center operators should minimize power conversion steps and install high-efficiency UPS units.

Adopting new hardware can yield substantial gains. "In the last few years, a lot of good close-coupled cooling products have emerged," said Malone. "There are many UPS systems now that provide efficiencies above 95 percent."
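As a rough sketch of why fewer conversion steps and higher-efficiency UPS units matter: per-step losses multiply along the power chain. The only figure below taken from Malone's remarks is the 95 percent UPS efficiency; the other step efficiencies and chain layouts are assumptions for illustration.

```python
# Rough sketch of how per-step losses compound along the power-distribution
# chain. Only the 95% UPS efficiency comes from the article; the remaining
# step efficiencies and the chain layouts are illustrative assumptions.
def delivered_fraction(step_efficiencies):
    """Fraction of utility power that survives every conversion step."""
    fraction = 1.0
    for efficiency in step_efficiencies:
        fraction *= efficiency
    return fraction

legacy_chain = [0.90, 0.97, 0.95]   # older double-conversion UPS, transformer, PDU (assumed)
modern_chain = [0.95, 0.98]         # high-efficiency UPS, one fewer conversion step (assumed)

print(f"legacy chain delivers {delivered_fraction(legacy_chain):.1%} of input power")
print(f"modern chain delivers {delivered_fraction(modern_chain):.1%} of input power")
```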

Malone said that using the best commercial hardware and implementing industry best practices could result in a PUE between 1.5 and 1.6.

That conclusion was backed up by two data center executives who have achieved low PUEs by retrofitting older facilities with new equipment, guided by best practices. Steven Press, the Executive Director of Data Center Services at Kaiser Permanente, said his team achieved large efficiency gains in projects at five legacy data centers. Juan Murguia, an IT manager at CEMEX in Mexico, said similar strategies helped his company reach PUEs of 1.5 to 1.6.

Google Optimizes its Network Gear

To demonstrate the process, Malone reviewed a case study of a project to improve the efficiency of networking centers operated by Google.

The five networking centers all had PUEs of 2.2 to 2.4. With an investment of $20,000, Google was able to slash those PUEs to 1.5 to 1.6. The project included an airflow analysis, optimizing the layout of floor tiles to direct cooling, eliminating the mixing of hot and cold air with vinyl curtains, reducing the cooling output to eliminate oversupply, and raising the temperature to 27 degrees C (81 degrees F).
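To put that reduction in perspective, here is a back-of-the-envelope calculation. The before-and-after PUEs are the midpoints of the ranges in the case study; the 200 kW IT load is a hypothetical figure chosen only for illustration, since the case study does not state one.

```python
# Back-of-the-envelope estimate of what dropping PUE from ~2.3 to ~1.55
# (midpoints of the case-study ranges) saves. The 200 kW IT load is a
# hypothetical figure for illustration; the case study does not state one.
IT_LOAD_KW = 200
PUE_BEFORE = 2.3
PUE_AFTER = 1.55

overhead_before_kw = IT_LOAD_KW * (PUE_BEFORE - 1)  # facility power not reaching IT gear
overhead_after_kw = IT_LOAD_KW * (PUE_AFTER - 1)
saved_kw = overhead_before_kw - overhead_after_kw

print(f"overhead before: {overhead_before_kw:.0f} kW, after: {overhead_after_kw:.0f} kW")
print(f"continuous savings: {saved_kw:.0f} kW (~{saved_kw * 8760 / 1000:.0f} MWh per year)")
```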

"These kind of best practices have been proven to work for us at all scales," said Malone. "These are techniques that you can deploy. You don't need an in-house research and development team to do this."
