Power Panel: The Era of Space-Based Data Center Capacity Planning is Over

There’s a sensible-sounding argument that dynamic power demands on data centers’ compute racks require dynamic cooling systems. But cooling has been, and perhaps always will be, a function of space.

Scott Fulton III, Contributor

June 17, 2021


It’s been suggested, in this publication and recently elsewhere as well, that data center cooling systems in their present configuration are not actually capable of adapting to the changing demands of more modern, workload-driven computing environments. The danger in implementing a workload-driven approach for a facility with an older cooling system, the suggestion continues, is that workloads may end up being distributed over greater floor spaces. That could eventually lead to stranded capacity, as the inability to scale up drives the need to compensate instead by scaling out.

The solution, one vendor proposed, is an adaptive cooling system that increases cooling capacity as growing workloads place greater demands on computing systems. It would depend in large measure on strategies for removing heat from air through more natural means, integrated into existing facilities. But is such a solution real, and is it realistic?

Data Center Knowledge put the question to four world-class experts. Their answers appear below, verbatim but edited for clarity.

Chris Brown, Chief Technical Officer, Uptime Institute


It is true that any data center, even if it supports HPC, is designed with an average watts-per-area (square foot or square meter, depending upon region).  This does two things: First, it defines the total cooling load expected at full buildout.  There has to be a ceiling defined, otherwise how do you know how much cooling to put in? Additionally, it defines the cooling strategy.
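The space-based sizing Brown describes boils down to a single multiplication. A minimal sketch follows, with a hypothetical design density and floor area; neither figure comes from any facility mentioned in the article.

    # Space-based capacity planning: an assumed average design density times the
    # white-space area gives the cooling ceiling at full buildout.
    # Both figures below are hypothetical.
    DESIGN_DENSITY_W_PER_SQFT = 150      # assumed average watts per square foot
    WHITE_SPACE_SQFT = 20_000            # assumed raised-floor area at full buildout

    total_cooling_load_kw = DESIGN_DENSITY_W_PER_SQFT * WHITE_SPACE_SQFT / 1_000
    print(f"Cooling ceiling at full buildout: {total_cooling_load_kw:,.0f} kW")
    # -> Cooling ceiling at full buildout: 3,000 kW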


As density increases, there is a break point where air-based cooling becomes not impossible but impractical.  You can continue to put in more fans, but at some point, it will move so much air and be so noisy it will be akin to being on the tarmac of an airport with jets passing by.  Not terribly practical.  If the airflow is enough, pressure becomes an issue just to open and close doors.  So depending on the density, the design plans will [entail] different solutions.  Some data centers have ducted the exhaust air out of racks straight to the cooling units on 20kW and 30kW racks to address some issues and increase cooling efficiency.  Others go to water-based cooling, and use rear door heat exchangers, liquid cooling of equipment, and even immersion cooling at really high densities. 

Now, from a practical operational standpoint, I have never run across a data center that designed to 200 Watts-per-square-foot, and then installed to that.  In other words, they will have racks with higher densities and racks with lower densities, as long as they do not cross the density threshold where a different cooling solution is needed.  They are always constrained by the installed cooling and power capacity.  Then if they start to move above that, they will install additional power and cooling infrastructure to increase the available power and cooling capacity.  At some point, real estate does become the issue. Adding fans and cooling capacity means more space, and if there is no more space, there can be no expansion in capacity.


In summary, data center design has been, and will continue to be, a balance between space, power, and cooling. Different approaches to the power and cooling infrastructure are being employed to increase capacity in a smaller footprint. But any additions will always require space — it is just a question of it being horizontal or vertical space. We agree that any data center design should plan for future capacity increases to support demand as density increases (we are not making any more space on Earth, and thus we will need to increase density), but there is no way to ever decouple the three elements of space, power, and cooling, as they will always be tied together. The design choices, however, can maximize capacity in any given footprint.

Steve Madara, Vice President for Thermal, Data Centers, Vertiv


Bottom line, rack densities are increasing. For example, a data hall designed for, say, 6 MW at a given rack density (which determines the number of racks in the space) drives the total data hall square-foot area. If rack density goes up, you need less square-foot area. The challenge today is, if you build with too high a rack density, you may run out of rack space before you reach the design capacity. But also, if you build for the right rack density today and densities later climb, you may not end up using the full square-foot area of the data hall. Whether you are stranding cooling capacity or not depends on how the room was laid out. As density goes up, you have unused floor space.
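To illustrate the trade-off Madara outlines, here is a small sketch of how the same 6 MW hall needs fewer racks, and less floor space, as rack density rises. The per-rack footprint is an assumption for illustration only.

    # For a fixed hall capacity, higher rack density means fewer racks and less
    # floor area -- or, in an existing hall, unused floor space. The 30 sq ft
    # per rack (cabinet plus aisle share) is a hypothetical figure.
    HALL_CAPACITY_KW = 6_000
    SQFT_PER_RACK = 30

    for rack_density_kw in (8, 15, 30):
        racks = HALL_CAPACITY_KW / rack_density_kw
        floor_sqft = racks * SQFT_PER_RACK
        print(f"{rack_density_kw:>2} kW/rack: {racks:5.0f} racks, ~{floor_sqft:7,.0f} sq ft")

    # Output (approximately):
    #  8 kW/rack:   750 racks, ~ 22,500 sq ft
    # 15 kW/rack:   400 racks, ~ 12,000 sq ft
    # 30 kW/rack:   200 racks, ~  6,000 sq ft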

To provision for higher density racks, generally you need a cooling unit that has more kW of cooling capacity per linear foot of wall space. Are there solutions today with the higher kW per linear wall space? Yes. In non-raised floor applications, we are seeing Thermal Wall/Thermal Array designs that have more coil surface per linear/wall space because the unit is going taller. For raised floor applications, the increased capacity tends to be larger units that are deeper in a mechanical gallery. Is modular add-on a solution? Not necessarily. It will really depend on whether the electrical and additional mechanical systems can support this. If you build all this in on day one, then you are under-utilizing the infrastructure until you grow into it.
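The "kW per linear foot of wall" constraint can be pictured the same way. A rough sketch, with both the hall load and the available wall length assumed for illustration:

    # If the hall load must be rejected by cooling units lined along a limited
    # run of wall, the required cooling density per foot of wall falls out of a
    # simple division. Both inputs are hypothetical.
    HALL_LOAD_KW = 6_000        # heat load to reject
    COOLING_WALL_FT = 300       # assumed wall length available for cooling units

    required_kw_per_ft = HALL_LOAD_KW / COOLING_WALL_FT
    print(f"Required cooling density: {required_kw_per_ft:.0f} kW per linear foot of wall")
    # -> Required cooling density: 20 kW per linear foot of wall
    # Taller thermal-wall units raise the kW a given foot of wall can deliver
    # without widening the mechanical footprint.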

Much of this above assumes we are continuing with air-cooled servers. The additional cooling capacity can be supplemented with rear door cooling with minimal power-add. However, the world today is starting to see the advent of a lot of liquid cooled servers — fluid-to-the-chip. An existing data hall can easily add the capacity to provide fluid to the rack for the additional cooling load, and the remaining air-cooled cooling capacity will meet the remaining air-cooled load. The challenge now is that you may run out of power for the additional load in the data hall.

Changing metrics do not change the methodology of provisioning cooling. It’s knowing the future roadmap for air-cooled server capacity and future liquid-cooled cooling capacity requirements, and planning for the transition.  There are many customers today that are building that flexibility and those solutions, for when the transition occurs. No one can predict when, but the key is a plan for the future density.

Steven Carlini, Vice President, Innovation and Data Center, Schneider Electric


Most designs today are based on rack density. The historical method of specifying data center density in watts per square foot provides very little useful guidance for answering the critical questions data center operators face today. In particular, the historical power density specification does not answer the key question: ‘What happens when a rack is deployed that exceeds the density specification?’ Specifying capacity based on rack density helps assure compatibility with high-density IT equipment, avoid waste of electricity, space, or capital expense, and provide a means to validate IT deployment plans against the design cooling and power capability.
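A minimal sketch of the validation step Carlini alludes to, checking a deployment plan against a per-rack design density rather than a hall-wide watts-per-square-foot average; the design figure and rack loads are invented for illustration.

    # Validate a planned deployment against a per-rack density specification.
    # Any rack over the design figure is flagged before it reaches the floor.
    DESIGN_RACK_DENSITY_KW = 15                  # assumed per-rack design density
    planned_rack_loads_kw = [6, 12, 15, 22, 9]   # hypothetical deployment plan

    for position, load_kw in enumerate(planned_rack_loads_kw, start=1):
        verdict = "OK" if load_kw <= DESIGN_RACK_DENSITY_KW else "exceeds design density"
        print(f"Rack {position}: {load_kw:>2} kW -> {verdict}")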

Moises Levy, PhD, Principal Analyst, Data Center Power and Cooling, Cloud & Data Center Research Practice, Omdia

Energy consumption in data centers is all about workloads! Workload is the amount of work assigned to a piece of IT equipment over a period of time, including IT applications such as data analytics, collaboration, and productivity software. In addition, workloads not producing business value contribute to waste and to inefficiencies in energy consumption.

Let’s understand how workloads impact data center energy consumption and cooling capacity. Workloads can be measured in different ways, such as jobs per second or FLOPS (floating point operations per second). Next, we need to measure or estimate server utilization, which represents the portion of the capacity used to process workloads; this is the ratio of the workload processed to the processing rate. We also need to measure the server power requirement, or estimate it from its utilization (0% to 100%) on a scale between idle and maximum power. The heat generated needs to be extracted by the cooling system. The cooling capacity can be estimated by dividing the server power requirement by the SCOP (Sensible Coefficient of Performance) of the cooling system. Workload management is not a simple strategy, and we must plan for a successful outcome!
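The chain Levy describes, from utilization to server power to cooling, can be sketched in a few lines. The idle and maximum power figures and the SCOP value below are assumptions, and the linear power model is a common simplification rather than a measured curve.

    # Estimate server power from utilization (interpolating between idle and
    # maximum draw), then divide by the SCOP to gauge the power the cooling
    # system needs to extract that heat. All figures are hypothetical.
    IDLE_POWER_W = 120
    MAX_POWER_W = 450
    SCOP = 3.5    # sensible coefficient of performance of the cooling system

    def server_power_w(utilization: float) -> float:
        """Server draw at a given utilization, 0.0 (idle) to 1.0 (maximum)."""
        return IDLE_POWER_W + utilization * (MAX_POWER_W - IDLE_POWER_W)

    for u in (0.2, 0.6, 0.9):
        power = server_power_w(u)
        cooling = power / SCOP
        print(f"utilization {u:.0%}: server {power:.0f} W, cooling input ~{cooling:.0f} W")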


We have been cooling servers via convection, but air cooling is reaching a limit at higher power densities. Using an air cooling system for power densities higher than 10 or 20 kW per cabinet is already inefficient, and the limit is about 40 kW per cabinet. Rack densities have been increasing in recent years, and we can now reach 50 kW, 100 kW, or higher per cabinet. A liquid-cooled approach is a way to extract heat more efficiently and sustainably.
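Levy's thresholds lend themselves to a toy decision helper; the cutoffs below simply restate the figures he cites and are not an engineering guideline.

    # Pick a cooling approach from rack density, using the rough break points
    # cited above: air cooling grows inefficient past 10-20 kW per cabinet and
    # tops out around 40 kW, beyond which liquid cooling takes over.
    def suggested_cooling(rack_kw: float) -> str:
        if rack_kw <= 20:
            return "conventional air cooling"
        if rack_kw <= 40:
            return "air cooling possible, but increasingly inefficient"
        return "liquid cooling (rear-door, direct-to-chip, or immersion)"

    for density_kw in (8, 25, 50, 100):
        print(f"{density_kw:>3} kW/cabinet -> {suggested_cooling(density_kw)}")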

In summary, at a data center there is high coupling between servers and their physical environment, which generally means that processing more workload implies higher server utilization, and more energy consumption. This translates into an increased requirement to dissipate the generated heat.

Cover photo, featuring the OpenLab data center at CERN in Switzerland, circa 2010, photographed by Hugo van Meijeren, licensed under GNU v. 1.2.

About the Author

Scott Fulton III

Contributor

Scott M. Fulton, III is a 39-year veteran technology journalist, author, analyst, and content strategist, the latter of which means he thought almost too carefully about the order in which those roles should appear. Decisions like these, he’ll tell you, should be data-driven. His work has appeared in The New Stack since 2014, and in various receptacles and bins since the 1980s.
