Mashup: Monitoring Data Center Power and Cooling Simultaneously
Implement a centralized system that can make sense of all the information and, ideally, offer recommendations or even take steps to make improvements.
March 24, 2017
Coy Stine is Vice President, Data Center Division for Fairbanks Energy Services.
Data center facilities of all types monitor power and cooling data so they can respond when things go wrong, but they rarely analyze the combined data to find operational efficiencies. While a data center’s function is to protect the servers and information it houses, its success increasingly depends on the operational data it produces and on how that data is used to control conditions and respond to issues. Partial cooling or power metrics cannot complete the “big picture” of facility health that managers and operators need to avoid downtime, maintain optimal conditions, and automatically adjust operations for greater efficiency.
In my experience, monitoring power and cooling data together offers far more value than monitoring either alone. Working in the field of data center efficiency, my team is consistently surprised to find that far too many data centers still run four or five monitoring or automation systems concurrently, each independently handling a different critical system (generators, HVAC, power distribution, server monitoring, temperature monitoring, and so on). Unsurprisingly, given this uncoordinated setup, operators tend to ignore the relationship between the power and cooling information these systems produce. In turn, they lose the opportunity to understand how their many integrated systems, DCIM tools, and pieces of equipment are running in relation to one another.
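To make the point concrete, here is a minimal sketch, in Python with entirely hypothetical numbers and field names, of the kind of cross-system metric that only combined monitoring can produce: a facility-wide PUE and a cooling-overhead ratio, each requiring one reading from the electrical side and one from the cooling side.

```python
# Hypothetical readings; in practice these would come from separate
# monitoring systems (PDUs, CRAC controllers, building meters).
it_load_kw = 480.0        # from power distribution monitoring
cooling_kw = 310.0        # from HVAC/CRAC monitoring
other_facility_kw = 45.0  # lighting, UPS losses, etc.

total_facility_kw = it_load_kw + cooling_kw + other_facility_kw

# Power Usage Effectiveness: total facility power divided by IT power.
pue = total_facility_kw / it_load_kw
print(f"PUE = {pue:.2f}")  # 1.74 with these made-up numbers

# Cooling overhead per kW of IT load, a ratio that stays invisible
# when power and cooling data live in separate tools.
print(f"Cooling kW per IT kW = {cooling_kw / it_load_kw:.2f}")  # 0.65
```

Neither number can be computed from a single siloed system, which is precisely what a fragmented monitoring setup gives up.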
One common reason facilities haven’t adopted single-system solutions for tracking their energy data is a preference for the reliability and familiarity of individual critical systems over a holistic improvement. The specific concerns are usually these:
Data center owners don’t necessarily know what they need to monitor to reach the higher level of integrated information and analysis that can drive effective efficiency changes.
Facility owners who do know what data would be useful are often concerned that installing the monitoring devices will create unacceptable downtime risk, either by requiring an extensive maintenance window to power down critical equipment or by inadvertently causing an unplanned shutdown during installation.
Because this level of data gathering and analysis goes beyond common industry practice, a review of why the data matters, along with a detailed explanation of how the process can work, should help educate data center stakeholders. It will also illustrate how this kind of control system can dramatically improve facility efficiency and lower operational costs.
Case Study
Recently, the efficiency firm I work for completed a project for a nationwide colocation provider. The company offers retail multi-tenant colocation services, and its goal is to provide cost-effective yet reliable space, power, and cooling for tenants; bottom-line operational savings come from lower power and cooling costs. We were brought in to formulate and implement efficiency measures that would lower energy costs, increase space profitability, and reclaim capacity on the HVAC and power systems. To make these efforts successful, we established methods for monitoring power and cooling data together within a single system.
The 100,000-square-foot facility had both slab and raised-floor configurations. During our initial evaluation, all CRAC units were running, and server inlet temperatures were colder than necessary in some areas but much warmer than ideal in others. This strategy of precautionary cooling uses too much electricity and rarely distributes air effectively.
At our recommendation, the company first installed monitoring technology on all cooling and electrical equipment. Our team then implemented low-cost airflow best practices, including blanking panels, sealing gaps in the racks and floor, and adding doors at the ends of aisles to better separate the hot and cold airstreams. We also installed valves in key CRAC units to stop water flow when the units were switched off, reducing flow through the heat rejection system, cutting pump and cooling tower utilization, and further increasing energy savings.
In addition, we installed sensors to verify that enough cool air was reaching the tenants’ servers, an inexpensive way to confirm that cold air was being used effectively. Airflow data is an important dataset that many facilities neglect.
Once the system was in place, we used logic in the new monitoring system to make smart decisions about which CRAC units to turn off, based on server load. Temperatures became optimal throughout the facility. In fact, many sections of the data center actually got colder even as CRAC units were shut off, because the units left running distributed air far more efficiently. There was less strain on heat rejection equipment, and back-of-house costs were lower. This is the type of result we look for. In the end, through this project alone, monitoring both the electrical and cooling systems, along with the other measures we implemented, allowed the facility to drop its PUE from 2.1 to 1.5, roughly a 29 percent reduction in total facility energy at constant IT load, even while the site continued to add tenants and new space.
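As an illustration of that control logic, here is a heavily simplified sketch; the production system was more involved, and every threshold, reading, and function name below is hypothetical. It stages CRAC units using the server inlet temperatures from the sensors described above, keyed to ASHRAE’s recommended inlet range of roughly 18-27°C.

```python
ASHRAE_MIN_INLET_C = 18.0  # ASHRAE recommended inlet range, roughly
ASHRAE_MAX_INLET_C = 27.0

def crac_units_to_run(inlet_temps_c, running_units, min_units=2):
    """Decide how many CRAC units to keep on, given rack inlet temps.

    inlet_temps_c: server inlet temperatures from the airflow sensors
    running_units: number of CRAC units currently on
    min_units:     redundancy floor; never shed below this
    """
    hottest = max(inlet_temps_c)
    if hottest > ASHRAE_MAX_INLET_C:
        return running_units + 1            # a rack is too hot: add cooling
    if hottest < ASHRAE_MIN_INLET_C - 2.0:  # generous margin before shedding
        return max(min_units, running_units - 1)
    return running_units                    # comfortably in range: hold

# All inlets are overcooled here, so one unit can be shed safely.
print(crac_units_to_run([14.5, 15.2, 15.8], running_units=6))  # -> 5
```

A real controller would also rotate which units run, respect redundancy requirements, and ramp rather than step, but the core idea of cross-checking cooling output against what the servers actually see is the same.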
Too Much of a Good Thing
As the industry continues to move toward IoT, data centers increasingly contain installed equipment that can communicate its own operating and utilization data. But although “smart” equipment can gather information about itself, it doesn’t normally communicate with other equipment and rarely takes automatic steps to regulate usage based on information from other systems. As a result, operators have to sift through massive amounts of data from numerous pieces of equipment and hunt for the trends needed to make improvements. The key takeaway here is to implement a centralized system that can make sense of all the information and, ideally, offer recommendations or even take steps to make improvements.
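One common pattern for such a system, sketched below with illustrative names and stubbed data rather than any real API, is a set of per-system adapters that normalize readings into one stream, so a single rules layer can reason across power and cooling at once.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Reading:
    system: str   # "power", "cooling", "generator", ...
    metric: str
    value: float
    unit: str

# Stub adapters; real ones would poll BMS, PDU, or CRAC interfaces.
def poll_pdus() -> List[Reading]:
    return [Reading("power", "it_load", 480.0, "kW")]

def poll_cracs() -> List[Reading]:
    return [Reading("cooling", "crac_total", 310.0, "kW")]

ADAPTERS: List[Callable[[], List[Reading]]] = [poll_pdus, poll_cracs]

def collect() -> Dict[str, float]:
    """Flatten every subsystem's readings into one namespaced snapshot."""
    return {f"{r.system}.{r.metric}": r.value
            for adapter in ADAPTERS for r in adapter()}

snapshot = collect()
# A single rules layer can now see across systems and recommend action.
if snapshot["cooling.crac_total"] / snapshot["power.it_load"] > 0.6:
    print("Recommendation: review CRAC staging; cooling overhead is high.")
```

The same snapshot can feed automatic actions, like the CRAC staging logic sketched earlier, rather than recommendations alone.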
Even with these advances, a great deal of “dumb” equipment has yet to evolve. At the facility in the case study above, my team designed a program of sensors and monitoring devices for exactly this kind of legacy equipment, and it proved a vital piece of the puzzle as we lowered operational costs. Systems like these take time and effort to design and install, but the savings are well worth the wait.