The Cost-Effective Way to Increase Data Center Capacity
One technique combines intelligent software with specialized power-control hardware, turning power into a resource that can be pooled dynamically across the whole data center.
October 22, 2018
Mark Adams is Senior Vice President of CUI.
Conventional power architectures can prevent large data centers from growing to meet demand while also maintaining redundancy and availability. This article explains how combining policy-based power management and dedicated power-control hardware can help them achieve both.
Smarter Management of Redundant Power
Global compute capability is changing: small, medium and large enterprises are shifting workloads off their own infrastructure and into the cloud, attracted by the OpEx cost model, flexibility and near-limitless room to grow. Consumers, too, are making increased use of the cloud (sometimes without realizing it), storing everything from emails to documents, photos and health data.
With growing worldwide demand, the vast networked data centers that constitute the ‘cloud’ are being pushed to their limits. Many operators are continually juggling resources to ensure they can provide for every customer’s needs.
The Cloud Power Challenge
Often, the scarcest resource is not server or storage capacity, which can be purchased relatively easily, but power. Increasing the amount of power in a data center can involve complex, expensive and time-consuming infrastructure upgrades.
Data center operators are looking to optimize the way they use power. More-efficient cooling and humidity-control systems are appearing, using outdoor air and rainwater, rather than traditional air conditioning. Modern servers are also more efficient, with low idle power consumption. As a result, large data centers can now achieve power usage effectiveness (PUE) of better than 1.2.
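PUE is simply total facility power divided by the power delivered to IT equipment, so a value near 1.0 means almost no overhead. A quick illustration (the kW figures are hypothetical):

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power."""
    return total_facility_kw / it_equipment_kw

# Hypothetical facility: 1,150 kW total draw, 1,000 kW of IT load
print(round(pue(1150, 1000), 2))  # 1.15 -- better than 1.2
```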
In the specific case of 2N data centers, there is a new way to get more from the existing power topology, thereby reliably creating headroom for additional racks (and hence revenue). To understand it, let us look briefly at 2N redundancy.
2N Redundancy in Data Centers
A traditional 2N data center utilizes a pair of uninterruptible power supplies (UPSes) (Figure 1). Each UPS must be capable of powering all the workloads in the data center on its own. However, for much of the time, both UPSes will be operational, meaning neither runs at more than 50% capacity. Hence at least half of each UPS's capacity is called upon only rarely – in emergencies or during planned maintenance.
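The arithmetic behind this reservation can be sketched as follows (the capacity and load figures are hypothetical):

```python
# Hypothetical 2N facility: two UPSes, each able to carry the full load alone.
ups_capacity_kw = 1000.0   # rating of each UPS
critical_load_kw = 800.0   # total 2N workload

# With both UPSes healthy, the load is shared between them.
share_per_ups = critical_load_kw / 2           # 400 kW each
utilization = share_per_ups / ups_capacity_kw  # 0.40 -- never above 50%

# Capacity held in reserve purely for failover:
reserved_kw = 2 * ups_capacity_kw - critical_load_kw
print(f"Each UPS runs at {utilization:.0%}; {reserved_kw:.0f} kW sits idle "
      "except during an outage or maintenance.")
```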
2N redundancy also assumes that everything in the data center is mission-critical, and that it is mission-critical 24/7. In reality, this usually is not the case: test, development and other non-production environments, for example, do not generally require high availability, or even need to run at all times. Similarly, production systems might only require high availability at certain times.
Providing full 2N power redundancy all the time and across the board, as part of a one-size-fits-all approach, prevents significant portions of the data center’s power from being used elsewhere.
Consequently, even if there is the physical space to install additional servers, a data center may not be able to power them.
Smarter Power Management
A pioneering technique, known as software-defined power (SDP), combines intelligent software with specialized power-control hardware to turn power into a resource that can be pooled dynamically across the whole data center.
For operators, this is incredibly significant. They can reliably tap into power that was previously locked away for redundancy purposes, thereby creating headroom for additional, non-critical workloads. Crucially, they can do this without compromising the availability of mission-critical 2N workloads, even when one UPS is unavailable.
Here's How It Works
The SDP software collects data from the power-control hardware in each rack every second. It processes this data using predictive analytics and machine learning and, with its holistic view of the data center’s overall power requirements, sends out device-specific power policies for each control unit.
These policies are sent to the control hardware every 10 seconds and contain instructions on what to do if one of the UPSes becomes unavailable. Should this happen, the power-control hardware automatically acts to ensure that the 2N racks remain operational and the non-critical ones are shut down. This shutdown can be immediate, or follow a pre-defined hold-up period that allows the workloads to be closed or migrated cleanly.
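A minimal sketch of the per-rack policy decision described above; the structure and names here are illustrative assumptions, not the vendor's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class RackPolicy:
    rack_id: str
    critical: bool          # True for 2N racks
    hold_up_seconds: float  # grace period before shutdown (0 = immediate)

def on_ups_failure(policies):
    """Return the action each rack's control unit takes when one UPS fails."""
    actions = {}
    for p in policies:
        if p.critical:
            actions[p.rack_id] = "keep-running"   # 2N racks stay up
        elif p.hold_up_seconds == 0:
            actions[p.rack_id] = "shutdown-now"
        else:
            # hold-up period lets workloads be closed or migrated first
            actions[p.rack_id] = f"shutdown-after-{p.hold_up_seconds:g}s"
    return actions

policies = [
    RackPolicy("rack-01", critical=True,  hold_up_seconds=0),
    RackPolicy("rack-07", critical=False, hold_up_seconds=0),
    RackPolicy("rack-12", critical=False, hold_up_seconds=120),
]
print(on_ups_failure(policies))
# {'rack-01': 'keep-running', 'rack-07': 'shutdown-now',
#  'rack-12': 'shutdown-after-120s'}
```

In the real system these policies would be refreshed from the central software every 10 seconds, so the control unit always acts on a recent, holistic view of the facility.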
Peak-shaving and dynamic redundancy are two of the most powerful techniques used.
Peak-shaving
Peak-shaving charges batteries during times of low power usage and draws on them to power peak loads. In this context, peak-shaving can protect the second UPS when the first is out of action, by ensuring it is never pushed beyond capacity. It temporarily provides extra power to the system, either to cover the initial hold-up time or to absorb short-term increases in demand from the 2N racks.
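One control step of peak-shaving might look like the sketch below: discharge the battery whenever demand would push the surviving UPS past its limit, and recharge during quiet periods. The function and its parameters are hypothetical, not a vendor API:

```python
def peak_shave_step(demand_kw, ups_limit_kw, battery_kw, battery_kwh,
                    dt_h=1 / 3600):
    """One control tick: keep UPS draw at or below its limit.

    Returns (ups_draw_kw, new_battery_kwh)."""
    if demand_kw > ups_limit_kw and battery_kwh > 0:
        # Shave the peak: battery supplies the excess,
        # capped by its power rating and remaining energy.
        discharge = min(demand_kw - ups_limit_kw, battery_kw,
                        battery_kwh / dt_h)
        return demand_kw - discharge, battery_kwh - discharge * dt_h
    if demand_kw < ups_limit_kw:
        # Quiet period: recharge from spare UPS capacity.
        charge = min(ups_limit_kw - demand_kw, battery_kw)
        return demand_kw + charge, battery_kwh + charge * dt_h
    return demand_kw, battery_kwh

# 1,100 kW of demand against a 1,000 kW limit: the battery covers 100 kW,
# so the UPS never sees more than its rated load.
draw, remaining = peak_shave_step(1100, 1000, battery_kw=200, battery_kwh=10.0)
```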
Dynamic Redundancy
Dynamic redundancy tackles the assumption that everything in the data center requires high-availability, by differentiating between critical (2N) and non-critical workloads. With dynamic redundancy, when both UPSes are operational, a large portion of the redundant capacity can be made available for non-critical purposes.
In this situation, as soon as one UPS becomes unavailable, the power-control hardware looks at the latest policy it has received from the control software and acts to ensure the 2N workloads remain active.
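The capacity calculation implied by dynamic redundancy can be sketched as follows; the safety margin and figures are illustrative assumptions:

```python
def available_for_noncritical(ups_capacity_kw, critical_load_kw,
                              both_ups_healthy, safety_margin_kw=50.0):
    """Capacity that can be lent to non-critical racks under dynamic redundancy.

    While both UPSes are healthy, the pair can carry more than the 2N load,
    so the redundant headroom is usable. The moment one UPS fails, the
    non-critical racks are shed and the survivor carries only the 2N load.
    """
    if not both_ups_healthy:
        return 0.0  # policy: non-critical racks are shut down
    headroom = 2 * ups_capacity_kw - critical_load_kw
    return max(headroom - safety_margin_kw, 0.0)

print(available_for_noncritical(1000, 800, both_ups_healthy=True))   # 1150.0
print(available_for_noncritical(1000, 800, both_ups_healthy=False))  # 0.0
```

The design choice is that availability is guaranteed only for the 2N workloads; everything else borrows capacity that can be reclaimed within one policy cycle.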
Opinions expressed in the article above do not necessarily reflect the opinions of Data Center Knowledge and Informa.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating.