Applying the Scientific Method in Data Center Management
Without experimentation, there’s no progress, and it doesn’t have to be expensive
March 9, 2016
Data center management isn't easy. Computing deployments change daily, airflows are complicated, misplaced incentives drive behavior that is at odds with company profits, and most enterprise data centers lag far behind their cloud-based peers in utilization and total cost of ownership.
One reason big inefficiencies persist in enterprise data centers is inattention to what I call the three pillars of modern data center management: tracking (measurement and inventory control), good operational procedures, and an understanding of physical principles and engineering constraints.
Another is that senior management is often unaware of the scope of these problems. For example, a recent study I conducted in collaboration with Anthesis and TSO Logic showed that 30 percent of servers in our data set were comatose: drawing electricity but delivering no useful information services. The result is tens of billions of dollars of wasted capital in enterprise data centers around the world, a figure that should alarm any C-level executive. Yet little progress has been made on comatose servers since the problem first surfaced years ago as the target of the Uptime Institute's Server Roundup.
Read more: $30B Worth of Idle Servers Sit in Data Centers
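To see why the waste runs into the tens of billions, consider a rough back-of-the-envelope calculation. The installed-base count and per-server capital cost below are illustrative assumptions chosen for round numbers, not figures from the study itself; only the 30 percent comatose share comes from our sample.

```python
# Illustrative back-of-the-envelope estimate of capital tied up in comatose
# servers. The installed base and per-server cost are assumptions; only the
# comatose fraction comes from the study's data set.

installed_servers = 30_000_000    # assumed enterprise servers worldwide
comatose_fraction = 0.30          # share found comatose in the study sample
capital_cost_per_server = 3_000   # assumed average acquisition cost, in USD

comatose_servers = installed_servers * comatose_fraction
wasted_capital = comatose_servers * capital_cost_per_server

print(f"Comatose servers: {comatose_servers:,.0f}")
print(f"Capital tied up:  ${wasted_capital / 1e9:.0f} billion")
# With these assumptions, roughly 9 million comatose servers and about $27
# billion in stranded capital, the same order of magnitude as the $30B cited above.
```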
One antidote to these problems is to bring the scientific method to data center management. That means creating hypotheses, experimenting to test them, and changing operational strategies accordingly, in an endless cycle of continuous improvement. Doing so isn’t always easy in the data center, because deploying equipment is expensive, and experimentation can be risky.
Is there a way to experiment at low risk and modest cost in data centers? Why yes, there is. As I’ve discussed elsewhere, calibrated models of the data center can be used to test the effects of different software deployments on airflow, temperatures, reliability, electricity use, and data center capacity. In fact, using such models is the only accurate way to assess the effects of potential changes in data center configuration on the things operators care about, because the systems are so complex.
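As a concrete illustration of what "experimenting in software" looks like, here is a minimal sketch in Python. The predict_inlet_temps function is a hypothetical stand-in for whatever interface your modeling tool exposes (real packages such as 6SigmaDC have their own workflows), and the rack loads and simple load-to-temperature relationship are made up purely to show the shape of the experiment.

```python
# A minimal sketch of testing a configuration change in a calibrated model
# before touching hardware. predict_inlet_temps is a toy placeholder for a
# real CFD/modeling tool's interface.

from statistics import mean

def predict_inlet_temps(rack_loads_kw, supply_temp_c=18.0):
    """Toy placeholder: pretend inlet temperature rises with rack load."""
    return [supply_temp_c + 0.9 * load for load in rack_loads_kw]

current_layout  = [4.0, 4.0, 12.0, 12.0, 4.0, 4.0]   # kW per rack today
proposed_layout = [6.7, 6.7, 6.6, 6.7, 6.7, 6.6]     # same total load, spread out

for name, layout in [("current", current_layout), ("proposed", proposed_layout)]:
    temps = predict_inlet_temps(layout)
    print(f"{name:8s}  max inlet {max(temps):.1f} C   mean inlet {mean(temps):.1f} C")

# If the model is well calibrated, a lower predicted peak inlet temperature is
# evidence that the proposed layout relieves hot spots, at no risk to production.
```

The point is not the toy numbers but the workflow: propose a change, ask the calibrated model what would happen, and only then decide whether to deploy it on the floor.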
Sign up for Jonathan Koomey’s online course, Modernizing Enterprise Data Centers for Fun and Profit. More details below.
Recently, scientists at the State University of New York at Binghamton created a calibrated model of a 41-rack data center to test how accurately one modeling package (6SigmaDC) could predict temperatures in that facility and to create a test bed for future experiments. The scientists can reconfigure the data center easily, without fear of disrupting mission-critical operations, because the setup is solely for testing. They can also run different workloads to see how those might affect energy use or reliability in the facility.
Read more: Three Ways to Get a Better Data Center Model
Most enterprise data centers don't have that flexibility. Facilities with sufficient scale can cordon off a section as a test bed, but for most enterprises such direct experimentation is impractical. What almost all of them can do is create a calibrated model of their facility and run the experiments in software.
What the Binghamton work shows is that experimenting in code is cheaper, easier, and less risky than deploying physical hardware, and just about as accurate (as long as the model is properly calibrated). In their initial test setup, the researchers reliably predicted temperatures with just a couple of outliers per rack, and those results could no doubt be improved with further calibration. They were also able to identify the physical reasons for the differences between modeling results and measurements, and once those causes are identified, the path to a more accurate model is clear.
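The calibration check itself can be as simple as comparing predicted and measured temperatures rack by rack and flagging the discrepancies worth investigating. The readings and the 2 degree C tolerance in this sketch are illustrative assumptions, not values from the Binghamton study.

```python
# Sketch of a calibration check: compare the model's predicted rack inlet
# temperatures with sensor measurements and flag outliers for investigation.
# The sample readings and the tolerance below are illustrative assumptions.

predicted = {"rack01": 22.1, "rack02": 23.4, "rack03": 27.9, "rack04": 24.0}
measured  = {"rack01": 21.8, "rack02": 23.1, "rack03": 24.6, "rack04": 24.5}

TOLERANCE_C = 2.0  # assumed acceptable model-vs-measurement gap

for rack in sorted(predicted):
    error = predicted[rack] - measured[rack]
    flag = "OUTLIER" if abs(error) > TOLERANCE_C else "ok"
    print(f"{rack}: predicted {predicted[rack]:.1f} C, "
          f"measured {measured[rack]:.1f} C, error {error:+.1f} C  [{flag}]")

# Racks flagged as outliers point to places where the model's assumptions
# (airflow paths, leakage, sensor placement) need another look, which is how
# discrepancies between model and measurement get traced to physical causes.
```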
We need more testing labs of this kind, applied to all modeling software used in data center management, to assess accuracy and improve best practices. But the high-level lesson is clear: enterprise data centers should use software to improve their operational performance, and the Binghamton work shows the way forward. IT is transforming the rest of the economy; why not use it to transform IT itself?
Sign up here for Jonathan Koomey's upcoming online course, Modernizing Enterprise Data Centers for Fun and Profit, which starts May 2.
The course teaches you how to turn your data centers into cost-reducing profit centers. It provides a road map for improving the business performance of information technology (IT) assets, drawing on real-world experiences from industry-leading companies like eBay and Google. For firms just beginning this journey, it describes concrete steps to get started down the path of higher efficiency, improved business agility, and increased profits from IT.