
Optimizing Data Center Capital Costs with Intelligent IT

While much of the capital cost of a data center is in the IT hardware, the infrastructure alone can cost from $10M to $20M per megawatt, depending on the design and construction of the data center, writes Winston Saunders of Intel. A significant part of that cost (10 to 20 percent) is the back-up generators, which are put in place as a last line of defense against power failure. The column discusses how power management and data center management tools can be used to reduce the need for generators, thus reducing capital costs.

Industry Perspectives

April 20, 2012

4 Min Read

Winston Saunders has worked at Intel for nearly two decades in both manufacturing and product roles. He has worked in his current role leading server and data center efficiency initiatives since 2006. Winston is a graduate of UC Berkeley and the University of Washington. You can find him online at “Winston on Energy” on Twitter.


One attraction of Power Usage Effectiveness (PUE) is the clear distinction it draws between productive power use (IT) and non-productive overhead (everything else).
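For reference, PUE is simply the ratio of total facility power to the power delivered to IT equipment:

PUE = Total Facility Power / IT Equipment Power

A PUE of 1.5, for example, means that for every watt delivered to IT gear, another half watt goes to cooling, power distribution, and other overhead.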

We can extend the idea to other elements in the data center, like capital costs. While much of the capital cost of a data center is in the IT hardware, the infrastructure alone can cost from $10M to $20M per megawatt, depending on the design and construction of the data center. A significant part of that cost (10 to 20 percent) is the back-up generators, which are put in place as a last line of defense against power failure. It is capital that, in the best case, is never used!

Savings from Reduced Generator Capacity

Some companies have already started to innovate outside the data center infrastructure box when it comes to this cost. For example, Facebook’s recently announced data center in Sweden will forgo about 70 percent of back-up generator capability and instead rely on the very high electrical grid reliability available in that locality. No small breakthrough! Back-up generators can cost from $1M to $2M per megawatt to install and must be tested regularly, adding ongoing operational costs.

The rationale for getting rid of back-up generators makes sense. Along the lines of the PUE reasoning above, money spent on generators doesn’t directly contribute to the intended useful work of the data center. It is an ineffective expense in terms of the data center’s purpose.

A colleague here at Intel asked me the other day whether Intel’s Intelligent Power Node Manager technology could be used to help save on back-up generator capital costs and, if so, what the magnitude of the benefit would be. Interesting questions!

What Node Manager does, in colloquial terms, is coordinate system-level power to maximize system efficiency (performance) within a defined power envelope. In recent system measurements conducted here, servers with a maximum power draw of 402 watts could be throttled down to less than 120 watts, a dynamic range of more than 3.3:1.
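To make the idea concrete, here is a minimal sketch in Python of how a management console might set a fleet-wide cap during a utility event. The capping interface and the server count are hypothetical, not the actual Node Manager API; only the 402-watt and 120-watt endpoints come from the measurements above.

```python
# Illustrative sketch only: the capping interface shown here is
# hypothetical, not the real Node Manager API. The 402 W and 120 W
# endpoints are the measured values quoted in the text.

MAX_POWER_W = 402.0   # measured maximum server power
MIN_CAP_W = 120.0     # lowest enforceable per-server cap

def cap_for_event(available_backup_w: float, server_count: int) -> float:
    """Spread the available backup capacity evenly across servers,
    clamped to the range the hardware can actually enforce."""
    per_server = available_backup_w / server_count
    return max(MIN_CAP_W, min(MAX_POWER_W, per_server))

# Hypothetical example: 6.0 MW of server load, with only 60% of it
# covered by generators after a reduction in backup capacity.
servers = 10_000
print(f"cap per server: {cap_for_event(6_000_000 * 0.60, servers):.0f} W")
# -> 360 W, comfortably inside the 120-402 W dynamic range above
```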

Examples of Savings

Now imagine you have a 10 megawatt data center with a PUE of 1.5. That PUE implies about two-thirds of the load, or 6.6 megawatts, is IT gear. If 10 percent of the IT load is storage and network (a typical number), then the server load is about 6.0 megawatts.
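The same arithmetic in a few lines of Python:

```python
# Load breakdown from the paragraph above.
total_mw = 10.0
pue = 1.5

it_mw = total_mw / pue      # ~6.7 MW; rounded to 6.6 MW in the text
server_mw = it_mw * 0.90    # ~6.0 MW after removing 10% storage/network
print(f"IT load: {it_mw:.1f} MW, server load: {server_mw:.1f} MW")
```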

In the figure below, I illustrate a usage of Node Manager in a scenario that reduces back-up generators by 40 percent. By using it to throttle down server power and performance intelligently during a power event instead of relying on full-power backup capability, capital costs could be reduced by about $4 million, and perhaps more if the power use of the network and non-IT loads drops indirectly along with the server load.
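A back-of-envelope version of that estimate, using the low end of the $1M-to-$2M-per-megawatt generator cost quoted earlier:

```python
# Capital savings from forgoing 40% of back-up generator capacity.
total_mw = 10.0
gen_cost_per_mw = 1_000_000   # low end of the $1M-$2M/MW range above
reduction = 0.40

mw_avoided = total_mw * reduction        # 4.0 MW of generators not built
savings = mw_avoided * gen_cost_per_mw   # $4.0M (up to $8M at $2M/MW)
print(f"${savings / 1e6:.1f}M saved")
```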

Figure: Intel Intelligent Power Node Manager generator-reduction scenario. Graphic courtesy of Intel.

What interested me most about this is the value of the power reduction. In this estimate, the capital reduction for the data center translates to almost $400 per server.
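For scale, $4 million at almost $400 per server implies a fleet on the order of 10,000 servers ($4,000,000 / $400 = 10,000); against the 6.0 megawatt server load above, that works out to roughly 600 watts of provisioned capacity per server. Treat this as a hedged back-of-envelope figure rather than anything measured.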

Now, obviously, there is no free lunch: you do need servers equipped with Node Manager and a capable data center management console. In addition, performance is sacrificed when a data center’s power budget is limited during a power event. But, most important, the data center would not crash, and if revenue or business impacts could be mitigated with planning, this might be an option for the few times back-up generation is really needed.

Freeing Up Capital to Meet Business Needs

Perhaps, for instance, the money saved on generators could be spent to beef up server inventories or network links to other corporate data centers, increasing both resiliency and overall performance. Freeing up capital from contingency planning for core business needs is what effectiveness is all about.

Of course, this 40 percent reduction is not quite as good as what the folks at Facebook have achieved with their ultra-reliable grid in Sweden. On the other hand, this idea builds on that innovation and can be implemented essentially anywhere.

What both examples point to is the usefulness of thinking “outside the traditional data center infrastructure box.” In many ways, we are only beginning to see the level of innovation possible in the continued reduction of capital infrastructure costs for huge data centers.

