Data Center Optimization: How to Do More Without More Money
Operating efficiently and effectively in the land of "more" without more money means optimization at all levels: hardware, software, and even policies and procedures.
June 2, 2017
Data centers are pushing the boundaries of the possible, using new paradigms to operate efficiently in an environment that continually demands more power, more storage, more compute capacity… more everything. Operating efficiently and effectively in the land of "more" without more money requires increased data center optimization at all levels, including hardware and software, and even policies and procedures.
The Existing Environment
Although cloud computing, virtualization and hosted data centers are popular, most organizations still have at least part of their compute capacity in-house. According to a 451 Research survey of 1,200 IT professionals, 83 percent of North American enterprises maintain their own data centers. Only 17 percent have moved all IT operations to the cloud, and 49 percent use a hybrid model that integrates cloud or colocation hosts into their data center operations.
The same study says most data center budgets have remained stable, although the heavily regulated healthcare and finance sectors are increasing funding throughout data center operations. Among enterprises with growing budgets, most are investing in upgrades or retrofits to enable data center optimization and to support increased density.
At the same time, server density has increased. Since the mid-1990s, when IBM AS/400 mini-computers were popular and many of today's data centers were designed, server density has increased 84-fold. Power needs have risen from about 100 watts per square foot for many legacy computers to about 600 watts per square foot for cutting-edge blade servers. As server density increases and the data center footprint shrinks, any gains may be taken up by the additional air handling and power equipment, including uninterruptible power supplies and power generators. In fact, data center energy usage is expected to increase by 81 percent by 2020, according to CIO magazine.
Contracts and Procedures
To operate in such an environment, managers can look for savings from a variety of sources. For example, the Natural Resources Defense Council recommends that data centers "review their internal organizational structure and external contractual arrangements and ensure that incentives are aligned to provide financial rewards for efficiency best practices."
John Miecielica, product management principal for data center optimization specialist TeamQuest, advises managers to look at risk and efficiency when evaluating contractual relationships. "External agreements are about risks, such as ensuring you have the capacity to meet service level agreements. Review them periodically to ensure they remain efficient.
"For example, when Lady Gaga promoted her single on Amazon in 2011, it crashed the servers. She had to halt the promotion until Amazon added capacity. As another example, when Healthcare.gov went live in 2013, the system crashed and was down for six months," Miecielica recalls.
Right-Sizing
Identifying and decommissioning unused servers is often a challenge during a data center optimization project, as is right-sizing provisioning.
Virtualization makes it easy to spin up resources as needed, but it also makes tracking those resources harder. The result is that unused servers may be running because no one is certain they're not being used. A study by the Natural Resources Defense Council and Anthesis reports that up to 30 percent of servers are unused, but still running.
Likewise, a system may be provisioned with four CPUs but is really only using two. Such situations tie up compute capacity that may be needed by other machines, Miecielica explains. "Right-size your environment. Whether it's physical or virtual is irrelevant," Miecielica says. "Evaluate the risk of running out of capacity, provisions to meet that risk and resources that may be repurposed to avoid that risk."
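As a rough illustration of that advice, the sketch below flags near-idle machines and over-provisioned virtual CPUs from utilization data. The inventory, utilization figures and headroom factor are hypothetical placeholders; in practice the numbers would come from a hypervisor API or a monitoring export.

```python
# A minimal right-sizing sketch. All VM names and utilization figures are
# invented for illustration; real data would come from monitoring tools.
from dataclasses import dataclass

@dataclass
class VM:
    name: str
    provisioned_vcpus: int
    peak_cpu_percent: float   # peak utilization over the review window
    avg_cpu_percent: float    # average utilization over the review window

inventory = [
    VM("web-01", 4, 38.0, 12.0),
    VM("batch-07", 8, 95.0, 61.0),
    VM("legacy-app", 4, 2.5, 0.4),   # likely unused, but still running
]

HEADROOM = 1.25  # keep 25% spare capacity to cover the risk of usage spikes

for vm in inventory:
    # Estimate how many vCPUs the observed peak actually requires.
    needed = max(1, round(vm.provisioned_vcpus * (vm.peak_cpu_percent / 100) * HEADROOM))
    if vm.avg_cpu_percent < 1.0:
        print(f"{vm.name}: near-idle -- candidate for decommissioning")
    elif needed < vm.provisioned_vcpus:
        print(f"{vm.name}: right-size from {vm.provisioned_vcpus} to {needed} vCPUs")
    else:
        print(f"{vm.name}: leave as-is -- no spare capacity to reclaim")
```

The 25 percent headroom factor is an arbitrary example of the trade-off Miecielica describes: reclaiming capacity while still provisioning against the risk of running out.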
Along with right-sizing hardware, Miecielica also advises scrutinizing applications to ensure they're written efficiently. One company, for example, habitually upgraded its hardware but found it could delay those upgrades by optimizing the applications.
A similar principle extends to storage. While data deduplication (removing duplicate files) is widely used, overcrowded storage remains an issue for small to medium-sized enterprises (SMEs); Miecielica says it is one of their top two issues, along with security. Deduplication can free much-needed storage space.
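To make the idea concrete, here is a minimal sketch of file-level deduplication that groups files by content hash. The directory path is a placeholder, and real deduplication is typically block-level and handled by the storage system itself.

```python
# Group files by SHA-256 of their contents and report duplicate sets.
import hashlib
from collections import defaultdict
from pathlib import Path

def file_hash(path: Path) -> str:
    """SHA-256 of a file's contents, read in chunks to keep memory use flat."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def find_duplicates(root: str) -> dict[str, list[Path]]:
    """Return only the hash groups that contain more than one file."""
    groups: dict[str, list[Path]] = defaultdict(list)
    for path in Path(root).rglob("*"):
        if path.is_file():
            groups[file_hash(path)].append(path)
    return {digest: paths for digest, paths in groups.items() if len(paths) > 1}

if __name__ == "__main__":
    # "/srv/shared" is a hypothetical mount; point this at the volume under review.
    for digest, paths in find_duplicates("/srv/shared").items():
        # The first copy stays; the rest are candidates for removal or hard-linking.
        print(f"{len(paths)} copies ({digest[:12]}): {[str(p) for p in paths]}")
```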
Monitor Everything
Another major undertaking managers should consider during data center optimization is instituting robust monitoring for both infrastructure and cloud computing.
Data center infrastructure management (DCIM) systems, for example, enable management decisions to be made based upon actual usage rather than on manufacturers' specifications.
In addition to monitoring, managers also need analytics in place to accurately predict and resolve problems. "DCIM and server monitoring, coupled with analytics that link the two, can be very powerful," Miecielica says. The analytics help managers see, for instance, that moving a workload from X to Y can improve efficiency, but that moving it from X to Z can be even more efficient.
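A toy sketch of that kind of linkage, with made-up numbers: it combines a DCIM view (rack power utilization) with a monitoring view (CPU headroom per host) to score two candidate destinations for a workload. The scoring formula is an illustrative assumption, not a TeamQuest method.

```python
# Combine DCIM data (rack power draw) with server monitoring (CPU headroom)
# to compare candidate hosts for a workload move. All figures are hypothetical.

# DCIM view: power draw per rack as a fraction of its rated capacity.
rack_power_utilization = {"rack-Y": 0.55, "rack-Z": 0.40}

# Monitoring view: average unused CPU capacity on each candidate host.
host_cpu_headroom = {"host-Y": 0.30, "host-Z": 0.45}
host_rack = {"host-Y": "rack-Y", "host-Z": "rack-Z"}

def placement_score(host: str) -> float:
    """Higher is better: plenty of CPU headroom on a lightly loaded rack."""
    rack = host_rack[host]
    return host_cpu_headroom[host] * (1 - rack_power_utilization[rack])

for host in sorted(host_cpu_headroom, key=placement_score, reverse=True):
    print(f"{host}: score {placement_score(host):.2f}")

best = max(host_cpu_headroom, key=placement_score)
print(f"Move the workload to {best}")
```

With these illustrative figures, host-Z scores higher than host-Y, mirroring the example above in which moving the workload to Z is the more efficient choice.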
Rather than looking at the data center only as a collection of individual systems to be optimized, Miecielica advises also looking at the data center holistically. "Systems don't operate in isolation. They are part of a comprehensive package." Viewed this way, synergies can be identified that may yield additional data center optimization opportunities.
Data center optimization, clearly, extends beyond hardware to become a system-wide activity. It is the key to providing more power, more capacity and more storage without requiring more money.