Seeking Efficiency: The Data Center Holy Grail
Efficiency is often the "holy grail" for the data center industry: it is a key factor that all data center managers seek, whether they are squeezing out a bit of power by making server fans variable speed or making bigger gains at the power and cabinet level. Two session leaders at the upcoming Data Center World event speak to different aspects of driving efficiency in the data center.
January 29, 2015
All types of data center professionals -- from the academic, government, enterprise, and service provider sectors -- can relate to the relentless pursuit of greater efficiency. As pressure to reduce costs continues, one way to shave expensive power bills is to "do more with less": become more efficient at doing the same amount of work and storing the same amount of data.
In advance of the spring Data Center World Global Conference, Data Center Knowledge had the opportunity to discuss efficiency issues with two conference speakers: Scott Milliken, computer facility manager at Oak Ridge National Lab (ORNL), and Chris Crosby, founder and CEO of Compass Datacenters.
ORNL is a multi-program science and technology national laboratory managed for the U.S. Department of Energy (DOE). Compass Datacenters is known for building natural disaster-resistant, Tier III-certified, LEED Gold, dedicated data centers where customers need them.
Moving Toward Efficiency
"I plan to talk about process, policy and design in the data center," said Milliken, adding that he will discuss what was changed to address the lab's pain points. Although some of the world's fastest supercomputers live at Oak Ridge National Lab, they are not what puts pressure on Milliken and his team. Rather, it is the legacy commodity hardware supported in the same facility, occupying 50 percent of the floor space, that has challenged the data center's progress toward energy efficiency.
“When I came to ORNL five years ago, it had the number one supercomputer in the world. It was impressive. The supercomputer comes with engineering drawings where the piping goes, etc.,” he said. “It was a top-notch managed facility, with focus on supercomputing, but supercomputing is only half of our floor space. The rest of the facility has typical commodity equipment.”
Therein lies the rub: ORNL's commodity systems were disparate, with no overall management. "We have commodity equipment owned by different people," he added. "So it has organically grown, like the Wild, Wild West, with no standards or documentation." Milliken inherited a mix of equipment owned by administration and by different departments, including research departments with their own workgroups and clusters. To achieve more efficiency, the data center team instituted standards, process and documentation.
Milliken explained that the "Wild West" situation is a common phenomenon, though the issues may be unique to each location. "Previously, we didn't have an overall management strategy. We didn't even have the same size cabinets. We had different cabinets, different power connections to everything," he said.
Scott Milliken, Oak Ridge National Lab.
Moving people away from a "Bring Your Own Equipment" approach and toward meeting specs called for a bit of strategy. "We said, if it meets our specs, then it's free to host the equipment here. If it doesn't, there will be a cost. People ask, 'What will it cost?' Then they quickly ask, 'What do I need to order?'"
Milliken added that because his customers are doing research or supporting researchers, they don't necessarily have particular equipment requirements; they just need compute cycles and space for storage.
“Our biggest pain point was electrical. That was the number one issue. There was a single source dependency so you had to schedule downtime,” he said. After electrical, the data center also had networking and cooling challenges to face. “We changed the point of view from an apartment manager to a business hotel manager. In a business hotel, you don’t bring your own furniture. You just bring your luggage and check in.”
Since making the changes, the facility's power usage effectiveness (PUE) has improved dramatically, according to Milliken. "We have seen a 30 percent increase in efficiency. In the new space the PUE is 1.12 or 1.13, compared with 1.4 or 1.5 in the old space. As we move more equipment to the new space, it gets more efficient."
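For readers unfamiliar with the metric, PUE is simply total facility power divided by IT equipment power, so a lower number means less overhead spent on cooling, power distribution and other non-IT loads. The sketch below shows how the PUE values Milliken cites translate into overhead power; the 1,000 kW IT load is an illustrative assumption, not an actual ORNL figure.

```python
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power usage effectiveness: total facility power / IT equipment power.

    1.0 is the theoretical floor (zero overhead); lower is better.
    """
    return total_facility_kw / it_load_kw

# Illustrative numbers chosen to match the PUE values quoted in the article,
# assuming a hypothetical 1,000 kW IT load.
old = pue(1450.0, 1000.0)  # old space: PUE 1.45
new = pue(1120.0, 1000.0)  # new space: PUE 1.12

# Overhead (non-IT) power per kW of IT load drops from 0.45 kW to 0.12 kW.
overhead_reduction = (old - 1.0) - (new - 1.0)
```

At that assumed load, the move from a 1.45 PUE to a 1.12 PUE means roughly 330 kW less overhead power for the same computing work.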
Cross Purposes: The Need to Come Out of Silos
Crosby, who is also giving two sessions at Data Center World, will present one on efficiency, where he plans to bring out the issues impeding greater efficiency, such as "siloed" thinking between different areas like IT and facilities.
"What the server industry has done around power efficiency has been the opposite of what's been done on the data center side," he said. "As we get more and more efficient on cooling, by raising temps and having a larger delta (difference between intake and outlet temps), this facilitates the mechanical cooling equipment being more efficient. At the same time, servers are now being designed with variable speed fans, and there is less delta between the inlet and outlet temperatures."
These efforts, each of which can defeat the other's move toward efficiency, are the "unintended consequences of operating in a vacuum," said Crosby. "Neither understands what the other is doing."
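The tradeoff Crosby describes can be made concrete with the standard sensible-heat rule of thumb for air cooling, q [BTU/hr] = 1.08 × CFM × ΔT [°F]: the airflow needed to carry away a given IT load falls as the intake/outlet temperature difference grows. The cabinet load and delta-T figures below are illustrative assumptions, not numbers from the article.

```python
def cfm_required(load_kw: float, delta_t_f: float) -> float:
    """Airflow (CFM) needed to remove load_kw of heat at a given delta T (F).

    Uses the sensible-heat rule of thumb q = 1.08 * CFM * deltaT,
    with 1 kW = 3,412 BTU/hr.
    """
    btu_per_hr = load_kw * 3412.0
    return btu_per_hr / (1.08 * delta_t_f)

# A hypothetical 10 kW cabinet: halving the delta T doubles the airflow
# the cooling system (and server fans) must move.
narrow = cfm_required(10.0, 15.0)  # narrow 15F delta: ~2,106 CFM
wide = cfm_required(10.0, 30.0)    # wide 30F delta:   ~1,053 CFM
```

Because fan power grows much faster than linearly with airflow, server designs that shrink the delta T can quietly raise the airflow, and hence the energy, that the facility's cooling plant must supply, which is exactly the cross-purposes problem Crosby points to.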
Crosby added, “I am taking the long view. Most data centers are using a mix of legacy equipment and new equipment. They are not following a technology fad.” Unlike the examples frequently cited of Facebook or eBay, not every data center has the benefit of a completely homogeneous environment.
Chris Crosby, founder and CEO, Compass Datacenters.
Data center managers have to be careful when applying the lessons of others, Crosby said. “They have to be sure the others are following the assumptions they are following.”
For example, with open hardware, where there is no case on the server equipment, the assumptions at the outset were specific to Facebook's facility and workforce. "The origin of this was that Facebook had such scale that it didn't make sense for techs to remove all the cases to work on the servers," Crosby noted. "It was not about server efficiency at all."
In another session, Crosby will discuss the worker safety implications of high-voltage electrical equipment. Arc flash, an electrical explosion that can result in damage, injury or death, is a serious issue for the data center industry. For more details, see this DCK post: Crosby: Weak Commissioning Poses Risks to Reliability, Safety
To learn more about designing for, and increasing, efficiencies in the data center, attend the sessions by Crosby and Milliken at the spring Data Center World Global Conference in Las Vegas. Learn more and register at the Data Center World website.