Clustered Systems Cools 100kW in Single Rack
December 12, 2011
The new blade server enclosure from Clustered Systems packs 16 blades into a 20-kilowatt chassis. Each blade is cooled with a cold plate containing a tubing system filled with liquid refrigerant.
A new blade server chassis featuring technology from Clustered Systems is promising to cool computing loads of up to 100 kilowatts in a single cabinet. The system, which breaks new ground in the effort to pack massive computing power into smaller spaces, will get its first test drive at the SLAC National Accelerator Laboratory in Palo Alto, Calif.
Average server racks in most data centers draw between 4 kilowatts (kW) and 8 kW of power. Cloud computing and high-performance computing (HPC) centers run denser infrastructure at 12 kW to 20 kW per rack and beyond. The new blade chassis promises to push the boundaries of high-density computing to 80 kW to 100 kW per rack.
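For scale, here is a rough sketch of how many racks an example 1 megawatt IT load would require at each of the densities cited above. The 1 MW load and the midpoint figures are illustrative assumptions, not vendor data.

```python
# Rough rack-count comparison for an assumed 1 MW IT load at the
# densities cited in the article. Midpoints and the 1 MW load are
# illustrative assumptions, not vendor figures.
import math

densities_kw = {
    "typical rack (4-8 kW, midpoint 6 kW)": 6,
    "dense cloud/HPC rack (12-20 kW, midpoint 16 kW)": 16,
    "Clustered Systems rack (80-100 kW, midpoint 90 kW)": 90,
}

it_load_kw = 1000  # assumed 1 MW of IT load

for label, kw in densities_kw.items():
    racks = math.ceil(it_load_kw / kw)
    print(f"{label}: {racks} racks")
```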
Perhaps most intriguing: the system requires only a 480V power source and a water supply, with no chillers and minimal cooling infrastructure.
"Changing the Data Center Dynamic"
"If we are successful, then the whole dynamic of data center deployment could change," said Phil Hughes, CEO and founder of Clustered Systems. "A user can put a system anywhere there is power. No special facilities are required. All investment can go into compute and not have to be shared with bricks and mortar."
The new blades build on Clustered Systems' success in a 2010 "chill-off" in which its technology proved more efficient than existing cooling products from major data center vendors.
The key to the system's density is a fanless cooling system using a cold plate, which contains a tubing system filled with liquid refrigerant. By removing fans and dedicating more power to processors, the Clustered Systems design can support unusual power densities.
The refrigerant system includes a pumping unit and heat exchanger, in which the refrigerant interacts with a water loop. In testing, the system has continued working with water temperatures as high as 78 degrees, meaning it can operate without a chiller, according to Hughes.
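As a rough illustration of why warm facility water can suffice, the sketch below estimates the water flow needed to carry a full 100 kW rack's heat through the refrigerant-to-water heat exchanger. The 10-degree Fahrenheit water temperature rise is an assumed design point for illustration, not a published specification.

```python
# Estimate the facility water flow needed to reject a full rack's heat
# through the refrigerant-to-water heat exchanger. The 10 degF water
# temperature rise is an assumed design point, not a published spec.

heat_load_w = 100_000        # 100 kW rack, per the article
cp_water = 4186              # J/(kg*K), specific heat of water
delta_t_k = 10 / 1.8         # assumed 10 degF rise, converted to kelvin

mass_flow_kg_s = heat_load_w / (cp_water * delta_t_k)
gpm = mass_flow_kg_s / 0.0631   # 1 US gal/min of water is about 0.0631 kg/s

print(f"Required water flow: {mass_flow_kg_s:.1f} kg/s (about {gpm:.0f} gpm)")
```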
"It is expected that the initial deployment will be cooled with tower water or with return water from upstream legacy cooling systems," he said.
Consortium of Partners
In 2010 Clustered Systems partnered with Emerson Network Power on the Liebert XDS system, which used cold plates on server trays in a 1U rackmount design. The installation at SLAC adapts the technology for blade servers, which can be challenging to cool due to the way they concentrate processing and power consumption.
Each chassis takes up 8 rack units and includes 16 blades, each with two cold plates for heat removal, and a 20 kW power distribution unit. Five of the 8U chassis can fit in a rack.
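Those per-chassis figures imply the rack-level totals cited earlier; a quick sanity check, assuming a standard 42U rack:

```python
# Sanity check of the rack-level totals implied by the chassis specs:
# five 8U chassis per rack, 16 blades and a 20 kW PDU per chassis.
# The 42U rack height is a common industry standard, assumed here.

blades_per_chassis = 16
chassis_height_u = 8
chassis_power_kw = 20
chassis_per_rack = 5

print("Blades per rack:", blades_per_chassis * chassis_per_rack)    # 80
print("Rack units used:", chassis_height_u * chassis_per_rack)      # 40 of 42U
print("Rack power (kW):", chassis_power_kw * chassis_per_rack)      # 100
```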
The blade server chassis was jointly developed by a group of companies including Clustered Systems, Intel, Emerson Network Power, Panduit, OSS (One Stop Systems, Inc.), SMART Modular and Inforce. The system development was funded by $3 million in grants from the U.S. Department of Energy and California Energy Commission.
"The efficiency of the Clustered Systems’ cooling system supports the greatest level of density and performance we’ve seen so far, and it has the legs to support several more product generations," said Dr. Stephen Wheat, Senior Director of Intel High Performance Computing.
System Overview
The cooling system uses Emerson Network Power's Liebert XD pumped refrigerant cooling products. Emerson also designed and built the system rack, which features a NetSure DC power system that converts 480V AC power to 380V DC power.
The 380V DC then passes to a Panduit unit in each enclosure that controls power delivery to each blade. "The concept of a power plane manufactured into the cabinet can be a source of improved efficiency in the data center," said Jack Tison, CTO of Panduit, Inc. The 380V DC is then converted to 12V DC at the chassis level.
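To illustrate how losses compound along that 480V AC to 380V DC to 12V DC path, here is a small sketch. The per-stage conversion efficiencies are assumptions chosen for illustration, not figures published by Emerson or Panduit.

```python
# Illustrative loss model for the power path: 480 V AC feed -> 380 V DC
# distribution -> 12 V DC at the chassis. The per-stage efficiencies are
# assumptions chosen for illustration, not published figures.

stages = [
    ("380 VDC -> 12 VDC chassis conversion", 0.95),        # assumed efficiency
    ("480 VAC -> 380 VDC rectification (NetSure)", 0.96),  # assumed efficiency
]

power_kw = 20.0   # one 20 kW chassis of IT load
for name, efficiency in stages:
    power_kw /= efficiency
    print(f"Power drawn upstream of {name}: {power_kw:.2f} kW")
```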
The dual-CPU modules use PCI Express as the system's network interconnect, which was developed by One Stop Systems. "All blades in a system communicate with each other at 40Gb/s over PCI Express (PCIe), increasing the overall performance of the system," said Stephen Cooper, CEO of OSS. "By utilizing the inherent functionality of PCIe over cable, we've designed switch blades and large 40-port switches that provide complete non-blocking communication at previously unheard of performance rates."
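A quick aggregate-bandwidth calculation based on the port and link figures quoted above, treating the switch as fully non-blocking as OSS describes:

```python
# Aggregate-bandwidth arithmetic for the PCIe fabric: 40 Gb/s per blade
# link and 40-port non-blocking switches, per the figures quoted above.

link_gbps = 40
switch_ports = 40
blades_per_rack = 16 * 5   # 16 blades per chassis, five chassis per rack

print("Per-switch aggregate:", link_gbps * switch_ports, "Gb/s")       # 1,600 Gb/s
print("Rack-wide blade links:", link_gbps * blades_per_rack, "Gb/s")   # 3,200 Gb/s
```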
In the chassis, each blade houses two motherboards, each with two processors from the forthcoming Intel Xeon E5 family. The motherboards were designed by Inforce. The DIMM memory modules were designed as a cooperative effort between SMART Modular and Clustered Systems. "These modules are a derivative of standard DIMMs and include an optimized heatsink design that creates an efficient and cost effective method to transfer heat from the DIMMs to the cold plate," said Mike Rubino, SMART Modular's VP of Engineering.
First Deployment
The first two racks are scheduled to be installed at SLAC within the next few months. For the cooling system, SLAC will use cooling water exiting from existing IT equipment or directly from a cooling tower.
"We are very excited to be chosen as the first deployment site," said Norm Ringold, Head of IT Operations and Infrastructure, SLAC National Accelerator Laboratory. "The estimated 50 Teraflops per rack will add considerably to our compute capacity,"
Clustered Systems has not announced detailed pricing, but says it will be "highly competitive with other POD and container-based systems." The company says a 3.2 megawatt data center using the new blade chassis could cost as little as $9.2 million, or about $3 million per megawatt of critical load. Industry experts say new data center construction costs about $10 million to $15 million per megawatt on average, with hyper-scale projects like those at Google and Yahoo slashing that to $5 million per megawatt.
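The per-megawatt figure follows directly from the quoted numbers; a quick check:

```python
# Cost-per-megawatt arithmetic from the figures quoted in the article.

clustered_cost_m = 9.2     # $ millions, quoted build cost
critical_load_mw = 3.2     # megawatts of critical load

print(f"Clustered Systems design: ${clustered_cost_m / critical_load_mw:.2f}M per MW")
print("Typical new build (per industry estimates): $10M-$15M per MW")
print("Hyper-scale builds (e.g., Google, Yahoo): about $5M per MW")
```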
Hughes says the target market will begin on a slightly smaller scale.
"The ideal customer will have need for HPC but no data center space to house it," he said. "Typical customers could be academic department heads with money for hardware but not for infrastructure, or high frequency traders wanting to maximize crunch power in a very small allocated space. Longer term, we also expect to address cloud computing, which has much the same requirements as HPC."
A closer look at a single blade from the Clustered Systems chassis.