CERN's New Data Center for the Large Hadron Collider 'beauty' Project
The modular 3MW facility will use free air cooling and is expected to run at PUE 1.1. It will serve a single project, which aims to find out what happened after the Big Bang.
January 12, 2019
The folks involved in CERN's Large Hadron Collider "beauty" project (LHCb) have already been testing the facility that will house a new High-Performance Computing data center near Geneva, well before racks and equipment get installed in February. Eventually, the data center will be used for some serious number crunching in the LHCb "experiment," which seeks to glean insight into what happened immediately after the Big Bang. Why did matter survive, and how was the universe created?
In case you don't know, the sixteen-mile-long underground collider is not only the largest and most powerful particle collider in the world, it's also the largest machine on the planet. Basically, it allows scientists to observe what happens when some of the tiniest particles, such as protons, smash into each other at nearly the speed of light. In a typical run, these collisions happen at a rate of 2.1 billion per second, and each collision generates particles that can decay into even more particles.
When operational, the equipment in CERN's new data center will process data from detectors inside the collider while LHCb experiments are underway. The purpose is to quickly reduce the amount of data that needs to be stored elsewhere. The center will have the capacity to store at least 30PB, but data will only remain on site for a week or two before being shipped off to another facility.
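For a rough sense of scale, the 30PB buffer and the one-to-two-week retention window quoted above imply a sustained ingest rate on the order of tens of gigabytes per second. The sketch below is back-of-the-envelope arithmetic based only on the figures in this article, not on numbers from CERN:

```python
# Back-of-the-envelope estimate: the sustained write rate a 30PB buffer
# implies if data is only kept on site for one to two weeks.
# Illustrative arithmetic from the article's figures, not CERN numbers.

BUFFER_PB = 30                # stated minimum storage capacity
SECONDS_PER_DAY = 86_400

for retention_days in (7, 14):
    # 1 PB = 1e6 GB (decimal units)
    rate_gb_per_s = BUFFER_PB * 1e6 / (retention_days * SECONDS_PER_DAY)
    print(f"{retention_days:>2}-day retention -> ~{rate_gb_per_s:.0f} GB/s sustained")
    # prints roughly 50 GB/s for 7 days, 25 GB/s for 14 days
```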
"This is only one out of five significant data centers we run at CERN," Niko Neufeld, deputy project leader at CERN, the organization that oversees the collider, told Data Center Knowledge. Three other "experiments" collect data from the collider besides LHCb, each of them with its own data center, he explained. There's also a central data center for processing data that's not coming from the accelerator directly -- things like physics analysis and other IT needs. The new LHCb data center "is now being built because of the next generation of experiments which require much more IT."
There's much that makes the new data center interesting, starting with the old data center it will be replacing.
Moving From a Cave to the Surface
The original data center for the project, which remains in use until the new one becomes fully operational (expected toward the end of the year), sits underground, 300 feet below the surface. It was built there not because of some special technical need of the collider but simply because there was space available in a facility used by a previous experiment.
"There was significant space, with power and cooling water available underground, so that was sort of there," Neufeld explained. "Even though it is inconvenient to run a data center underground, it was simply cheaper, because it required only some adaptations."
That relatively small, 1,615-square-foot, 600kW facility uses rear-door liquid cooling, taking in water chilled to 63°F. One big drawback of being in the cave is the expense of piping heat away from the equipment, since there's no outside air to dissipate it into.
The location's main advantage, according to Neufeld, is its proximity to the collider. Only 150 feet of cable is needed to transport the LHC data. The new data center, a much larger, 32,500-square-foot, 3MW facility, will sit on the surface and require a fiber optic connection.
For cost and schedule reasons, Neufeld's team decided to use a prefabricated data center. It's built out of six modules, each housing a row of IT racks and supplying 500kW of cooling capacity using indirect free air cooling with adiabatic assist.
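As a quick sanity check (my arithmetic, not a figure from CERN), six modules at 500kW of cooling capacity each line up with the facility's overall 3MW rating:

```python
# Simple consistency check of the module figures quoted in the article.
modules = 6
cooling_per_module_kw = 500

total_cooling_mw = modules * cooling_per_module_kw / 1000
print(f"Total cooling capacity: {total_cooling_mw} MW")  # 3.0 MW, matching the 3MW facility
```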
He said he expects the modular design to help keep the data center's operational cost to a minimum across the board. Cooling the modules, for example, is more efficient because there's no extra space to fill with cool air.
The data center will also be able to leverage the area's relatively mild summers to further cut cooling costs. Its free-cooling system will only require the adiabatic water cooling assist for about 20 to 30 days per year. That will be enough to keep the data center's power usage effectiveness (PUE) at 1.1, which means more than 90 percent of the power consumed by the facility will be driving IT, he said.
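To unpack that claim (simple arithmetic, not additional data from CERN): PUE is total facility power divided by IT power, so a PUE of 1.1 puts the IT share at 1/1.1, or roughly 91 percent.

```python
# PUE = total facility power / IT power, so the IT share of total power
# is 1 / PUE. At the targeted PUE of 1.1 that works out to just over 90%.

pue = 1.1
it_share = 1 / pue
overhead_share = 1 - it_share   # cooling, power distribution losses, etc.

print(f"IT load share:  {it_share:.1%}")        # ~90.9%
print(f"Overhead share: {overhead_share:.1%}")  # ~9.1%
```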
If It Loses Utility Power, It Goes Dark
Perhaps one of the most interesting features of the data center is that it has no backup power -- not even backup batteries to keep the system running long enough to store any data in transit when a power outage occurs.
"The power distribution at CERN has been traditionally very reliable, so there have been very few major outages," Neufeld said. "I think one or two in the last five or six years that I remember."
He pointed out that the collider itself isn't on safe power either.
"When the accelerator is down, there is no input data, and there is nothing urgent to be processed," he said. "We have a small server room, not in this facility directly, where we have some critical infrastructure, like shared file systems and stuff like that, which doesn't like to go down. This is a very small room, with 80kW, which is on the battery and diesel backup."
The loss of any unsaved data in a power outage, "while unwelcome, is not an absolute disaster," Neufeld said.
The modules that will house the data center began arriving and being installed in November. After initial testing, equipment racks and servers will start being put in place in February, with the fiber optic cable expected to be connected in March. The data center is scheduled to become fully operational by the end of the year, after the final two modules are installed in October.