How Practical is Dunking Servers in Mineral Oil Exactly?

Submerging servers in dielectric fluid? Stock up on paper towels.

Scott Fulton III, Contributor

May 1, 2017

Green Revolution Cooling’s oil submersion system for servers. [Courtesy GRC]

We’ve covered oil immersion cooling technology from Green Revolution Cooling (GRC) here on Data Center Knowledge since the turn of the decade.  It’s an astonishingly simple concept that somehow still leaves one reaching for a towel:  If you submerge your heat-producing racks in a fluid that absorbs heat some 1,200 times better than air by volume, then circulate that fluid with a radically ordinary pump, you prolong the active life of your servers while protecting them from damage and corrosion.
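Where does a multiplier like that come from?  A rough, back-of-the-envelope comparison of volumetric heat capacities, using typical textbook property values for air and light mineral oil rather than GRC’s own figures, lands in the same ballpark:

```python
# Rough comparison of volumetric heat capacity: light mineral oil vs. air.
# Property values are typical textbook figures, not GRC's ElectroSafe specs.

AIR_DENSITY = 1.18            # kg/m^3 at ~25 C
AIR_SPECIFIC_HEAT = 1005.0    # J/(kg*K)

OIL_DENSITY = 850.0           # kg/m^3, typical light mineral oil
OIL_SPECIFIC_HEAT = 1900.0    # J/(kg*K)

air_per_m3 = AIR_DENSITY * AIR_SPECIFIC_HEAT   # ~1.2 kJ per m^3 per degree
oil_per_m3 = OIL_DENSITY * OIL_SPECIFIC_HEAT   # ~1.6 MJ per m^3 per degree

print(f"Oil soaks up roughly {oil_per_m3 / air_per_m3:,.0f}x more heat per unit volume than air")
# Prints a figure on the order of 1,300x, the same ballpark as the claim above.
```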

Still. . . e-e-ew.  Mineral oil?

“For safety, we keep paper towels nearby to wipe up any drips,” explained Alex McManis, an applications engineer with GRC, in an e-mail exchange with us.  “There are various products commonly used in industrial environments to place on the floor for absorbing oil and preventing slipping.”

Neither Grease Nor Lightning

All these years later, it’s still the kind of novel proposition you’d expect would generate plenty of anchor-banter at the tail end of local TV newscasts:  You take a 42U server rack, with the servers attached, you tip it sideways, and you submerge it in what cannot avoid looking like a convenience store ice cream freezer.  (The green LEDs and the GRC logo help, but only somewhat.)  In that containment unit, the racks are completely submerged in a non-flammable, dielectric mineral oil bath that GRC calls ElectroSafe.

There, the oil absorbs the heat from the fully operative servers and is pumped out through a completely passive dry cooling tower, where that heat is rejected.  Because the oil does not need to be cooled down as much as air to be effective, it can be warmer than the ambient air temperature (GRC suggests as high as 100°F / 38°C) and still fulfill its main task of absorbing heat.

Still, states GRC’s McManis, there’s no measurable degradation in the oil’s heat absorption capacity over time.

“The oil operates far below a temperature where degradation happens like in an engine,” he told us.  “We’ve tested the oil yearly and see zero changes; as far as we can tell, it lasts forever.  Forever is a long time, so we say a lifetime of 15 years.  The oil is continuously filtered, so the racks can be placed in rooms without air filtration, such as warehouses.”

The oil has the added benefit, McManis stated, of serving as a rust and corrosion preventative.  According to him, Intel detected no degradation to the working ability of components that it tested.

In a mid-April company blog post, Barcelona-based Port d’Informació Científica (PIC) claimed that, in the 18 months since its installation, the GRC system had reduced its total requirements by 50 percent.  PIC is the infrastructure provider for many of Europe’s scientific services, including Switzerland’s CERN, which operates the Large Hadron Collider.  PIC went on to report no failures in the cooling or server systems, and emphasized that all this was achieved without any use of water whatsoever.

As part of regular annual maintenance, McManis stated, PIC’s filters had to be replaced once, in a process that takes only a few minutes.

Water, Water Nowhere

Not all GRC installations are waterless; where the system is installed in existing data centers, the company states, it can make use of pre-existing water-based heat exchangers.  But recently, that’s changed.

“Our containerized data centers have evolved via co-designing with the U.S. Air Force,” wrote McManis.  “Besides testing component by component for reliability, we’ve switched to a water-free design using dry coolers.  There’s no intermediate water loop, so part count is reduced and water treatment is no longer needed.  The pumps and heat exchangers are underneath the walkway, freeing up space for more racks and electrical equipment, such as a flywheel.”

The elimination of water as a factor in data center equipment cooling may not be as obvious a breakthrough for a CIO or for DevOps personnel as for, say, a licensed HVAC technician.  Last July, we reported on a Berkeley National Lab study revealing that, while generating 1 kilowatt-hour of energy requires 7.6 liters of water on average, the average US data center consumes a further 1.8 liters of water on cooling for every kilowatt-hour it uses.  That means a data center uses roughly one-quarter again as much water to shed the heat from any given unit of electrical power as was used to generate that power in the first place.
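The arithmetic behind that comparison is simple enough to check directly; the liter figures below are the Berkeley Lab averages cited above:

```python
# Water footprint of a kilowatt-hour, per the Berkeley National Lab averages above.
WATER_L_PER_KWH_GENERATION = 7.6  # liters used upstream to generate 1 kWh
WATER_L_PER_KWH_COOLING = 1.8     # liters the average US data center uses to cool 1 kWh of load

ratio = WATER_L_PER_KWH_COOLING / WATER_L_PER_KWH_GENERATION
print(f"Cooling adds {ratio:.0%} on top of generation water use")   # ~24%, about one-quarter again
print(f"Total: {WATER_L_PER_KWH_GENERATION + WATER_L_PER_KWH_COOLING} L of water per kWh consumed")
```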

A recent Schneider Electric white paper [PDF] demonstrated why the drive to increase server density is so critically important.  The volume of air required to keep a system cool for every kilowatt it consumes (cubic feet per minute per kilowatt, or CFM/kW) drops steadily as server density increases, to the point that cooling each kilowatt of an ordinary 6 kW rack may cost half as much as it does for a 3 kW rack.
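For a sense of how much air that metric represents: the theoretical floor on airflow for a given heat load follows from the standard sensible-heat relationship for air, and real rooms have to deliver more than that to make up for bypass and recirculation, which is where the density savings come from.  A minimal sketch, assuming a typical 20°F temperature rise across the servers (an assumed figure, not Schneider’s):

```python
# Baseline airflow needed to carry away a heat load with air, using the standard
# sensible-heat rule of thumb:  Q (BTU/hr) ~= 1.08 * CFM * delta_T (deg F).
# The 20 F temperature rise across the servers is an assumed, typical value.

BTU_PER_KWH = 3412.0

def cfm_required(load_kw: float, delta_t_f: float = 20.0) -> float:
    """Cubic feet per minute of air needed to remove load_kw of heat at a given temperature rise."""
    return (load_kw * BTU_PER_KWH) / (1.08 * delta_t_f)

for rack_kw in (3, 6, 15, 50):
    print(f"{rack_kw:>2} kW rack: ~{cfm_required(rack_kw):,.0f} CFM")
# At roughly 158 CFM per kilowatt, a 50 kW rack would need close to 8,000 CFM of air,
# a volume of airflow that immersion in oil sidesteps entirely.
```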

The question of managing airflow and water consumption has become so critical that Hewlett Packard Enterprise has been experimenting with how it can optimize the distribution of software workloads among its servers.  A 2012 HP Labs project, conducted in conjunction with Caltech, showed how climate data and capacity planning forecasts, along with cooling coefficients derived from the chilled water systems, could be integrated with the servers’ workload management software to measurably reduce power consumption and the costs associated with it.

The HPE/Caltech team portrayed the goal of their research as seeking a “practical” approach, taking care to put the word in quotation marks.  One goal, the team wrote, is “to provide an integrated workload management system for data centers that takes advantage of the efficiency gains possible by shifting demand in a way that exploits time variations in electricity price, the availability of renewable energy, and the efficiency of cooling.”  Practicality, as this team perceives it, means accepting the existing boundaries, inarguable limitations, and everyday facts of data center architecture, and working within those boundaries.
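What might “shifting demand” look like in code?  The sketch below is purely illustrative, not the HPE/Caltech model: a toy greedy scheduler that pushes deferrable compute toward the hours where the effective cost of a kilowatt-hour, electricity price plus the cooling energy it drags along, is lowest.  All of the hourly prices and cooling overheads are made-up example numbers.

```python
# Toy illustration of shifting deferrable work toward cheap, efficiently cooled hours.
# Not the HPE/Caltech optimization model; prices and cooling overheads are invented examples.

from dataclasses import dataclass

@dataclass
class Hour:
    label: str
    price_per_kwh: float     # electricity price for that hour, $/kWh (example values)
    cooling_overhead: float  # extra kWh of cooling per kWh of IT load (roughly PUE - 1)

def schedule(deferrable_kwh: float, hours: list[Hour], cap_per_hour: float) -> dict[str, float]:
    """Greedily place deferrable IT energy into the hours with the lowest effective cost."""
    ranked = sorted(hours, key=lambda h: h.price_per_kwh * (1.0 + h.cooling_overhead))
    plan: dict[str, float] = {}
    remaining = deferrable_kwh
    for h in ranked:
        if remaining <= 0:
            break
        placed = min(cap_per_hour, remaining)
        plan[h.label] = placed
        remaining -= placed
    return plan

hours = [
    Hour("noon",     0.14, 0.60),  # hot afternoon: peak rates, chillers working hardest
    Hour("evening",  0.11, 0.45),
    Hour("midnight", 0.08, 0.25),  # cool night air, off-peak rates
    Hour("dawn",     0.09, 0.30),
]

print(schedule(deferrable_kwh=150.0, hours=hours, cap_per_hour=60.0))
# -> {'midnight': 60.0, 'dawn': 60.0, 'evening': 30.0}
```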

Among the inviolable facts of life in the everyday data center is airflow.

Typically, the refrigeration of air depends upon the refrigeration of water.  The GRC system literally flushes air out of the equation entirely.  In so doing, it can eliminate water as a factor in managing airflow, since there’s no airflow to be managed.  In the GRC company blog post, a representative of PIC’s IT team said it’s running its oil-submerged servers at nearly 50 kW per rack, without incident.

That’s an astonishing figure.

As Schneider’s report puts it, “As densities per rack increase from 15 kW and beyond, there are design complexities injected into the data center project that often outweigh the potential savings.”  The GRC system may not be a design complexity, though it certainly turns the whole design question on its side.  However, PIC is claiming the savings are measurable and worthwhile.

The Slickest Solution Out There

We wondered whether GRC can leverage the oil’s acute heat absorption capability as an indicator of relative server stress.

“We have enough sensors to measure the heat load going into the racks,” McManis responded.  “Current meters on the [power distribution units] are going to be more precise, but the heat capacity calculation is sufficient to check efficiency of pumps and heat exchangers.  We calculate if they are maintaining their rated capacity without requiring a 100 percent capacity test.  For example, we can remotely see percent efficiency loss of a heat exchanger to monitor scaling from less than perfect water treatment.  Without this monitoring, an inefficiency might not be found until the system couldn’t perform as requested.”
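The heat-capacity calculation he refers to amounts to a simple energy balance on the circulating oil.  A minimal sketch, assuming a measured oil flow rate and tank inlet and outlet temperatures, and using typical mineral-oil property values rather than ElectroSafe’s actual specifications:

```python
# Energy balance on the circulating oil: heat load = mass flow * specific heat * temperature rise.
# Density and specific heat are typical mineral-oil values (assumptions, not ElectroSafe specs),
# and the sensor readings below are made-up examples.

OIL_DENSITY = 850.0         # kg/m^3 (assumed)
OIL_SPECIFIC_HEAT = 1900.0  # J/(kg*K) (assumed)

def heat_load_kw(flow_lpm: float, t_in_c: float, t_out_c: float) -> float:
    """Heat picked up by the oil, given flow in liters/minute and inlet/outlet temperatures."""
    mass_flow_kg_s = (flow_lpm / 1000.0 / 60.0) * OIL_DENSITY
    return mass_flow_kg_s * OIL_SPECIFIC_HEAT * (t_out_c - t_in_c) / 1000.0

# Example readings for one rack's oil loop
measured_kw = heat_load_kw(flow_lpm=200.0, t_in_c=40.0, t_out_c=48.0)   # ~43 kW
pdu_reading_kw = 45.0        # what the rack's power meters report (assumed)
rated_capacity_kw = 50.0     # the loop's nameplate cooling capacity (assumed)

print(f"Heat carried away by the oil: {measured_kw:.1f} kW")
print(f"Agreement with PDU meters: {measured_kw / pdu_reading_kw:.0%}")
print(f"Loop running at {measured_kw / rated_capacity_kw:.0%} of rated capacity")
```

A sustained drift between the oil-side figure and the PDU meters, or a loop that can no longer reach its rated capacity, is the kind of creeping inefficiency McManis describes catching remotely.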

From a cost standpoint, the sacrifices a data center operator makes in practicality when implementing an oil immersion system such as GRC’s may seem within the margin of tolerability.  Our videos of GRC’s CarnotJet system from 2013 made it look like system operators could get away with wearing tight gloves, perhaps hairnets, and keeping those paper towel rolls handy.

Let’s face it:  It can’t be easy to get a grip on an oily server.  Since those videos were produced, McManis told us, there have indeed been refinements to this process.

“The easiest way to remove a server is using an overhead lift with a specially made lifting hook that attaches to the server ears,” he wrote.  “The server is then laid down on service rails which drain the server back into the tank while it’s being serviced.  The server can be dripping while parts are being replaced.”

Over the past few years, he said, GRC has made “ergonomic improvements, such as lowering the racks, auto-draining service platforms, and using an integrated overhead hoist for lifting the servers.”

When parts are being replaced and sent back to their manufacturers, is there a way to ship them out without the recipients ending up with saggy boxes?  “Drip dry is clean enough for RMA,” McManis responded.  “An aerosolized electronics cleaner is the fastest for small items.  Using an electronics cleaning solution in an ultrasonic cleaner will restore to clean as new.”

It may not be the most aesthetically pleasing solution to the data center cooling problem ever devised.  But GRC’s oil immersion method is far from the most nose-wrinkling system put forth to the public: in 2011, an HP Sustainable Data Center engineer suggested data centers be built next to dairy farms, where herds of 10,000 or more cows could produce what’s called biogas.

So GRC can happily declare, “We’re not biogas.”  Yet with electricity costs worldwide continuing to rise and the scarcity of water becoming a reality for everyone, we may soon be in the position where we replace our water consumption models with projections for rolls of paper towels.

About the Author

Scott Fulton III

Contributor

Scott M. Fulton, III is a 39-year veteran technology journalist, author, analyst, and content strategist, the latter of which means he thought almost too carefully about the order in which those roles should appear. Decisions like these, he’ll tell you, should be data-driven. His work has appeared in The New Stack since 2014, and in various receptacles and bins since the 1980s.
