Why Microsoft Thinks Underwater Data Centers May Cost Less
It’s not as wild an idea as it may appear. Project Natick lead Ben Cutler explained to us why he thinks so.
June 13, 2018
There is more to Microsoft’s sinking a data center the size of a shipping container off Scotland’s Orkney Islands than cheap power, free cooling, and being close to where half the world’s population lives. The big hope is that a truly lights-out data center could end up being cheaper to run and failing less often.
Building a submarine for servers might sound expensive, but if you think about the total cost of ownership, much of it is upfront, Ben Cutler, who manages the research project at Microsoft, told Data Center Knowledge in an interview. “We think the structure is potentially simpler and more uniform than we have for data centers today,” he said. It’s all still hypothetical, but “the expectation is there actually may be a cost advantage to this.”
Putting computers underwater is nothing new, and not only for Microsoft. Marine scientists have been doing it for a long time, but they haven’t done it at scale. The marine industry has plenty of experience building large structures, cooling ship engines, and dealing with the barnacles that accumulate on underwater surfaces. “What we’re doing is relatively modest compared to what people have been doing in the ocean for decades,” Cutler said. “We’re at relatively shallow depths. These [data centers] are relatively small things.”
The Supply Chain Is Largely in Place
Microsoft can leverage its hardware expertise and supply chain to fill underwater data centers with commodity servers. In fact, the 12 racks, with their 864 servers and FPGA boards, in the “Northern Isles data center” came out of a Microsoft Azure data center on dry land. The cylindrical enclosure was made by an established ocean engineering company, Naval Group, which can manufacture and ship the enclosures at volume if the idea becomes popular.
[Image: Racks inside Microsoft’s Natick data center deployed off Scotland]
Timeframes and economics are very different from building data centers on land, Cutler said. “Instead of a construction project, it's a manufactured item; it's manufactured in a factory just like the computers we put inside it, and now we use the standard logistical supply chain to ship those anywhere.” Making it the size of a shipping container was deliberate: the data center was loaded onto a truck in France, which crossed the English Channel on a ferry, drove across the UK, and took another ferry to the Orkney Islands, where it was loaded onto a barge for deployment.
Being able to deploy faster doesn’t only mean expanding faster, it also means not spending money as far in advance. “It takes us in some cases 18 months or two years to build new data centers,” Cutler said. “Imagine if instead I just have these as standard stock, where I can rapidly get them anywhere in 90 days. Well, now my cost of capital is very different, because I don't have to be building things as far in advance as I do now. As long as we're in this mode where we have exponential growth of web services and consequently data centers, that's enormous leverage.”
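For a rough sense of that leverage, consider the cost of having capital committed while a facility is built but not yet serving customers. The sketch below is a back-of-the-envelope illustration only: the $100 million outlay and the 8 percent annual cost of capital are assumed figures, while the 18-month, two-year, and 90-day lead times come from Cutler’s comments above.

```python
# Back-of-the-envelope only: the capex and the 8% annual cost of capital are
# assumed figures; the lead times come from Cutler's comments above.

def carrying_cost(capex: float, lead_time_months: float, annual_rate: float = 0.08) -> float:
    """Cost of capital tied up for `lead_time_months` before the data center starts serving."""
    return capex * annual_rate * (lead_time_months / 12)

capex = 100_000_000  # assumed $100M build-out
for label, months in [("conventional build, 18 months", 18),
                      ("conventional build, 24 months", 24),
                      ("manufactured module, 90 days", 3)]:
    print(f"{label}: ~${carrying_cost(capex, months):,.0f} in carrying cost")
```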
Lights-Out Means Warranty Savings
Sending the data center down for five or ten years at a time before replacing all the servers with new ones isn’t much different from the lifecycle of many cloud servers, which might get repurposed for different services as newer hardware comes in. And making that five-year commitment translates to even more savings upfront.
“Warranty costs can be considerable,” Cutler noted. “In our case, maybe we'll send this back to the vendor, but not for a long time.” Those warranty costs are high because each of the component suppliers has to maintain an inventory of the components sold for the length of the product’s lifetime. Intel may put out a new chip every year, but it has to maintain an inventory of the ones it sells for five years, he explained. “In our case, that all goes away. We're going to drop it down there, and we're done. We're never going to come back to the vendor and say replace this disk drive, because by the time we might want to do that, it's five years later, and there's much better stuff out.”
Scale Large and Small Possible
While the Orkney data center is a single module, Cutler envisions connecting multiple modules together for scale, perhaps with a central node for connectivity and power distribution, treating them like rows of racks in a regular data center. Inter-container latency would be similar to latency in a large data center network. Some of the larger Azure data centers are a mile from end to end.
[Image: Microsoft’s Natick data center on its base in Scotland]
Each top-of-rack switch in a Microsoft data center connects to spine switches, which “cross-talk” across all the racks in the row. The next level up, the spine switches cross-connect with each other. “It can be considerable distance between those things. If we have many of these in the water, it wouldn’t be all that different.”
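A toy model makes the comparison concrete. The sketch below reduces the hierarchy Cutler describes to switch-hop counts: racks talk through their top-of-rack switch, then a row-level spine, then the spine-to-spine cross-connect. The coordinates and counts are placeholders rather than a description of any real Azure or Natick deployment; the point is only that the hop count, and hence the latency profile, looks the same whether the “rows” are aisles in one building or separate underwater modules.

```python
# Toy model of the switch hierarchy described above: top-of-rack (ToR)
# switches feed row-level spines, and spines cross-connect across rows.
# Rack coordinates are arbitrary placeholders for illustration.

def switch_hops(rack_a, rack_b):
    """Racks are (row, rack) pairs; returns the number of switches traversed."""
    row_a, _ = rack_a
    row_b, _ = rack_b
    if rack_a == rack_b:
        return 1   # traffic stays on the rack's own ToR switch
    if row_a == row_b:
        return 3   # ToR -> row spine -> ToR
    return 4       # ToR -> spine -> spine -> ToR, whatever the physical distance

# Whether rows are aisles on land or modules in the water, the hop count is identical.
print(switch_hops((0, 1), (0, 7)))   # same row: 3 switch hops
print(switch_hops((0, 1), (5, 2)))   # different rows: 4 switch hops
```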
In an edge deployment, a single module could be ideal for running Azure Stack to process seismic data for exploratory oil and gas rigs, which are increasingly deployed on the seabed.
The Green-Cloud Potential
Once an underwater data center is up and running, power and cooling costs are low. The first, much smaller, version of Project Natick (Microsoft’s codename for the research effort) had a manifold with different valves to let the team experiment with different cooling strategies. The PUE was 1.07 (compared to 1.125 for Microsoft’s latest-generation data centers). Without the manifold, Cutler estimated, it would have been as low as 1.03.
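PUE (power usage effectiveness) is total facility power divided by the power delivered to the IT equipment, so those figures translate directly into overhead. The short calculation below uses the 1.07 and 1.125 values cited above; the 240 kW IT load is an assumed figure for illustration only.

```python
# PUE = total facility power / IT equipment power. The PUE values are from
# the article; the 240 kW IT load is an assumed example, not Natick's spec.

def overhead_kw(it_load_kw: float, pue: float) -> float:
    """Power spent on everything other than the IT gear itself."""
    return it_load_kw * (pue - 1.0)

it_load_kw = 240  # assumed IT load
for label, pue in [("Natick prototype", 1.07),
                   ("latest-generation land data center", 1.125)]:
    print(f"{label} (PUE {pue}): ~{overhead_kw(it_load_kw, pue):.0f} kW of overhead")
```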
Also, this time there’s no external heat exchanger. “We’re pulling raw sea water in through the heat exchangers in the back of the rack and back out again,” which he noted was the same technique used to cool ships and submarines. The speed of the water flow should discourage barnacle growth. This cooling system could cope with very high power densities, such as the ones required by GPU-packed servers used for heavy-duty high-performance computing and AI workloads.
The Northern Isles data center taps into the tidal generators of the European Marine Energy Centre. Microsoft envisions ocean data centers colocating with these offshore energy sources. But future versions could also have their own power generation.
“Tide is a reliable, predictable sort of a thing; we know when it’s going to happen,” Cutler said. “Imagine we have tidal energy, we have battery storage, so you can get a smooth roll across the full 24-hour cycle and the whole lunar cycle.”
Instead of backup generators and rooms full of batteries, Microsoft could overprovision the tidal generation capacity to ensure reliability (13 tidal turbines instead of 10, for example). “You end up with a simpler system that’s purely renewable and has the smallest footprint possible.”
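The overprovisioning logic can be sketched with simple binomial arithmetic. In the example below, the 10-of-13 figure comes from Cutler’s example; the 95 percent per-turbine availability is an assumed number, used only to show the shape of the trade-off.

```python
# Rough illustration of why overprovisioning turbines can stand in for backup
# generators. Per-turbine availability of 0.95 is assumed; 10-of-13 is from
# Cutler's example above. Turbines are treated as independent for simplicity.
from math import comb

def prob_at_least(needed: int, total: int, availability: float) -> float:
    """P(at least `needed` of `total` independent turbines are running)."""
    return sum(comb(total, k) * availability**k * (1 - availability)**(total - k)
               for k in range(needed, total + 1))

print(f"10 of 10 turbines up: {prob_at_least(10, 10, 0.95):.3f}")  # ~0.599
print(f"10 of 13 turbines up: {prob_at_least(10, 13, 0.95):.3f}")  # ~0.997
```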
No Added Strain on Local Resources
The relatively small environmental impact of using the ocean for both power and cooling makes it easier to deploy data centers in more places.
Cooling in modern data centers relies more on water than it does on power. “There used to be unbelievable amounts of power used for cooling, and over time we’ve been able to drive that down rather dramatically,” Cutler explained. “But water is used to enhance that cooling effect.”
As a result, a data center usually has a line into the city water supply, which is fine in much of the developed world but less so elsewhere. Cutler and his team like the idea of bringing a data center to a place in the developing world without adding any strain on the local water or power supply. “There’s no pressure on the electric grid, no pressure on the water supply, but we bring the cloud.”
Humans and the Air They Breathe
The Project Natick team designed the underwater data center to operate without physical intervention for up to five years. The benefits of this approach include lower operating costs, since it doesn’t need to be staffed, and lower failure rates. “There’s a lot of data showing that when people fix things they're also likely to cause some other problem,” Cutler said.
“A lot of the cost here is putting it on the seabed and then taking it off,” Cutler said. “It’s not the sort of thing where once a week we haul it up to the surface, we crack it open and replace things. It’s a fail-in-place model.”
Another reason Microsoft expects to see low failure rates is the cold, dry nitrogen atmosphere inside the data center. The cold-aisle temperature currently sits at 12°C and is expected to stay consistent enough to eliminate the stress that temperature swings place on components. “With the nitrogen atmosphere, the lack of oxygen, and the removal of some of the moisture is to get us to a better place with corrosion, so the problems with connectors and the like we think should be less.”
The Project Natick team has built a land version of the capsule, both as a control measure and a way to see whether the research could uncover new options for traditional data centers. “This whole notion of lights-out, if I really can't touch the thing, does that make any difference in terms of failures?”
If protection from corrosion by way of filling the enclosure with nitrogen turns out to be a big advantage, for example, it could translate to land-based data centers. “Air is nearly 70 percent nitrogen, so making nitrogen and filling a space is pretty inexpensive,” Cutler said. “So if that turned out to be a good thing to do, that's really easy to replicate on land.”