GIGABYTE Server Shows Two-Phase Immersion Liquid Cooling on a 2U GPU G250-S88 using 3M Novec
by Ian Cutress on January 17, 2017 10:30 AM EST - Posted in:
- Gigabyte
- Enterprise
- Trade Shows
- cooling
- Servers
- CES 2017
- 3M
In the land of immersed systems, there are many ways of doing things. A few intrepid users have gone with oil, still using a CPU cooler but relying on the liquid cycling through the system to remove heat energy. Going back over a decade and a half, I recall a system whereby a 35W processor was immersed, without a heatsink, into a bath of a 3M hydrocarbon with a modest boiling point of around 45ºC, which was then combined in a sealed system with an external thermoelectric cooler to drive the recycling. The demonstration by GIGABYTE at CES this year with a fully-submerged system is more the latter than the former.
Obviously you cannot use water (unless it is exceptionally pure/distilled) for conductivity reasons, so an inert halogenated hydrocarbon serves the purpose here. The concept for this design is a two-phase change from liquid to vapor and back, using no pumps but relying on the fact that the vapor will condense, fall back into the liquid, and sink, creating automatic circulation.
As I mentioned before, previously I had only seen this on a small, low-power system, but GIGABYTE and 3M had submerged a full 8-GPU, dual-CPU system with 24 memory modules and nothing more than large copper heatsinks on the CPUs/GPUs, and had even removed the power delivery heatsinks.
To cool the vapor as it rises through the system, a cold radiator is placed inside the sealed tank. Well, I say sealed, but during the demo it was being opened and the demonstrator was clearly putting his hand inside. There also seemed to be a system in place to add or remove fluid through a pump.
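As a rough illustration of what that radiator has to do (our own sketch, not a figure from the demo): at steady state it must condense vapor at the same rate the hardware boils it off, i.e. reject the entire heat load of the tank. Assuming the coil is fed with water and allowed a 10ºC temperature rise, the required flow is modest; the ~2.3 kW heat load for a dual-CPU, 8-GPU node is also an assumption.

```python
# Rough sizing of the in-tank condenser coil: m_dot = Q / (c_p * dT).
# The heat load and the allowable water temperature rise are illustrative assumptions.

heat_load_w = 2300.0   # assumed total dissipation of the submerged node (W)
cp_water = 4184.0      # specific heat capacity of water (J/kg/K)
delta_t_k = 10.0       # assumed water temperature rise across the coil (K)

water_kg_per_s = heat_load_w / (cp_water * delta_t_k)
litres_per_min = water_kg_per_s * 60  # 1 kg of water is roughly 1 litre

print(f"~{litres_per_min:.1f} L/min of water rejects {heat_load_w/1000:.1f} kW at a {delta_t_k:.0f} K rise")
```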
So the point in all this is more efficient cooling: no need for massive air conditioning units in a data center, and no need to pump chilled water into water blocks. I’m surprised that this system was suitable for all that hardware, but it does leave one issue on the table: getting access to the hardware in order to replace it. Moving from air to liquid cooling in a data center always has this issue.
3M is keeping the exact fluid under wraps: the Novec line of liquids spans a full array of halogenated hydrocarbon compounds for different uses, and the specific variant in use here was not disclosed. However, a quick search turns up a likely candidate in Novec 72DA.
Novec 72DA is a solution of 70% trans-1,2-dichloroethylene, 4-16% ethyl nonafluorobutyl ether, 4-6% ethyl nonafluoroisobutyl ether, and trace amounts of other similar methyl variants. The liquid has a boiling point of 45ºC and a very low viscosity (0.4 cP, compared to 0.89 cP for water), but also a low specific heat capacity (1.33 J/g/K, compared to 4.184 J/g/K for water). Typically water cooling (with blocks) is preferred because of that high heat capacity, but the arithmetic at 1.33 J/g/K is interesting: take a CPU that uses 140W, and in 60 seconds it will convert 8.4 kJ of electrical energy into heat. That would raise one kilogram of the liquid (about 0.8 liters, given a density of 1.257 kg per liter) by roughly 6.3ºC, so from a slightly chilled start of around 20ºC it would take about four minutes of sustained load to bring that kilogram to its boiling point. If we add in the latent heat of vaporization, the energy it takes to transform the liquid at its boiling point into vapor, then we need another ~350 kJ/kg, or a further 41.67 minutes.
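As a sanity check, here is the same arithmetic laid out in a short script. It uses only the figures quoted above (140 W, 1.33 J/g/K, 1.257 kg/L, a 45ºC boiling point, ~350 kJ/kg latent heat); the ~20ºC starting temperature is our assumption for the "slightly chilled" start.

```python
# Back-of-the-envelope check of the numbers above.
# All fluid properties are the figures quoted in the text; the 20 C start is an assumption.

power_w = 140.0           # CPU power draw (J/s)
cp_j_per_kg_k = 1330.0    # specific heat capacity, 1.33 J/g/K
density_kg_per_l = 1.257  # fluid density
boil_c = 45.0             # boiling point
start_c = 20.0            # assumed "slightly chilled" starting temperature
latent_j_per_kg = 350e3   # latent heat of vaporization, ~350 kJ/kg
mass_kg = 1.0             # amount of fluid considered

energy_per_min_j = power_w * 60                                     # 8.4 kJ per minute
volume_l = mass_kg / density_kg_per_l                               # ~0.8 litres
rise_per_min_k = energy_per_min_j / (mass_kg * cp_j_per_kg_k)       # ~6.3 K per minute
minutes_to_boil = (boil_c - start_c) / rise_per_min_k               # ~4 minutes
minutes_to_vaporize = mass_kg * latent_j_per_kg / energy_per_min_j  # ~41.7 minutes

print(f"{energy_per_min_j / 1e3:.1f} kJ/min into {volume_l:.2f} L of fluid")
print(f"Temperature rise: {rise_per_min_k:.1f} K/min -> ~{minutes_to_boil:.1f} min to reach boiling")
print(f"Fully vaporizing 1 kg takes a further ~{minutes_to_vaporize:.1f} min")
```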
Now obviously such a system does not work on whole kilograms of chemical at once: the heat goes into much smaller amounts of liquid directly at the hot surfaces, boiling them off continuously and causing the effect we see in the photos.
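Put in continuous terms, at steady state every watt dissipated is carried away as latent heat, so the vapor generation rate is simply power divided by the heat of vaporization. The sketch below uses the ~350 kJ/kg figure from above; the per-component powers are illustrative assumptions rather than numbers from GIGABYTE or 3M.

```python
# Steady-state vapor generation: m_dot = Q / h_fg.
# Component powers are illustrative assumptions, not GIGABYTE/3M figures.

latent_j_per_kg = 350e3              # ~350 kJ/kg, as above
cpu_w, gpu_w = 140.0, 250.0          # assumed per-component dissipation
heat_load_w = 2 * cpu_w + 8 * gpu_w  # dual-CPU, 8-GPU node: ~2.3 kW

vapor_kg_per_s = heat_load_w / latent_j_per_kg
print(f"Heat load: {heat_load_w:.0f} W")
print(f"Vapor generated: {vapor_kg_per_s * 1000:.1f} g/s "
      f"({vapor_kg_per_s * 3600:.1f} kg/h) for the condenser to return as liquid")
```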
28 Comments
JoeyJoJo123 - Tuesday, January 17, 2017
Looks neat, seems impractical for actual data centers.
Billy Tallis - Tuesday, January 17, 2017
It is fairly impractical as demoed here, because the 3M Novec fluid is way too expensive. But that's why 3M partnered with Gigabyte for this: they can build whole servers with non-standard form factors optimized to maximize density and minimize the empty volume that needs to be filled with the coolant.

They make a fairly convincing claim that immersion cooling leads to much lower component failure rates due to everything in the system being kept well below unsafe temperatures and not experiencing large temperature swings. That should at least partially offset the maintainability challenges of immersion cooling.
I would like to see a rack-scale demo of this, both to see how they would remove the heat generated by so many servers when the vapor is only ~45-50ºC, and how they handle power delivery to such a dense cluster.
wumpus - Tuesday, January 17, 2017
I'd assume you would have a water/"3M fluid" heat exchanger. I'd also wonder how effective it would be to have plastic inserts that would reduce the volume of expensive fluid needed and maintain flow. The other question would be what fluid would be flowing through the heat exchangers. I'd guess something like highly diluted anti-freeze (to prevent fouling) and then further heat exchanged with water pulled from a river/lake.

Personally, I'd rather skip all these steps and just use "all in one" units (with lengthened tubes and water/water heat exchangers instead of water/air), but there's a lot of heat that you will miss. Containing/making irrelevant leaks (such as this fluid) will be the key.
petuma - Friday, February 3, 2017
You can see the technology at 40 MW facility scale here: https://www.youtube.com/watch?v=t8dj1LYw50g
29a - Monday, June 18, 2018
The video is blocked in the US; you'll need to use a VPN to watch it.
Samus - Tuesday, January 17, 2017
Potentially too unreliable/unproven for a data center application, as well. A supercomputer, or something else experimental, perhaps...
LordOfTheBoired - Monday, January 30, 2017
Hey, if immersion cooling was good enough for Cray...

In fairness, Cray didn't use boiling coolant. And that boiling is actually ingenious, as it ensures all the hardware is maintained at or below the boiling point of the bath, and that coolant circulates across all hot components pumplessly.
koaschten - Wednesday, January 18, 2017
Well, not if you start thinking 90° turned, like hanging servers vertically into an aquarium: http://www.grcooling.com/carnotjet/
Guspaz - Tuesday, January 17, 2017
IIRC, OVH has built some of the largest datacenters in the world, and done so without using any air conditioning. They have datacenter-wide watercooling loops that are cooled with big fans.
Ej24 - Tuesday, January 17, 2017
I think that's probably more like most industrial-scale air conditioners, which use a giant evaporative chiller tower to cool water in a loop that cycles through radiators through which air is pushed. Most large businesses use them, as they're much more efficient at large scale than heat pumps using refrigerant. For example, I work at a hospital that has four large chiller towers to provide chilled water for air conditioning. I imagine a datacenter is the same.