
Microsoft announced yesterday that it has been testing a new chip cooling technology based on microfluidics, which it says removes heat up to three times more effectively than cold plates placed on top of chips, today's liquid cooling standard in data centers. The approach channels liquid coolant directly into the processor through microscopic, vein-like channels etched into the silicon, and dynamically adapts to each chip's heat patterns.
According to Microsoft, the method reduces peak GPU temperatures by as much as 65%, a breakthrough that could significantly lower the cooling and electricity requirements of data centers—where thermal management alone accounts for roughly 40% of total energy use. It could also enable denser, higher-performance server designs, delivering the same workloads with fewer machines.
Such advances could disrupt cooling providers (Vertiv, AVC, Delta Electronics…) and ripple through the broader power supply chain; some power providers for data centers (Vistra, Bloom…) were already feeling the heat yesterday. That said, we would highlight two important caveats.
First, Microsoft's chip cooling technology is a prototype, and testing has been limited to a single Intel Xeon chip, with no details on metrics that are critical for data centers: system reliability (risk of leakage), integration within servers/racks, and cost. We would also note that Microsoft compared microfluidics to cold plates, the current standard, but did not benchmark it against upcoming immersion cooling technology, which is much more efficient than cold plates.
For adoption, Microsoft will need support from major AI chipmakers (Nvidia, AMD, Broadcom, etc.), and even in a best-case scenario we believe widespread adoption is unlikely before the beginning of the next decade. Assuming major chipmakers endorse microfluidics within the next two years, integrating it into both chip and server designs would take several more years, making it unlikely to become a mainstream data center cooling technology before 2030.
Second, we believe the impact on data center power needs and the electrical supply chain would be limited. While cooling efficiency would improve, the power consumption of each new chip generation is currently growing ~25% annually, and Microsoft acknowledges that cooling breakthroughs will likely enable even more power-dense designs, suggesting that the power draw of a single chip will grow at an even faster pace going forward. Combined with announcements of massive AI infrastructure expansions (e.g., Meta's titan clusters, the millions of Nvidia GPUs in the OpenAI-Nvidia partnership), data center power consumption is unlikely to slow down soon.
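To illustrate why a one-off cooling gain is quickly absorbed by chip power growth, the rough sketch below projects total facility power over a few years. The ~40% cooling share of data center energy and the ~25% annual growth in chip power come from the points above; the assumption that microfluidics halves cooling energy is purely hypothetical, since Microsoft has published temperature and heat-removal figures but no energy savings.

```python
# Rough, illustrative projection only. The 40% cooling share and ~25% annual
# chip power growth are taken from the note above; the assumption that
# microfluidics halves cooling energy is hypothetical (Microsoft disclosed
# temperature/heat-removal figures, not energy savings).

IT_POWER = 60.0          # baseline IT (chip) load, arbitrary units
COOLING_POWER = 40.0     # baseline cooling load -> ~40% of total energy use
CHIP_GROWTH = 0.25       # ~25% annual growth in chip power consumption
COOLING_CUT = 0.5        # hypothetical: microfluidics halves cooling energy

# cooling energy needed per unit of IT load after the hypothetical efficiency gain
cooling_per_it = (COOLING_POWER / IT_POWER) * (1 - COOLING_CUT)

for year in range(4):
    it = IT_POWER * (1 + CHIP_GROWTH) ** year
    cooling = it * cooling_per_it
    total = it + cooling
    print(f"year {year}: IT={it:6.1f}  cooling={cooling:5.1f}  total={total:6.1f}")

# Output (baseline total before microfluidics = 100):
# year 0: IT=  60.0  cooling= 20.0  total=  80.0
# year 1: IT=  75.0  cooling= 25.0  total= 100.0
# year 2: IT=  93.8  cooling= 31.2  total= 125.0
# year 3: IT= 117.2  cooling= 39.1  total= 156.2
```

Even under this favorable assumption, the one-off cooling saving is offset by roughly a single year of chip power growth, which is why we see limited relief for data center power demand and the electrical supply chain.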