Google is planning for datacenter racks supporting 1 MW of IT hardware loads, plus the cooling infrastructure to cope, as AI processing continues to grow ever more energy intensive.

At the Open Compute Project (OCP) Summit in Dublin, Google discussed changes in server room tech that it touts as critical to AI’s continued ability to scale up, presumably to deliver ever larger and more complex models.
While the power consumption of a typical datacenter rack might fall somewhere between 5 kW and about 30 kW, the explosion in the use of servers stuffed with power-hungry GPU accelerators has seen this figure rise to 100 kW or more, with Nvidia’s DGX GB200 NVL72 system pushing 120 kW.
Now the cloud-and-search giant says that switching from the 48 volt direct current (VDC) power distribution previously championed by OCP to a +/-400 VDC system will allow those server rooms to support up to 1 MW per rack.
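To put the voltage jump in perspective: delivered power is voltage times current, so pushing 1 MW at 48 VDC would mean busbars carrying north of 20,000 A, while a higher-voltage bus cuts that by more than an order of magnitude. The back-of-the-envelope Python sketch below illustrates the arithmetic only; it assumes the +/-400 VDC scheme behaves as an 800 V differential bus and ignores conversion losses, and none of the figures come from Google.

    # Back-of-the-envelope only: current needed to deliver a given rack power
    # at different distribution voltages, using I = P / V and ignoring losses.
    # Treating +/-400 VDC as an 800 V differential bus is an assumption for
    # illustration, not a detail from Google's spec.

    def amps_required(power_watts: float, volts: float) -> float:
        """Current in amps needed to deliver power_watts at volts (lossless)."""
        return power_watts / volts

    RACK_POWER_W = 1_000_000  # 1 MW target rack load

    for label, volts in [("48 VDC", 48), ("+/-400 VDC (800 V bus)", 800)]:
        print(f"{label}: {amps_required(RACK_POWER_W, volts):,.0f} A")

    # Output:
    # 48 VDC: 20,833 A
    # +/-400 VDC (800 V bus): 1,250 A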
“This is about much more than simply increasing power delivery capacity – selecting 400 VDC as the nominal voltage allows us to leverage the supply chain established by electric vehicles (EVs), for greater economies of scale, more efficient manufacturing, and improved quality and scale,” Google says in a blog post authored by Principal Engineers Madhusudan Iyengar and Amber Huffman.
Also part of this vision is a disaggregation of the power components from the IT rack into a separate rack unit in the same row of a data hall. Google says this is a project known as Mt Diablo, which it is working on with rival hyperscalers Meta and Microsoft, promising that a 0.5 draft release of the specifications will be available for industry perusal in May.
In practice, this will result in what the Chocolate Factory dubs a “sidecar” dedicated AC-to-DC power rack that feeds power to the other racks, the idea being to free up more space within each unit for servers stuffed with GPUs.
“Longer term, we are exploring directly distributing higher-voltage DC power within the datacenter and to the rack, for even greater power density and efficiency,” the Google authors claim.
The Mountain View biz also says it is developing a fifth generation of its cooling tech, earlier versions of which were deployed as part of the cloud infrastructure running its Tensor Processing Units (TPUs) to accelerate machine learning workloads.
Its implementation is based on in-row coolant distribution units (CDUs), backed by uninterruptible power supplies (UPS) for high availability.
The CDU supplies the server racks and is in turn connected to the data hall’s wider distribution loop. Coolant is ultimately delivered via flexible hoses to cold plates directly attached to the high-power chips – a system familiar to many high-performance computing (HPC) shops.
Google says its CDU architecture, named Project Deschutes, features redundant pump and heat exchanger units for greater reliability, and that this has allowed it to achieve a CDU availability of 99.999 percent since 2020.
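For context, 99.999 percent availability works out to only around five minutes of downtime a year. The quick sketch below shows the generic five-nines arithmetic; it is an illustration of what that figure implies, not a number taken from Google's own accounting.

    # Generic five-nines arithmetic (illustration only): allowed downtime per
    # year at a given availability target.
    MINUTES_PER_YEAR = 365.25 * 24 * 60
    availability = 0.99999
    downtime_minutes = MINUTES_PER_YEAR * (1 - availability)
    print(f"{downtime_minutes:.1f} minutes of downtime per year")  # ~5.3 minutes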
The new kit is currently still in development, but the cloud-and-search firm says it will contribute the design to the OCP later this year, in a bid to help other companies adopt liquid cooling at scale. ®