When it comes to debunking nonsense numbers, you can’t do better than the BBC’s More or Less radio programme and podcast.
The team, which includes the economist and writer Tim Harford, regularly takes politicians and companies to task for crimes against statistics and for misrepresenting data.
So it’s not surprising then that the More or Less team recently turned their analytical guns on the misinformation that is flowing (pun intended) around data center water consumption. (Note: the incidents highlighted in the episode actually date back to late 2025 or early 2026. The issue is obviously still very much alive and relevant).
The episode focused on a book called Empire of AI, which stated that AI demand could drive consumption of fresh water up to between 1.1 and 1.7 trillion gallons (roughly 4.2 to 6.4 trillion litres) per year by 2027. This amount is apparently equivalent to half the water consumed in the UK.
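The gallons-to-litres conversion behind figures like these is easy to sanity-check. A minimal sketch, assuming US gallons (1 US gallon ≈ 3.785 litres); the range used here is the 1.1 to 1.7 trillion gallon figure discussed below:

```python
US_GALLON_IN_LITRES = 3.785  # approximate conversion factor for US gallons

def gallons_to_litres(gallons: float) -> float:
    """Convert a volume in US gallons to litres."""
    return gallons * US_GALLON_IN_LITRES

# 1.1 to 1.7 trillion gallons works out to roughly 4.2 to 6.4 trillion litres.
low = gallons_to_litres(1.1e12)
high = gallons_to_litres(1.7e12)
print(f"{low:.2e} to {high:.2e} litres")
```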
Consumption vs withdrawal
However, as More or Less explained, when it comes to complex topics such as AI and water usage, definitions matter. It seems that Empire of AI conflated water consumption with water withdrawal. The 1.1 to 1.7 trillion gallon figure was sourced from a University of California, Riverside study that actually measured water withdrawal rather than water consumption. There is an important difference:
+ Water withdrawal refers to the amount of water taken out of a system or source. Importantly, some of that water will be consumed, but some of it will be returned. (Measures demand.)
+ Water consumption is a subset of withdrawal and refers to water that is taken out of the water system and, importantly, not returned. (Measures loss.)
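The distinction above can be sketched in a few lines of arithmetic. This is a minimal illustration of the accounting, with made-up figures rather than numbers from the episode:

```python
def water_consumed(withdrawn_litres: float, returned_litres: float) -> float:
    """Consumption is the subset of withdrawal that is never returned."""
    if returned_litres > withdrawn_litres:
        raise ValueError("cannot return more water than was withdrawn")
    return withdrawn_litres - returned_litres

# A facility that withdraws 100 litres and returns 80 to the source has
# consumed only 20 litres, even though its demand on the system
# (withdrawal) was the full 100 litres.
print(water_consumed(100.0, 80.0))  # -> 20.0
```

Reporting withdrawal as if it were consumption, as Empire of AI appears to have done, therefore overstates the loss figure.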
The discrepancy was highlighted by independent researcher Andy Masley, who published a detailed analysis of the claims and potential discrepancies on his Substack, which is well worth reading for more depth on this topic. The author of Empire of AI acknowledged the mistake and issued a correction, which also contained links to some other important sources on the topic.
However, according to the More or Less team, the story does not end there. The University of California, Riverside paper has some potential issues of its own, they claim. The water usage numbers in the paper are apparently based on an estimate for global electricity use in 2027. However, it seems the author of those numbers, Alex de Vries-Gao, based his analysis only on servers that could be deployed in 2027, excluding all existing infrastructure.
So now you have an overestimate intertwined with a potential underestimate!
Further analysis by de Vries-Gao to estimate total AI server capacity, and eventual water consumption, postulated that AI systems at the end of 2025 were consuming water at a rate of up to 750 billion litres, according to More or Less.
Is that a big number?
As usual, the programme always likes to ask the question: “Is that a big number?”
The answer seems to be yes – in fact, it could exceed total global consumption of bottled water, which is more than 450 billion litres, according to de Vries-Gao. However, another important caveat is that only about 10 percent of that consumption happens on-site at data centres; the rest happens mostly at power stations.
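The comparison and the on-site split can be laid out as simple arithmetic. A sketch using the figures quoted in the episode (the 10 percent on-site share is the approximate figure given above):

```python
# Figures quoted in the More or Less episode, both in litres.
AI_WATER_CONSUMPTION = 750e9   # de Vries-Gao estimate, AI systems at end of 2025
GLOBAL_BOTTLED_WATER = 450e9   # total global bottled-water consumption

ON_SITE_SHARE = 0.10           # roughly 10% of AI water use happens at data centres

on_site = AI_WATER_CONSUMPTION * ON_SITE_SHARE
at_power_stations = AI_WATER_CONSUMPTION - on_site

# The AI figure exceeds global bottled-water consumption...
print(AI_WATER_CONSUMPTION > GLOBAL_BOTTLED_WATER)  # -> True
# ...but most of it is attributable to power generation, not the sites themselves.
print(f"on-site: {on_site:.1e} litres, power stations: {at_power_stations:.3e} litres")
```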
The original consumption vs withdrawal issue also matters here. While the two measures differ, withdrawal is arguably the more important consideration, since it captures the demand placed on local water systems.
Power availability: the ultimate bottleneck
The sign-off to the episode, unsurprisingly, focused on power availability rather than water consumption or withdrawal. Accurate estimates of future water use depend on accurately projecting installed AI infrastructure capacity, and growing constraints on power availability are making those projections increasingly difficult.
It’s also important to remember that the More or Less programme was created for a generalist audience, and the issue of AI and data center water use has a huge number of other variables, including direct liquid vs air-based cooling, silicon diversification (GPUs vs other chip types, and the heat-rejection implications), and training vs inference (centralised vs distributed AI).
What is clear is that we have entered the third wave of sustainable IT, and this time it is very different. The previous two cycles – in the early 2000s and 2010s – were largely driven by the industry responding to governmental scrutiny. This time the pressure is much more grass-roots and bottom-up, with community groups often leading the charge. The industry is responding. However, it remains to be seen whether the response will match the speed and scale of the actual data center build-outs.
