Companies Update Numbers on AI’s Environmental Impact

Ethan Mollick, Associate Professor at The Wharton School and author of Co-Intelligence: Living and Working with AI (Portfolio, April 2024), rounded up reporting in September on the energy and water used to operate large language models like ChatGPT.
Multiple sources of data have surfaced on environmental impacts per generative AI prompt.
Google reported 0.00024 kilowatt-hours of energy and 0.26 milliliters of water per Gemini prompt.
The figures reported for ChatGPT were 0.0003 kilowatt-hours and 0.38 milliliters of water per prompt.
“That is the same energy as one Google search in 2008 and the equivalent of 6 drops of water,” Mollick said. He added that Google reported its energy use per prompt fell by a factor of 33 over the last year.
He cautioned that while those numbers match some independent direct measurements, they are smaller than what French artificial intelligence startup Mistral AI SAS reported for its older model: 50 milliliters of water and 1.14 grams of carbon emitted per average query.
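As a rough sanity check, the reported per-prompt figures can be put on a common scale. This is a minimal sketch; the 0.05-milliliter volume assumed for a single drop of water is an approximation, not a figure from the reporting.

```python
# Back-of-the-envelope comparison of the reported per-prompt figures.
# Assumption: one drop of water is roughly 0.05 mL (a common approximation).

DROP_ML = 0.05  # assumed volume of a single water drop, in milliliters

reports = {
    "Gemini (Google)":       {"kwh": 0.00024, "water_ml": 0.26},
    "ChatGPT":               {"kwh": 0.0003,  "water_ml": 0.38},
    "Mistral (older model)": {"kwh": None,    "water_ml": 50.0},
}

for name, r in reports.items():
    drops = r["water_ml"] / DROP_ML
    energy = f"{r['kwh'] * 1000:.2f} Wh" if r["kwh"] is not None else "n/a"
    print(f"{name}: {energy}, {r['water_ml']} mL of water (~{drops:.0f} drops)")

# Google's claimed factor-of-33 reduction implies roughly
# 0.00024 kWh * 33 ≈ 0.008 kWh (about 8 Wh) per prompt a year earlier.
print(f"Implied prior Gemini energy per prompt: {0.00024 * 33 * 1000:.1f} Wh")
```

Under that assumed drop size, 0.26 to 0.38 milliliters works out to roughly five to eight drops, in the same ballpark as Mollick's "6 drops," while Mistral's 50 milliliters is on the order of a thousand drops per query.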
A piece in The Verge challenged some of those estimates.
The site spoke to Shaolei Ren, an associate professor of electrical and computer engineering at the University of California, Riverside, who said Google left out key data in its study, leaving the story of Gemini’s environmental impact with significant gaps.
One of those gaps is that Google omitted indirect water use from its estimates, counting only the water that data centers use in cooling systems to keep servers from overheating.
A more accurate accounting of environmental damage would include impacts on local water resources, as well as carbon emissions calculated from the local power grid's current mix of clean and dirty energy.
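To illustrate what that grid-mix accounting means in practice, here is a minimal sketch; the carbon-intensity values below are hypothetical placeholders for a cleaner and a dirtier grid, not measurements of any real region.

```python
# Illustrative only: per-prompt emissions depend on the local grid's carbon
# intensity (grams of CO2 emitted per kWh of electricity generated).
# The intensity values below are assumed placeholders, not measured data.

ENERGY_PER_PROMPT_KWH = 0.00024  # Google's reported figure for Gemini

hypothetical_grids = {
    "mostly renewable grid": 50,    # gCO2 per kWh (assumed)
    "mostly coal-fired grid": 800,  # gCO2 per kWh (assumed)
}

for grid, intensity_g_per_kwh in hypothetical_grids.items():
    grams_co2 = ENERGY_PER_PROMPT_KWH * intensity_g_per_kwh
    print(f"{grid}: ~{grams_co2:.3f} g CO2 per prompt")
```

The point is that identical per-prompt energy can translate into very different emissions depending on where the data center draws its power, which is why the local grid mix matters for the kind of accounting Ren describes.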
This is an apt moment to mention that Jon Ippolito, Professor of New Media at the University of Maine, has developed a still-evolving interactive tool for estimating and comparing the environmental impacts of AI.
Check it out here: What Uses More