- “Explained: Generative AI’s environmental impact.” MIT News, Adam Zewe, January 17, 2025.
- This explainer from MIT News outlines how generative AI models consume vast amounts of energy and water through training and inference.
- “We did the math on AI’s energy footprint. Here’s the story you haven’t heard.” MIT Technology Review, James O’Donnell and Casey Crownhart, May 20, 2025.
- A data-rich investigation quantifying the electricity use and carbon emissions of large AI systems. The article examines how tech giants’ growing computational demands are outpacing renewable-energy gains and what that means for climate goals.
- “What are scope 1, 2 and 3 carbon emissions?” National Grid, July 1, 2024.
- A factual explainer on the international framework for categorizing emissions. It clarifies how Scope 1 (direct), 2 (purchased energy), and 3 (value-chain) emissions work, which underpin sustainability reporting for data centers and AI operations.
- “Demystifying Data Center Scope 3 Carbon.” Data Center Dynamics, Paul Lin, July 26, 2023.
- Explores the indirect, often overlooked emissions created by data-center supply chains, construction, and hardware manufacturing. It argues that Scope 3 data is essential for truly understanding the climate footprint of digital infrastructure.
- “How much energy does Google’s AI use? We did the math.” Google Cloud Blog, Amin Vahdat and Jeff Dean, August 21, 2025.
- Google describes its methodology for calculating the energy, water, and carbon costs of running AI models during inference. This post contributes to ongoing industry efforts to standardize environmental reporting for cloud-based AI services.
- “Understanding the carbon footprint of AI and how to reduce it.” Carbon Direct, Julio Friedmann and Colin McCormick, November 19, 2024.
- This brief explains both operational and embodied emissions in AI systems, offering mitigation strategies such as efficiency, optimization, carbon removal, and transparent accounting.
- “What kind of environmental impacts are AI companies disclosing? (And can we compare them?)” Hugging Face, Sasha Luccioni and Theo Alves Da Costa, September 17, 2025.
- Reviews disclosure reports from leading AI firms, revealing inconsistent or incomplete reporting of water, energy, and emissions data.
- “Do we know how to measure AI’s environmental impact?” International Telecommunication Union (ITU), July 10, 2025.
- Summarizes the ITU’s efforts to develop global metrics for evaluating AI’s lifecycle footprint – from model training and deployment to hardware and data management.
- “AI’s Growing Carbon Footprint.” Columbia Climate School – State of the Planet, Renee Cho, June 9, 2023.
- Discusses how the surge in AI computing could counteract broader climate-mitigation efforts if left unchecked.
- “Measuring and Standardizing AI’s Energy and Environmental Footprint to Accurately Assess Impacts.” Federation of American Scientists (FAS), Mitul Jhaveri and Vijaykumar Palat, June 27, 2025.
- This policy brief calls for transparent, standardized reporting on AI energy use and broader environmental impacts. It proposes frameworks for federal agencies and research institutions to track AI-related emissions systematically.
Environmental Impact Hub
What does it cost to ask AI a question? Not in subscription fees, but in electricity, water, and rare earth minerals. As generative AI has moved from research labs into everyday use, these hidden costs have multiplied, but they remain largely invisible to the millions of people now using these tools daily.
AI technology has existed for decades, but generative AI (systems that create text, images, video, and code on demand) represents a fundamental shift in scale and accessibility. Since ChatGPT's launch in late 2022, these tools have exploded into mainstream use, with millions of people making requests that require enormous computational power. Unlike earlier AI applications that ran specific, limited tasks, generative AI models are trained on massive datasets and remain "always on," ready to generate content at any moment. The shift from specialized AI tools to general-purpose, widely accessible systems has accelerated both adoption and environmental impact.
Yet conversations about AI often focus on capabilities and ethical issues like bias and privacy, while environmental dimensions receive far less attention. Part of this silence stems from the difficulty of measuring impact: companies rarely disclose energy consumption, researchers lack standardized metrics, and the global infrastructure supporting AI is often invisible to end users.
This resource provides a starting point for understanding AI's environmental footprint across its full lifecycle: from the mining of rare earth minerals to the disposal of outdated hardware. Our goal is to make this information accessible, allowing you to engage with these questions and make informed decisions about your own AI use.
How to use this resource:
This resource follows AI's lifecycle journey from raw material extraction through manufacturing, training, and daily use, ending with environmental justice. Environmental justice recognizes that marginalized communities (particularly communities of color, those in low-income areas, and Indigenous populations) disproportionately bear the burden of environmental hazards while having the least power in environmental decision-making.
While the following sections build on each other to tell a complete story, you don't need to read everything or follow a linear path. Choose the sections most relevant to your interests or questions; the resource is designed for self-directed exploration. Each section includes two types of sources: sources under “jumping off” provide overviews without heavy technical detail, while sources under “deeper dive” are more technical and comprehensive, with methods and data. This is a living document, and we will continue updating it as new research emerges.
- Getting Started: New to AI’s environmental impact? Start here.
- The Measurement Challenge: How we track and quantify AI’s footprint.
- From Chip to Landfill: Mining, manufacturing, and disposal of AI’s hardware.
- Physical Foundations: Data centers, power systems, and infrastructure demands.
- Training AI Models: The enormous energy costs of teaching AI systems.
- AI in Daily Use: Energy consumption from everyday algorithms.
- Uneven Geographic Burdens: How AI’s costs fall disproportionately on certain regions.
Getting Started
Before we unpack these impacts in detail, here are a few starting points for understanding the scale of AI’s physical footprint:
- Article: “Why Do AI Datacenters Use So Many Resources?” Engadget, Daniel Cooper and Cheyenne MacDonald, October 3, 2025.
- Video: “A.I.’s Environmental Impact Will Threaten Its Own Supply Chain.” The New York Times Opinion, video by Kate Crawford, Ryan S. Jeffery, and Adam Westbrook, September 26, 2025.
- Video: “We did the math on AI’s energy footprint. Here’s the story you haven’t heard.” YouTube: MIT Technology Review, September 9, 2025.
- Podcast: “How Data Centers Actually Work.” Uncanny Valley: WIRED, Lauren Goode, Michael Calore, and Zoe Schiffer, October 23, 2025.
The Measurement Challenge
Unlike a car’s fuel-efficiency rating or a refrigerator’s energy label, AI systems don’t come with standardized environmental reporting. Companies treat energy consumption and emissions data as proprietary information. Researchers work with incomplete data and estimates. The infrastructure spans global supply chains, making it nearly impossible to track the full picture from rare earth mining to e-waste.
This isn’t just a technical problem; it’s also political and ethical. What we choose to measure shapes what we can address. Should we focus on energy during training, or include manufacturing emissions? Per-query efficiency, or total system impact? Without standardized metrics, we can’t compare models, hold companies accountable, or decide which AI applications justify their environmental cost.
This section introduces key frameworks and terminology for understanding AI's environmental footprint. It explains why measurement remains so difficult and explores emerging efforts to create standards. Understanding these foundations is essential for interpreting environmental claims and recognizing what stays unmeasured. As measurement practices evolve, so will this resource.
From Chip to Landfill
Every AI model runs on physical hardware – processors, memory, storage systems – and that hardware carries substantial environmental costs long before it enters a data center. The rare earth minerals essential for modern chips come from mining operations that devastate ecosystems and exploit labor, often in regions far from where AI systems are ultimately deployed. Manufacturing semiconductors requires extraordinary amounts of pure water, toxic chemicals, and energy; production facilities consume resources at scales that strain regional infrastructure.
This material footprint is often invisible in conversations about AI's environmental impact, which tend to focus on operational energy use. But hardware manufacturing accounts for significant lifecycle emissions. Making matters worse, AI infrastructure has remarkably short replacement cycles: chips are frequently swapped out for newer, more efficient architectures after just one to three years. This creates mounting streams of electronic waste.
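To see why replacement cycles matter, lifecycle assessments commonly amortize a device's manufacturing ("embodied") emissions over its service life and add its yearly operational emissions. The sketch below is illustrative only; every figure is a placeholder assumption, not a measured value for any real accelerator.

```python
# Illustrative sketch: annualized footprint of an AI accelerator.
# All numbers are placeholder assumptions for demonstration, not measurements.

def annual_footprint_kgco2e(embodied_kgco2e: float,
                            lifespan_years: float,
                            annual_energy_kwh: float,
                            grid_kgco2e_per_kwh: float) -> float:
    """Amortized embodied emissions plus yearly operational emissions."""
    embodied_per_year = embodied_kgco2e / lifespan_years
    operational_per_year = annual_energy_kwh * grid_kgco2e_per_kwh
    return embodied_per_year + operational_per_year

# Same hypothetical chip, retired after 2 years vs. kept for 5 years:
for lifespan in (2, 5):
    total = annual_footprint_kgco2e(
        embodied_kgco2e=1_500,        # assumed manufacturing footprint
        lifespan_years=lifespan,
        annual_energy_kwh=3_000,      # assumed yearly energy use
        grid_kgco2e_per_kwh=0.4,      # assumed grid carbon intensity
    )
    print(f"{lifespan}-year lifespan: ~{total:,.0f} kg CO2e per year")
```

Shorter lifespans spread the same manufacturing emissions over fewer years of service, which is why hardware turnover, and not just operational efficiency, drives the totals.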
Unlike operational emissions, which might be reduced through renewable energy, the material impacts of hardware production are harder to mitigate. The environmental and human costs are often externalized to communities near extraction sites, manufacturing facilities, and informal e-waste processing operations. This makes AI's material footprint a matter of global environmental justice.
Physical Foundations
When we use AI tools, the interaction feels weightless (a text prompt followed by an instant response). But behind that seamless interface sits massive physical infrastructure: data centers filled with servers that require enormous amounts of electricity to run and even more resources to keep cool. As AI models grow larger and more widely deployed, data center construction is accelerating globally to meet demand.
Data centers consume staggering amounts of both energy and water. Cooling systems prevent servers from overheating, often using water-intensive evaporative cooling or drawing heavily on local electric grids. In regions already facing water scarcity or grid stress, this demand can strain local resources and communities.
Companies frequently tout their use of renewable energy or carbon offsets, but these claims deserve scrutiny. Purchasing renewable energy credits doesn’t necessarily mean a data center runs on clean power in real time; it is an accounting mechanism, not a direct connection to renewable sources. These purchases don't eliminate the strain corporate demand places on regional grids and water supplies.
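The distinction at stake is the one the GHG Protocol draws between "location-based" Scope 2 accounting (the emissions of the grid actually supplying the facility) and "market-based" accounting (which lets purchased certificates reduce reported emissions). The toy numbers below are assumptions chosen only to show how the two methods can diverge for the same electricity use.

```python
# Toy comparison of location-based vs. market-based Scope 2 accounting.
# All figures are illustrative assumptions, not data from any real facility.

annual_electricity_mwh = 100_000          # assumed data-center consumption
local_grid_tco2e_per_mwh = 0.45           # assumed intensity of the local grid
recs_purchased_mwh = 100_000              # certificates covering 100% of use

# Location-based: what the grid serving the site actually emitted.
location_based = annual_electricity_mwh * local_grid_tco2e_per_mwh

# Market-based: certificate purchases are netted out of reported emissions.
uncovered_mwh = max(annual_electricity_mwh - recs_purchased_mwh, 0)
market_based = uncovered_mwh * local_grid_tco2e_per_mwh

print(f"Location-based Scope 2: {location_based:,.0f} tCO2e")
print(f"Market-based Scope 2:   {market_based:,.0f} tCO2e")
```

Both figures describe the same facility drawing the same power; only the accounting changes, which is why headline "carbon-free" claims warrant a closer look at the method behind them.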
Meanwhile, efficiency improvements at individual facilities are often overwhelmed by explosive growth in AI deployment. A data center might become 20% more efficient, but if AI usage doubles or triples, total resource consumption still climbs: a facility that cuts energy per unit of computation by 20% while its workload triples still uses roughly 2.4 times as much energy. Sustainable AI will therefore require not just better technology, but fundamental limits on growth and transparent accountability for local impacts.
Training AI Models
Training a large AI model is enormously resource-intensive. Thousands of specialized processors run continuously for weeks or months, consuming massive amounts of electricity and generating substantial carbon emissions. As models have grown from millions to billions to trillions of parameters, the computational requirements and environmental costs of training have increased exponentially. A single training run for a state-of-the-art model can emit as much carbon as five average cars produce over their entire lifetimes.
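Published estimates of training emissions generally follow the same back-of-the-envelope structure: hardware power, multiplied by the number of accelerators and the training time, scaled by data-center overhead (PUE) and the carbon intensity of the supplying grid. The sketch below uses entirely hypothetical inputs to show the shape of that calculation, not the footprint of any specific model.

```python
# Rough estimate of training emissions, following the common
# energy-times-carbon-intensity approach. All inputs are hypothetical.

def training_emissions_tco2e(num_accelerators: int,
                             avg_power_kw: float,
                             training_hours: float,
                             pue: float,
                             grid_kgco2e_per_kwh: float) -> float:
    """Energy drawn by the hardware, inflated by facility overhead (PUE),
    converted to emissions via the grid's carbon intensity."""
    energy_kwh = num_accelerators * avg_power_kw * training_hours * pue
    return energy_kwh * grid_kgco2e_per_kwh / 1_000  # kg -> tonnes

estimate = training_emissions_tco2e(
    num_accelerators=4_096,     # assumed cluster size
    avg_power_kw=0.4,           # assumed average draw per accelerator
    training_hours=24 * 60,     # assumed two-month run
    pue=1.2,                    # assumed data-center overhead
    grid_kgco2e_per_kwh=0.4,    # assumed grid carbon intensity
)
print(f"~{estimate:,.0f} tCO2e for this hypothetical training run")
```

Every input here is uncertain in practice (companies rarely disclose cluster sizes, run times, or grid mixes), which is exactly why independent estimates vary so widely.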
Training is often framed as a one-time cost: train a model once, then deploy it for countless users. But this understates the reality. Models are frequently retrained to improve performance, updated with new data, and fine-tuned for specific applications. Each iteration carries its own environmental cost.
The race toward ever-larger models reveals a tension. The efficiency improvements that make training feasible also enable the construction of even more resource-hungry systems. Without mandatory disclosure and environmental accounting, this cycle will likely continue until external constraints (regulatory, resource-based, climate-related) force a reckoning.
AI in Daily Use
While training a model happens once or periodically, inference (the process of actually using a trained model to generate responses, make predictions, or produce images) happens billions of times per day across millions of users. Each query requires computational resources and energy. Individual interactions may seem negligible, but at the scale of widespread deployment, inference can rival or exceed training as a source of environmental impact.
This creates what economists call the "rebound effect," or the Jevons paradox: efficiency improvements that reduce per-query energy costs can paradoxically increase total environmental impact. A model that uses half the energy per query but gets used ten times more often still consumes five times as much energy overall. As AI systems become more efficient and user-friendly, they integrate more deeply into everyday products and workflows, multiplying the total number of queries.
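A quick worked example makes the rebound arithmetic concrete; the per-query and usage figures below are assumptions for illustration only.

```python
# Rebound effect in miniature: halving per-query energy while usage
# grows tenfold still multiplies total consumption. Numbers are illustrative.

energy_per_query_wh_before = 3.0       # assumed energy per query
energy_per_query_wh_after = 1.5        # 2x more efficient
queries_per_day_before = 100_000_000   # assumed daily usage
queries_per_day_after = 1_000_000_000  # 10x more usage

total_before = energy_per_query_wh_before * queries_per_day_before / 1e6  # MWh/day
total_after = energy_per_query_wh_after * queries_per_day_after / 1e6

print(f"Before: {total_before:,.0f} MWh/day")
print(f"After:  {total_after:,.0f} MWh/day  ({total_after / total_before:.0f}x increase)")
```

Per-query efficiency doubled, yet total consumption rose fivefold, because usage grew faster than efficiency improved.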
This paradox means technological efficiency alone cannot solve the problem. We also need to consider questions of appropriate use and scale: Which applications justify AI's resource costs? Where does AI actually add value, and where is it simply convenient? Without grappling with these questions, each efficiency gain may simply accelerate AI's expansion into more domains of daily life, amplifying its overall environmental footprint.
Uneven Geographic Burdens
AI’s environmental footprint is not distributed equally across the globe. The rare earth minerals that make AI hardware possible are extracted primarily in regions of the Global South, where mining operations contaminate water supplies, displace communities, and generate toxic waste with minimal accountability. Semiconductor manufacturing concentrates in areas with fewer labor and environmental protections. When hardware reaches the end of its short lifespan, much of it is shipped as e-waste to countries with limited capacity to process it safely, exposing workers and communities to toxic materials. Meanwhile, the benefits of AI development (the profits, the technological capabilities, the decision-making power) concentrate in wealthy nations far removed from these harms.
Within the United States and other wealthy nations, data centers are often sited in communities with less political power to resist them. These facilities draw heavily on local water supplies and electrical grids, sometimes exacerbating existing resource scarcity. Virginia, for example, hosts the world's largest concentration of data centers, creating unprecedented strain on the state's energy infrastructure and raising questions about whose needs are prioritized when resources become scarce.
The communities most burdened by AI's material demands are frequently the same communities least responsible for driving AI adoption and most vulnerable to climate impacts that AI's energy consumption helps accelerate. This reflects deeper patterns of environmental injustice: decisions about where to externalize costs and whose environments become "sacrifice zones" for technological progress. These aren't accidental outcomes; they follow historical patterns of how industrial development has distributed benefits and burdens along lines of race, class, and geography.