The rise of Generative AI (GenAI) marks one of the most significant and disruptive developments in the history of computing.
GenAI is transforming how we work, think, interact, learn, teach, and create. Its benefits are substantial—but so are the risks. These include misinformation and disinformation, ethical concerns, over-reliance and deskilling, privacy and surveillance, security vulnerabilities, economic disruption, and the growing concentration of power in the hands of a few.
One of the more urgent concerns is the immense energy required to power GenAI systems. This raises pressing questions: How serious is GenAI’s energy use? How does it compare to a Google Search? If GenAI is energy-intensive, what is being done to address it? And how resilient are the infrastructure systems that support it? This article briefly explores these questions and highlights where knowledge gaps remain.
Whether GenAI is viewed as a societal benefit or a threat, one thing is clear: it is here to stay, evolving rapidly, and raising questions we cannot ignore.
How Does GenAI Work?
Training the Model
GenAI models are trained on enormous collections of text, images, audio, and other data drawn from the internet and additional sources. This information is processed by neural networks—digital systems built from mathematical rules and structures designed to mimic how the human brain learns.
Neural networks use these data inputs to recognize patterns and make predictions. For example, when encountering the phrase “Canada is known for its …,” the system learns to predict likely completions such as “… cold winters” or “… maple syrup.” When predictions are incorrect, the network adjusts its internal parameters to reduce the error. Repeating this process billions of times gradually refines the model into a highly capable one.
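The predict-and-adjust loop described above can be sketched in miniature. This toy example is not a language model—it fits a single numeric parameter rather than billions—but it shows the same principle: make a prediction, measure the error, and nudge the parameter to reduce it.

```python
# Toy illustration of the predict-and-adjust training loop.
# A real GenAI model tunes billions of parameters; here we tune one.

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # inputs x with targets y = 2x

w = 0.0              # the single "internal parameter"
learning_rate = 0.05

for step in range(200):
    for x, y in data:
        prediction = w * x               # make a prediction
        error = prediction - y           # how wrong was it?
        w -= learning_rate * error * x   # adjust the parameter to reduce error

print(f"learned parameter: {w:.3f}")  # converges toward 2.0
```

After enough repetitions the parameter settles near 2.0, the value that makes the predictions match the targets—scaled up enormously, this is what "training" means.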
Neural networks are trained on specialized hardware, like graphics processing units (GPUs). While similar in concept to the GPUs found in personal computers, which are designed for graphics rendering, the GPUs used for GenAI are purpose-built to handle many calculations in parallel, often executing trillions of operations per second.
Once a model is trained, it becomes ready for inference—the phase where it generates responses to user prompts.

From Prompt to Response
When you enter a prompt like “What is urban resilience?” into a GenAI chatbot like ChatGPT, the trained model draws on its learned probabilities, pattern recognition, and encoded relationships to construct a response. For instance, it identifies key terms such as “resilience” and connects them to related concepts encoded in its parameters, such as “adaptation” and “climate.”
A robust infrastructure system supports this process, which unfolds in several distinct stages.
1. Making the Request
The application or website (such as chat.openai.com) prepares the request, packaging the user input along with metadata such as a session ID, the model being called, and configuration parameters.
2. Sending the Request Through the Network
The package is encoded into binary and prepared for transmission over the internet. It passes through your local network infrastructure (router, modem, Ethernet) into your internet service provider’s (ISP) systems, traveling through fibre networks and multiple routers and switches until it reaches the cloud provider’s data centre.
3. Processing the Request at the Data Centre
GenAI prompts are routed to an inference server—a cluster of GPUs that hold the trained AI model. Small models may run on a single GPU, while large models like GPT-4o may require up to 64 GPUs per instance. Each instance can process multiple requests simultaneously through batching, but if the demand exceeds capacity, additional instances are used, or requests are queued.
4. Returning the Response
Once the response is generated, it is encoded and packaged again, routed back through the fibre networks and ISP infrastructure, and delivered to your local network. Your device then unpacks and decodes the response, rendering it for viewing (if text, images, or video) or playing it through speakers (if audio).
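The four stages above can be sketched as a simplified simulation. The field names and the echo-style "model" below are invented for illustration—real APIs differ—but the shape of the round trip (package, encode, batch, respond, decode) is the same.

```python
import json

# Stage 1: the client packages the prompt with metadata (field names are illustrative).
request = {
    "session_id": "abc-123",
    "model": "example-model",
    "prompt": "What is urban resilience?",
}

# Stage 2: the request is encoded into bytes for transmission over the network.
payload = json.dumps(request).encode("utf-8")

# Stage 3: the inference server decodes incoming requests and processes them as a batch.
def serve_batch(payloads):
    batch = [json.loads(p.decode("utf-8")) for p in payloads]
    # Placeholder "model": real inference runs the trained model on GPU clusters.
    return [json.dumps({"reply": f"Response to: {req['prompt']}"}).encode("utf-8")
            for req in batch]

# Stage 4: the response travels back and the client decodes it for display.
responses = serve_batch([payload])
reply = json.loads(responses[0].decode("utf-8"))["reply"]
print(reply)
```

Batching in stage 3 is why `serve_batch` takes a list: grouping concurrent requests lets one model instance amortize the cost of each GPU pass across many users.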
This process is not fundamentally different from how most internet-based actions work: your device sends requests across networks to data centres, where responses are generated and sent back. What sets GenAI apart is the data centre configuration—where a standard Google Search relies on energy-efficient central processing units (CPUs) to query static databases, GenAI inference servers require energy-intensive GPUs to run advanced models and perform complex, real-time computations.
How Much Energy Does GenAI Use?
Global Energy Estimates and Projections
In 2024, Goldman Sachs reported that a single ChatGPT prompt (~2.9 watt-hours) used about 10 times the energy of a typical Google Search query (~0.3 watt-hours) and warned that the rise of GenAI could drive data centre power demand up by 160% by 2030.
In the US, data centres could draw 8% of the nation’s total power by 2030, with utilities projected to invest around $50 billion in new generation capacity to meet this demand. In Europe, nearly $1 trillion will be needed over the next decade to upgrade transmission and distribution systems to keep pace with growing data centre loads—particularly challenging for their aging grid infrastructure.
Why Energy Use Estimates Vary
At first glance, these numbers seem alarming, especially given the rapid uptake of GenAI tools and the surge in prompt activity. Yet, energy use estimates vary widely:
- MIT News reports that a ChatGPT prompt consumes about 5 times the energy of a Google Search query, or ~1.5 watt-hours using Goldman Sachs’ numbers.
- Researchers from the University of Rhode Island, University of Tunis, and Providence College estimate that a short GPT-4o query uses ~0.42 watt-hours, about 1.4 times a Google Search.
- Sam Altman, CEO of OpenAI, stated that the average query uses about 0.34 watt-hours, about what a high-efficiency lightbulb consumes in a couple of minutes, or 1.2 times a Google Search.
These differences may reflect what each source is measuring. Some may include the full system—data centre cooling, power losses, and network energy—while others refer only to GPU inference energy. Variation may also stem from whether averages over time or spot measurements are used, the hardware or models assessed, or reporting bias. The takeaway is that we lack clear, standardized benchmarks for GenAI’s energy footprint, making it difficult to compare precisely against other digital activities like search.
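Taking the Goldman Sachs baseline of ~0.3 watt-hours per Google Search, the spread across these estimates is easy to recompute (figures are the per-prompt numbers reported by each source, as cited above):

```python
# Per-prompt energy estimates (watt-hours), as reported by each source above.
google_search_wh = 0.3  # Goldman Sachs' baseline for one search query

estimates_wh = {
    "Goldman Sachs (ChatGPT prompt)": 2.9,
    "MIT News (ChatGPT prompt)": 1.5,
    "URI/Tunis/Providence (short GPT-4o query)": 0.42,
    "Sam Altman (average query)": 0.34,
}

for source, wh in estimates_wh.items():
    ratio = wh / google_search_wh
    print(f"{source}: {wh} Wh = {ratio:.1f}x a Google Search")
```

The ratios span roughly 1.1x to 9.7x the same baseline—an order-of-magnitude disagreement, which is exactly why standardized measurement would matter.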

Task Complexity Matters
Although the exact energy footprint of GenAI remains unclear, what is clear is that task complexity plays a major role. A simple text response from an AI model may use up to ~1 watt-hour, high-resolution image generation can require up to ~3 watt-hours, and generating a five-second video may consume up to ~1,000 watt-hours.
To put this into perspective, MIT Technology Review describes a scenario where a marathon charity runner uses GenAI to ask 15 fundraising questions, create 10 flyer images, and generate three five-second Instagram videos, consuming ~2,900 watt-hours—roughly the energy needed to ride 160 kilometres on an e-bike, drive 16 kilometres in an electric car, or run a microwave for over three and a half hours.
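The charity-runner scenario can be tallied from the per-task ceilings above. Since each figure is an "up to" estimate, the result is an upper bound rather than a measurement, and it lands close to the ~2,900 watt-hours cited:

```python
# Upper-bound per-task energy estimates (watt-hours) from the figures above.
TEXT_WH = 1.0      # one text response (up to ~1 Wh)
IMAGE_WH = 3.0     # one high-resolution image (up to ~3 Wh)
VIDEO_WH = 1000.0  # one five-second video (up to ~1,000 Wh)

# The marathon-runner scenario: 15 questions, 10 flyer images, 3 short videos.
total_wh = 15 * TEXT_WH + 10 * IMAGE_WH + 3 * VIDEO_WH
print(f"scenario total: {total_wh:.0f} Wh")  # 3045 Wh
```

Note that video generation dominates: the three short videos account for over 98% of the total, which is why per-task complexity matters more than prompt counts.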
What Is the Alternative?
Again, while these numbers may sound alarming—and it is important to acknowledge that they may not accurately reflect reality—one critical factor is often missing from the discussion: what is the alternative?
At present, GenAI use is surging, in part because it is new, and people, organizations, and industries are still exploring its capabilities and potential uses. Recall when ChatGPT introduced image generation earlier this year: OpenAI soon had to limit image requests, with Sam Altman half-joking that the GPUs were melting under the flood of demand. Such surges are typical of any novel technology undergoing rapid adoption.
But importantly: if not GenAI, then what? Before GenAI, the marathon runner would likely have turned to Google Search for fundraising advice. How many searches would it have taken to gather comparable answers? How many websites loaded, videos watched, social media posts skimmed, or forum threads browsed? Each of those steps consumes energy.
As for the image and video materials, without GenAI, the runner might have hired a designer—involving web searches to find a vendor, back-and-forth emails, online searches for inspiration, and ultimately, the artist using their own energy-intensive design tools. Large media files would then be shared online, adding further to the total energy footprint.
Without detailed comparative studies, it remains uncertain whether GenAI increases overall energy consumption—but it is a question that deserves more careful investigation.
Why Is Nuclear Power Tied to GenAI?
Today’s Increasing Energy Demands
As global energy demand increases—particularly from computing, networking, and data systems—governments are under pressure to expand electricity production. Clean energy sources like wind, solar, hydro, geothermal, and nuclear are central to these efforts. Among them, nuclear stands out for its ability to deliver large-scale, consistent, low-carbon electricity. In fact, nuclear is the largest single low-carbon electricity source among advanced economies, and achieving climate targets will be significantly more difficult without expanded investment in nuclear infrastructure.
At the same time, many power grids are now integrating a diverse mix of renewable sources. But this diversity brings added complexity. Following a major blackout in the Iberian Peninsula in April 2025, some commentators blamed over-reliance on intermittent renewables. Grid experts pushed back, arguing that while renewables do add complexity, effective system design and management are the determining factors. Still, many point to nuclear power as a stabilizing force—capable of providing secure, dispatchable electricity to complement renewable-heavy grids.
GenAI’s Accelerating Influence
Alongside climate goals and energy security concerns, the GenAI boom has intensified calls for nuclear expansion. Data centres supporting GenAI require enormous, around-the-clock electricity—beyond what many grids can currently deliver.
In late 2024, Microsoft announced a 20-year deal to source power from the soon-to-reopen Three Mile Island nuclear plant to supply its growing fleet of data centres. Earlier this year, the Trump Administration announced the $500 billion Stargate AI Project, a partnership between OpenAI, Oracle, and SoftBank, aiming to build next-generation AI data centres—and actively exploring nuclear power to meet their vast energy needs.

The Rise of Small Modular Reactors
Amazon, too, has announced investments in small modular reactors (SMRs)—compact, lower-cost nuclear units that can be deployed closer to grids and brought online more quickly than traditional reactors. SMRs are seen as a promising solution to help meet the surging power demands of AI, cloud computing, and other innovation sectors.
Ontario, notably, is positioning itself as a leader in SMR development. The province recently approved the construction of four SMRs at the Darlington nuclear site, marking the first such project among G7 countries. This investment reflects Ontario’s strategy to prioritize electricity access for data centres, viewing the sector as a driver of economic growth, innovation, and job creation.
Ontario’s bet on SMRs reflects a broader trend: as data centres grow and GenAI expands, nuclear power is increasingly seen as part of the solution to future energy needs. This renewed interest may help shift perspectives shaped by past nuclear fears, but it also raises a critical question: how resilient are these systems?
What Does This Mean for Resilience?
Systems Under Strain
As GenAI accelerates and investments in next-generation energy sources like SMRs grow, we must look beyond innovation and ask whether these complex systems are being built to endure disruption. In a world increasingly shaped by climate shocks, geopolitical tension, and digital threats, resilience is not optional—it is essential.
Consider the rise in extreme weather: the US National Weather Service has issued over 3,000 flash flood warnings in 2025 alone—more than in any year since 1986. Central Texas saw at least 120 deaths from recent floods. South Korea has faced deadly rains and mass evacuations. China is grappling with record-breaking heat and $7.6 billion in disaster losses. And in Canada, wildfires have already burned twice the average area seen over the past decade.
Even as technology advances, overlapping crises underscore the need to design critical infrastructure—including those systems powering GenAI—with resilience at the core. SMRs are framed as a clean, stable energy solution, but their reliability is not assured. Research shows that climate-linked outages at nuclear plants have increased in recent years. While SMRs tout advantages like passive cooling and flexible siting, they remain vulnerable to extreme weather, water shortages, and cyber risks. As we scale these systems, we need to ask: are they ready for the challenges ahead?

Learning From Past Infrastructure Failures
I was camping with my parents along Lake Huron in the summer of 2003. On August 14, we spent the day at the beach, unaware that a massive blackout was unfolding across the region. The sun was setting as we packed up and hit the road, and we noticed something was not right—streetlights were dark, gas stations had closed, and traffic signals were not working. We later learned the power was out across much of Ontario and the northeastern US.
Sagging transmission lines—along with other seemingly minor human and technical errors—had triggered a chain reaction, shutting down more than 260 power plants, including several nuclear stations, and leaving 50 million people in the dark. Most areas regained power within days, but the blackout made one thing clear: we are deeply dependent on critical infrastructure.
As climate shocks, global instability, and digital threats grow, large-scale disruptions are becoming more likely. With GenAI advancing quickly—and the infrastructure behind it expanding just as fast—we need to ask: how resilient are these systems? If they fail, how fast can they recover? And what will it mean for society if the systems we rely on most are also among the ones most vulnerable to failure?

Josh Grignon
Josh Grignon is a PhD researcher in the Department of Geography and Environment at Western University. His work examines governance, policy, and cross-sector collaboration for urban climate resilience, with a focus on adaptive governance, public–private partnerships, and infrastructure planning to address extreme weather and long-term climate risks.
