According to reporting by The Narwhal, a large-scale AI data centre is proposed near Olds, positioned as a significant economic development project tied to Canada’s expanding AI infrastructure footprint.
Globally, AI data centre expansion has triggered serious debate. Energy demand is rising sharply. Land requirements are substantial. Emissions implications remain contested. And perhaps most visibly, water consumption for cooling has become a flashpoint, particularly in regions facing water stress.
However, both the proposed data centre and its attached power facility are slated to use closed-loop cooling systems.
Relatively new in the data centre context, closed-loop systems recirculate water rather than continuously drawing fresh supplies. When properly designed, they can dramatically reduce net water withdrawals compared with traditional evaporative cooling towers. That matters in a policy environment where water intensity has become one of the loudest criticisms of AI infrastructure.
That said, closed-loop systems are not impact-free. They do not require constant large-volume water additions, but they can demand more energy to operate chillers and pumps. The broader sustainability equation still depends on electricity sourcing, system design, and climate conditions.
And this is where geography matters.
A conceptual depiction of closed-loop cooling infrastructure inside a data centre, highlighting recirculating water systems used to reduce freshwater withdrawals. Image generated with GPT-5.2.
In cooler regions, including much of Canada, facilities can potentially rely on “free cooling” for significant portions of the year, using low ambient temperatures to dissipate heat with far less mechanical chilling. In those contexts, closed-loop systems paired with climate advantages may substantially reduce both water withdrawals and energy intensity.
This is an important reminder that AI infrastructure is not static. It evolves in response to criticism, regulation, and technical constraint. Concerns about water use have been loud and persistent. Engineers are responding. Cooling architectures are changing.
For Canada, there is a strategic opportunity here. As a northern country, long-time U.S. partner, and increasingly active AI investor, Canada is well positioned to host infrastructure that leverages climate advantages to reduce cooling burdens. But that will not happen by default.
Governments funding AI capacity build-out should ensure that facilities are designed to minimize environmental footprints. Closed-loop systems, free cooling integration, and transparent reporting should not be optional design features. They should be baseline expectations.
Progress does not need to be halted. But it should be shaped. And in cases like this, we are seeing in real time how scrutiny can help drive technological evolution in a more sustainable direction.
In a recent editorial for the Global Policy Journal, Olivia Caruso and I examined the aftermath of Hurricane Melissa to highlight a recurring challenge in climate adaptation: recovery consistently receives more attention and funding than risk reduction. While rapid post-disaster financing is essential, it often reinforces a cycle where investment only arrives after damage has already occurred.
Using Jamaica’s experience with Hurricane Melissa, we show how tools like catastrophe bonds can provide fast liquidity following a disaster, but do little to reduce exposure beforehand. These mechanisms help governments respond, but they do not prevent loss of life or infrastructure damage when climate risks are already well understood.
The editorial also reflects on how political incentives and public narratives tend to celebrate recovery and community resilience, while downplaying the governance and investment failures that make such resilience necessary in the first place. Without shifting priorities toward proactive risk reduction, communities remain locked into an increasingly costly cycle of response and rebuilding.
In a recent editorial for the Canadian Science Policy Centre (CSPC), Olivia Caruso and I explored how Canada’s defence spending could play a greater role in building climate-resilient infrastructure. The piece argues that while defence budgets are often viewed solely through the lens of national security, they can also serve as a catalyst for domestic resilience—especially when coordinated with civilian infrastructure investment.
We propose a defence–civilian co-funding mechanism, where federal defence dollars could complement municipal procurement budgets to ensure climate resilience is integrated from the design stage of critical infrastructure projects. This approach would reduce long-term vulnerability and limit the need for costly military disaster response.
The editorial also calls for better alignment between federal defence priorities, public safety objectives, and local infrastructure planning. By bridging these policy areas, Canada could simultaneously advance its climate adaptation goals and strengthen critical systems that underpin both national and community security.
The rise of Generative AI (GenAI) marks one of the most significant and disruptive developments in the history of computing.
GenAI is transforming how we work, think, interact, learn, teach, and create. Its benefits are substantial—but so are the risks. These include misinformation and disinformation, ethical concerns, over-reliance and deskilling, privacy and surveillance, security vulnerabilities, economic disruption, and the growing concentration of power in the hands of a few.
One of the more urgent concerns is the immense energy required to power GenAI systems. This raises pressing questions: How serious is GenAI’s energy use? How does it compare to a Google Search? If GenAI is energy-intensive, what is being done to address it? And how resilient are the infrastructure systems that support it? This article briefly explores these questions and highlights where knowledge gaps remain.
Whether GenAI is viewed as a societal benefit or a threat, one thing is clear: it is here to stay, evolving rapidly, and raising questions we cannot ignore.
How Does GenAI Work?
Training the Model
GenAI models are trained on enormous collections of text, images, audio, and other data drawn from the internet and additional sources. This information is processed by neural networks—digital systems built from mathematical rules and structures designed to mimic how the human brain learns.
Neural networks use these data inputs to recognize patterns and make predictions. For example, when encountering the phrase “Canada is known for its …,” the system learns to predict likely completions such as “… cold winters” or “… maple syrup.” When predictions are incorrect, the network adjusts its internal parameters. Repeating this process billions of times gradually refines its capabilities and ultimately produces a highly capable model.
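The predict-compare-adjust loop described above can be sketched in miniature. The example below is a toy, not a real language model: it learns a probability distribution over three hand-picked completions of “Canada is known for its …” by gradient descent on a cross-entropy loss, with made-up observation frequencies standing in for training data.

```python
import math

# Toy illustration (not a real LLM): learn a probability distribution over
# completions of "Canada is known for its ..." with the same
# predict-compare-adjust loop used to train neural networks.
vocab = ["cold winters", "maple syrup", "poutine"]
# Hypothetical training data: how often each completion was observed.
target_freq = [0.5, 0.4, 0.1]

logits = [0.0, 0.0, 0.0]  # the model's adjustable parameters

def softmax(xs):
    """Turn raw scores into a probability distribution."""
    exps = [math.exp(x) for x in xs]
    return [e / sum(exps) for e in exps]

lr = 0.1  # learning rate: how strongly each error adjusts the parameters
for _ in range(2000):
    probs = softmax(logits)                 # predict
    for i in range(len(vocab)):
        error = probs[i] - target_freq[i]   # compare (cross-entropy gradient)
        logits[i] -= lr * error             # adjust

for word, p in zip(vocab, softmax(logits)):
    print(f"{word}: {p:.2f}")  # predictions converge toward the observed frequencies
```

A real model repeats this kind of adjustment billions of times across billions of parameters; the mechanics, however, are the same predict, compare, and correct cycle.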
Neural networks are trained on specialized hardware, like graphics processing units (GPUs). While similar in concept to the GPUs found in personal computers, which are designed for graphics rendering, the GPUs used for GenAI are purpose-built to handle many calculations in parallel, executing trillions of operations per second.
Once a model is trained, it becomes ready for inference—the phase where it generates responses to user prompts.
Visual depiction of a neural network (GPT-4o, modified)
From Prompt to Response
When you enter a prompt like “What is urban resilience?” into a GenAI chatbot like ChatGPT, the trained model draws on its learned probabilities, pattern recognition, and encoded relationships to construct a response. For instance, it identifies key terms such as “resilience” and connects them to related concepts stored in its memory, such as “adaptation” and “climate.”
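The response-construction step can be illustrated with a toy word-by-word generator. The next-word probability table below is entirely hand-made for illustration; a real model computes these probabilities on the fly from billions of learned parameters rather than looking them up.

```python
import random

# Toy sketch of inference: build a reply one word at a time by sampling
# from next-word probabilities. The table below is hypothetical and
# hand-made; real models compute such probabilities dynamically.
next_word_probs = {
    "<start>":    {"urban": 1.0},
    "urban":      {"resilience": 1.0},
    "resilience": {"means": 0.6, "involves": 0.4},
    "means":      {"adaptation": 0.7, "recovery": 0.3},
    "involves":   {"adaptation": 0.5, "climate": 0.5},
    "adaptation": {"<end>": 1.0},
    "recovery":   {"<end>": 1.0},
    "climate":    {"<end>": 1.0},
}

def generate(seed=0):
    """Sample one word at a time until the end-of-reply token."""
    random.seed(seed)
    word, reply = "<start>", []
    while True:
        options = next_word_probs[word]
        word = random.choices(list(options), weights=list(options.values()))[0]
        if word == "<end>":
            return " ".join(reply)
        reply.append(word)

print(generate())
```

Because each step is sampled from a probability distribution, the same prompt can yield different (but related) responses, which is one reason chatbot answers vary between runs.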
A robust infrastructure system supports this process, which unfolds in several distinct stages.
1. Making the Request
The application or website (such as chat.openai.com) prepares the request, packaging the user input along with metadata such as a session ID, the model being called, and configuration parameters.
2. Sending the Request Through the Network
The package is encoded into binary and prepared for transmission over the internet. It passes through your local network infrastructure (router, modem, Ethernet) into your internet service provider’s (ISP) systems, traveling through fibre networks and multiple routers and switches until it reaches the cloud provider’s data centre.
3. Processing the Request at the Data Centre
GenAI prompts are routed to an inference server—a cluster of GPUs that hold the trained AI model. Small models may run on a single GPU, while large models like GPT-4o may require up to 64 GPUs per instance. Each instance can process multiple requests simultaneously through batching, but if the demand exceeds capacity, additional instances are used, or requests are queued.
4. Returning the Response
Once the response is generated, it is encoded and packaged again, routed back through the fibre networks and ISP infrastructure, and delivered to your local network. Your device then unpacks and decodes the response, rendering it for viewing (if text, images, or video) or playing it through speakers (if audio).
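The packaging and encoding in steps 1 and 2 can be sketched as follows. The field names are hypothetical, chosen for illustration, and do not reflect any provider’s actual wire format.

```python
import json
import uuid

# Step 1 (illustrative): package the user input with metadata.
# These field names are hypothetical, not an actual API schema.
request = {
    "session_id": str(uuid.uuid4()),          # identifies the user session
    "model": "example-model",                 # which model to call
    "temperature": 0.7,                       # a configuration parameter
    "prompt": "What is urban resilience?",    # the user input itself
}

# Step 2 (illustrative): encode the package into bytes (binary)
# for transmission across the network.
payload = json.dumps(request).encode("utf-8")
print(len(payload), "bytes ready to send")
```

On the return trip (step 4), the same thing happens in reverse: the response arrives as bytes, is decoded, and is rendered by your device.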
This process is not fundamentally different from how most internet-based actions work: your device sends requests across networks to data centres, where responses are generated and sent back. What sets GenAI apart is the data centre configuration—where a standard Google Search relies on energy-efficient central processing units (CPUs) to query static databases, GenAI inference servers require energy-intensive GPUs to run advanced models and perform complex, real-time computations.
How Much Energy Does GenAI Use?
Global Energy Estimates and Projections
In 2024, Goldman Sachs reported that a single ChatGPT prompt (~2.9 watt-hours) used about 10 times the energy of a typical Google Search query (~0.3 watt-hours) and warned that the rise of GenAI could drive data centre power demand up by 160% by 2030.
In the US, data centres could draw 8% of the nation’s total power, with utilities projected to invest around $50 billion in new generation capacity to meet this demand. In Europe, nearly $1 trillion will be needed over the next decade to upgrade transmission and distribution systems to keep pace with growing data centre loads—particularly challenging given the continent’s aging grid infrastructure.
Why Energy Use Estimates Vary
At first glance, these numbers seem alarming, especially given the rapid uptake of GenAI tools and the surge in prompt activity. Yet, energy use estimates vary widely:
Researchers from the University of Rhode Island, University of Tunis, and Providence College estimate that a short GPT-4o query uses ~0.42 watt-hours, about 1.4 times a Google Search.
Sam Altman, CEO of OpenAI, stated that the average query uses about 0.34 watt-hours, about what a high-efficiency lightbulb consumes in a couple of minutes, or roughly 1.1 times a Google Search.
These differences may reflect what each source is measuring. Some may include the full system—data centre cooling, power losses, and network energy—while others refer only to GPU inference energy. Variation may also stem from whether averages over time or spot measurements are used, the hardware or models assessed, or reporting bias. The takeaway is that we lack clear, standardized benchmarks for GenAI’s energy footprint, making it difficult to compare precisely against other digital activities like search.
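For a quick sense of how far apart these estimates are, the cited figures can be lined up against the commonly cited ~0.3 watt-hours for a Google Search (small differences from the ratios quoted above come down to rounding):

```python
# Per-prompt energy estimates cited above, compared with the commonly
# cited ~0.3 Wh figure for a typical Google Search query.
GOOGLE_SEARCH_WH = 0.3

estimates_wh = {
    "Goldman Sachs (ChatGPT prompt)": 2.9,
    "URI / Tunis / Providence (short GPT-4o query)": 0.42,
    "Sam Altman (average OpenAI query)": 0.34,
}

for source, wh in estimates_wh.items():
    ratio = wh / GOOGLE_SEARCH_WH
    print(f"{source}: {wh} Wh (~{ratio:.1f}x a Google Search)")

# The highest and lowest estimates differ by almost an order of magnitude.
spread = max(estimates_wh.values()) / min(estimates_wh.values())
print(f"Spread between estimates: ~{spread:.1f}x")
```

A nearly order-of-magnitude spread between published figures is itself the point: without standardized measurement boundaries, per-prompt comparisons remain rough at best.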
High-voltage power lines supplying energy to a data centre (GPT-4o, modified)
Task Complexity Matters
Although the exact energy footprint of GenAI remains unclear, what is clear is that task complexity plays a major role. A simple text response from an AI model may use up to ~1 watt-hour, high-resolution image generation can require up to ~3 watt-hours, and generating a five-second video may consume up to ~1,000 watt-hours.
To put this into perspective, MIT Technology Review describes a scenario where a marathon charity runner uses GenAI to ask 15 fundraising questions, create 10 flyer images, and generate three five-second Instagram videos, consuming ~2,900 watt-hours—roughly the energy needed to ride 160 kilometres on an e-bike, drive 16 kilometres in an electric car, or run a microwave for over three and a half hours.
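A back-of-the-envelope check of that scenario, using the rough upper-bound per-task figures cited above, lands in the same range as the ~2,900 watt-hours reported:

```python
# Back-of-the-envelope check of the charity-runner scenario, using the
# rough upper-bound per-task figures cited in the text.
WH_PER_TEXT = 1       # ~1 Wh per simple text response
WH_PER_IMAGE = 3      # ~3 Wh per high-resolution image
WH_PER_VIDEO = 1000   # ~1,000 Wh per five-second video

total_wh = 15 * WH_PER_TEXT + 10 * WH_PER_IMAGE + 3 * WH_PER_VIDEO
print(f"Total: ~{total_wh} Wh")  # on the order of the ~2,900 Wh cited
```

Notice that the three short videos dominate the total; the 15 text queries contribute less than one percent of it.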
What Is the Alternative?
Again, while these numbers may sound alarming—and it is important to acknowledge that they may not accurately reflect reality—one critical factor is often missing from the discussion: what is the alternative?
At present, GenAI use is surging, in part because it is new, and people, organizations, and industries are still exploring its capabilities and potential uses. Recall when ChatGPT introduced image generation earlier this year: OpenAI soon had to limit image requests, with Sam Altman half-joking that the GPUs were melting under the surge in demand. This spike is typical of any novel technology undergoing rapid, widespread adoption.
But importantly: if not GenAI, then what? Before GenAI, the marathon runner would likely have turned to Google Search for fundraising advice. How many searches would it have taken to gather comparable answers? How many websites loaded, videos watched, social media posts skimmed, or forum threads browsed? Each of those steps consumes energy.
As for the image and video materials, without GenAI, the runner might have hired a designer—involving web searches to find a vendor, back-and-forth emails, online searches for inspiration, and ultimately, the artist using their own energy-intensive design tools. Large media files would then be shared online, adding further to the total energy footprint.
Without detailed comparative studies, it remains uncertain whether GenAI increases overall energy consumption—but it is a question that deserves more careful investigation.
Why Is Nuclear Power Tied to GenAI?
Today’s Increasing Energy Demands
As global energy demand increases—particularly from computing, networking, and data systems—governments are under pressure to expand electricity production. Clean energy sources like wind, solar, hydro, geothermal, and nuclear are central to these efforts. Among them, nuclear stands out for its ability to deliver large-scale, consistent, low-carbon electricity. In fact, nuclear is the largest single low-carbon electricity source among advanced economies, and achieving climate targets will be significantly more difficult without expanded investment in nuclear infrastructure.
At the same time, many power grids are now integrating a diverse mix of renewable sources. But this diversity brings added complexity. Following a major blackout in the Iberian Peninsula in April 2025, some commentators blamed over-reliance on intermittent renewables. Grid experts pushed back, arguing that while renewables do add complexity, effective system design and management are the determining factors. Still, many point to nuclear power as a stabilizing force—capable of providing secure, dispatchable electricity to complement renewable-heavy grids.
GenAI’s Accelerating Influence
Alongside climate goals and energy security concerns, the GenAI boom has intensified calls for nuclear expansion. Data centres supporting GenAI require enormous, around-the-clock electricity—beyond what many grids can currently deliver.
In late 2024, Microsoft announced a 20-year deal to source power from the soon-to-reopen Three Mile Island nuclear plant to supply its growing fleet of data centres. Earlier this year, the Trump Administration announced the $500 billion Stargate AI Project, a partnership between OpenAI, Oracle, and SoftBank, aiming to build next-generation AI data centres—and actively exploring nuclear power to meet their vast energy needs.
Inside a small modular reactor facility (GPT-4o, modified)
The Rise of Small Modular Reactors
Amazon, too, has announced investments in small modular reactors (SMRs)—compact, lower-cost nuclear units that can be deployed closer to grids and brought online more quickly than traditional reactors. SMRs are seen as a promising solution to help meet the surging power demands of AI, cloud computing, and other innovation sectors.
Ontario, notably, is positioning itself as a leader in SMR development. The province recently approved the construction of four SMRs at the Darlington nuclear site, marking the first such project among G7 countries. This investment reflects Ontario’s strategy to prioritize electricity access for data centres, viewing the sector as a driver of economic growth, innovation, and job creation.
Ontario’s bet on SMRs reflects a broader trend: as data centres grow and GenAI expands, nuclear power is increasingly seen as part of the solution to future energy needs. This renewed interest may help shift perspectives shaped by past nuclear fears, but it also raises a critical question: how resilient are these systems?
What Does This Mean for Resilience?
Systems Under Strain
As GenAI accelerates and investments in next-generation energy sources like SMRs grow, we must look beyond innovation and ask whether these complex systems are being built to endure disruption. In a world increasingly shaped by climate shocks, geopolitical tension, and digital threats, resilience is not optional—it is essential.
Even as technology advances, overlapping crises underscore the need to design critical infrastructure—including the systems powering GenAI—with resilience at the core. SMRs are framed as a clean, stable energy solution, but their reliability is not assured. Research shows that climate-linked outages at nuclear plants have increased in recent years. While SMRs are promoted for advantages like passive cooling and flexible siting, they remain vulnerable to extreme weather, water shortages, and cyber risks. As we scale these systems, we need to ask: are they ready for the challenges ahead?
Streets overwhelmed by extreme rainfall and flash flooding (GPT-4o, modified)
Learning From Past Infrastructure Failures
I was camping with my parents along Lake Huron in the summer of 2003. On August 14, we spent the day at the beach, unaware that a massive blackout was unfolding across the region. The sun was setting as we packed up and hit the road, and we noticed something was not right—streetlights were dark, gas stations had closed, and traffic signals were not working. We later learned the power was out across much of Ontario and the northeastern US.
Sagging transmission lines—along with other seemingly minor human and technical errors—had triggered a chain reaction, shutting down more than 260 power plants, including several nuclear stations, and leaving 50 million people in the dark. Most areas regained power within days, but the blackout made one thing clear: we are deeply dependent on critical infrastructure.
As climate shocks, global instability, and digital threats grow, large-scale disruptions are becoming more likely. With GenAI advancing quickly—and the infrastructure behind it expanding just as fast—we need to ask: how resilient are these systems? If they fail, how fast can they recover? And what will it mean for society if the systems we rely on most are also among the ones most vulnerable to failure?