Power Generation, Transmission & Distribution 2025

Last Updated July 17, 2025

USA – Washington

Trends and Developments


Authors



Kilpatrick Townsend & Stockton understands the legal and business objectives of its clients both in traditional and in renewable energy sectors. Whether requiring assistance with project finance, seeking energy investment opportunities or evaluating energy procurement strategies, Kilpatrick is equipped to meet clients’ needs in the energy sector. The firm collaborates with clients who shape the future of clean energy, including in production, storage solutions, sustainable fuels and carbon recapture. Simultaneously, it engages in the oil and gas sector, acknowledging its essential role in the transition to renewable energy. Kilpatrick’s approach is a blend of strategic thinking, meticulous research and innovative problem-solving, all aimed at delivering exceptional results. It serves a wide range of clients across the industry, including organisations developing or financing innovative projects in solar, biomass, wind, clean fuels, nuclear and energy storage. The firm offers not just legal practitioners but also partners in its clients’ pursuit of a sustainable and prosperous future.

AI Data Centres and the Looming Energy Crisis in the United States

The accelerated proliferation of artificial intelligence (AI) technologies has ushered in a new era of innovation and productivity across various sectors. However, alongside these transformative advancements lies a growing challenge – the rapidly increasing energy demand stemming from AI data centres. These facilities, which power large language models (LLMs) and generative AI applications, are becoming major consumers of electricity in the United States and are threatening to outpace the country’s generation capacity, its grid infrastructure and national and state policy capabilities.

Use case example

Consider a prompt to prepare a graphic for a presentation: “Hello AI model, I would like to generate an image of a data centre showing a layout of servers.” According to recent studies, the energy required for this simple task can range from 0.01 to 0.29 kilowatt-hours (kWh), depending on the AI model. To contextualise, this is broadly comparable to charging a smartphone from empty to full.

Text-based queries exhibit similar patterns. For instance, prompting a generative AI model with “AI model, tell me how much energy AI uses” consumes far more energy than a typical online search. While a Google search uses about 0.3 watt-hours, generative AI models consume ten to 30 times more per prompt. This highlights the substantial increase in power needed for advanced AI computation.

The scale of AI usage is staggering. More than 30 million new AI-generated images are created and roughly 3.5 billion searches are performed daily, rapidly escalating electricity demand. In addition to operational use, training these models also requires significant energy. For example, training the GPT-3 model is estimated to have used about 1,300 megawatt-hours (MWh) of electricity in a month. For the more complex GPT-4, energy requirements are expected to be far greater, with estimates suggesting a hardware footprint roughly 20 times larger and a training run spanning about three months. These figures underscore the growing environmental and infrastructural challenges of large-scale AI.
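
The arithmetic behind these comparisons can be made explicit. The short Python sketch below uses only the figures quoted above (about 0.3 watt-hours per conventional search, ten to 30 times that per generative AI prompt, roughly 1,300 MWh for a GPT-3-scale training run, and around 3.5 billion daily searches); the household consumption figure is an added assumption for scale, so the output is indicative only.

```python
# Illustrative back-of-the-envelope arithmetic using the figures quoted in the text.
# All inputs are cited estimates or labelled assumptions, not measured values.

SEARCH_WH = 0.3                 # ~0.3 watt-hours per conventional web search
AI_MULTIPLIER_RANGE = (10, 30)  # generative AI prompt: 10-30x a search
DAILY_QUERIES = 3.5e9           # ~3.5 billion searches per day (cited above)
GPT3_TRAINING_MWH = 1_300       # estimated energy to train GPT-3

# Per-prompt energy for a generative AI query, in watt-hours
ai_wh_low, ai_wh_high = (SEARCH_WH * m for m in AI_MULTIPLIER_RANGE)
print(f"Per AI prompt: {ai_wh_low:.0f}-{ai_wh_high:.0f} Wh vs {SEARCH_WH} Wh per search")

# If every daily search were instead served by a generative model
daily_low_gwh = DAILY_QUERIES * ai_wh_low / 1e9    # Wh -> GWh
daily_high_gwh = DAILY_QUERIES * ai_wh_high / 1e9
print(f"Hypothetical daily load: {daily_low_gwh:,.0f}-{daily_high_gwh:,.0f} GWh")

# Training energy in household-years (assuming ~10,700 kWh per US home per year)
US_HOME_KWH_PER_YEAR = 10_700   # assumption for illustration only
homes = GPT3_TRAINING_MWH * 1_000 / US_HOME_KWH_PER_YEAR
print(f"GPT-3 training ~= annual consumption of about {homes:.0f} US homes")
```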

AI models fall into “generic” and “specialised” categories. Generic models, like LLMs, are designed for a wide range of tasks, typically using graphics processing units (GPUs), optimised for large data volumes. Specialised models, by contrast, are tailored for specific tasks and can be more efficient, running on GPUs or customised chipsets. Generic models use much more energy than specialised ones – often ten to 50 times more – during both training and deployment.

As both types of AI models become more prevalent, the demand for electricity driven by GPUs and specialised hardware continues to increase, raising important considerations for energy infrastructure, regulatory compliance and policy.

The rapid rise of AI-driven energy demand and electricity consumption trends

As of 2023, US data centres consumed approximately 4.4% of the country’s total electricity production. Projections by the Department of Energy (DOE) suggest this figure could rise to between 6.7% and 12% by 2028, primarily driven by the computing intensity of AI workloads. LLMs, natural language processing engines, and other generative AI tools require vast amounts of data and continuous high-performance processing, contributing heavily to this growth.

Viewed historically, energy consumption by data centres in the United States has more than doubled in five years, from 76 terawatt-hours (TWh) of electricity in 2018 to more than 176 TWh in 2023 (a 131.6% increase). Looking forwards, AI-related electricity demand could grow to more than 580 TWh by 2028. This reflects a significant shift and challenge in the country’s electricity production, transmission grid and consumption landscape, as well as in the energy planning and policies of state, regional and national policy makers (for perspective, global data centre electricity usage in 2022 was about 460 TWh and was projected to exceed 1,000 TWh by 2026, more than doubling the 2022 figure).
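
The figures above can be cross-checked with simple arithmetic. The sketch below reproduces the quoted percentage increase and backs out the total US electricity production implied by the 4.4% share; the projected 2028 demand is the upper end of the DOE-cited range, so the output is only as reliable as those inputs.

```python
# Cross-checking the data centre consumption figures quoted in the text.

twh_prior, twh_2023 = 76.0, 176.0         # TWh figures cited above
increase_pct = (twh_2023 - twh_prior) / twh_prior * 100
print(f"Increase: {increase_pct:.1f}%")   # ~131.6%

share_2023 = 0.044                        # data centres ~4.4% of US electricity in 2023
implied_total_twh = twh_2023 / share_2023
print(f"Implied total US production: ~{implied_total_twh:,.0f} TWh")  # ~4,000 TWh

projected_2028_twh = 580.0                # upper end of the DOE-cited range
print(f"2028 upper-end demand is {projected_2028_twh / twh_2023:.1f}x the 2023 level")
# Note: the cited 6.7%-12% share range for 2028 assumes total US production
# also grows; dividing 580 TWh by today's implied total would overstate the share.
```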

Efficiency plateau and cooling requirements

For more than a decade, data centre operators have relied on advances in server performance, virtualisation, workload optimisation and cooling technologies to meet growing demand without a proportional rise in energy consumption. Innovations in chip design – such as improved instruction sets, energy-efficient central processing units (CPUs) and GPUs, and dynamic workload scaling – allowed data centres to manage increasingly complex tasks while maintaining a stable power profile. Similarly, improvements in infrastructure – such as high-density server configurations, airflow management, and the adoption of hot and cold aisle containment systems – helped reduce waste and maximise thermal efficiency.

Cooling technologies have also evolved. Traditional air-cooling systems, which rely on ambient air and fans to maintain safe operating temperatures, were gradually supplemented or replaced by more efficient systems, including liquid cooling, immersion cooling and evaporative systems. Among these, water cooling emerged as a leading solution, offering better thermal conductivity than air and the ability to cool high-density servers more effectively, especially in AI workloads that generate concentrated heat over sustained periods.

However, these efficiency improvements have plateaued since 2020. The underlying reason is the exponential increase in computational intensity introduced by AI. AI training, particularly for LLMs and generative networks, involves processing massive datasets through complex neural networks. These models require custom hardware accelerators, such as NVIDIA’s B100 (Blackwell) GPUs or Google’s TPU v4 clusters, which can draw several kilowatts per server rack. As AI workloads scale, traditional gains from Moore’s Law and energy-saving algorithms are increasingly insufficient to offset the power draw.
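
To see why accelerator density overwhelms incremental efficiency gains, the sketch below scales an assumed per-rack draw up to a campus-level load. The per-rack power, rack count and power usage effectiveness (PUE) values are illustrative assumptions, not vendor or operator figures.

```python
# Illustrative scaling from rack-level power draw to facility-level load.
# Per-rack draw, rack count and PUE are assumptions for illustration only.

KW_PER_AI_RACK = 40.0   # assumed draw of a dense AI training rack (kW)
RACKS = 5_000           # assumed number of racks in a large AI campus
PUE = 1.3               # assumed power usage effectiveness (cooling and overhead)

it_load_mw = KW_PER_AI_RACK * RACKS / 1_000
facility_load_mw = it_load_mw * PUE
annual_twh = facility_load_mw * 8_760 / 1e6   # MW x hours/year -> TWh

print(f"IT load: {it_load_mw:.0f} MW; with overhead: {facility_load_mw:.0f} MW")
print(f"Running continuously: ~{annual_twh:.2f} TWh per year")
```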

The associated thermal loads from AI operations have surpassed the capabilities of many legacy cooling systems, prompting a rapid transition to liquid-based solutions. Water cooling, which uses chilled water circulated through pipes, plates or direct-to-chip systems, provides far greater heat removal capacity. However, it also raises significant environmental and operational challenges. Hence, water usage in data centres has become a major concern in arid regions such as the southwestern United States, where competing demands from agriculture, residential use and industry already stress local water supplies. For example, some hyperscale data centres can consume millions of gallons of water per day for cooling, especially during peak load conditions or in older facilities without closed-loop systems. Furthermore, sourcing, treating and discharging large volumes of water require infrastructure that not all regions possess or can sustainably support.
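
The “millions of gallons per day” figure can be related to facility load through water usage effectiveness (WUE), expressed as litres of water consumed per kWh of IT energy. The load and WUE values in the sketch below are assumptions chosen only to show the order of magnitude; actual consumption varies widely with climate, cooling design and whether a closed-loop system is used.

```python
# Relating cooling water consumption to facility load via water usage
# effectiveness (WUE). Load and WUE values are illustrative assumptions.

IT_LOAD_MW = 300.0           # assumed hyperscale IT load
WUE_L_PER_KWH = 1.8          # assumed litres of water per kWh of IT energy
LITRES_PER_US_GALLON = 3.785

daily_it_kwh = IT_LOAD_MW * 1_000 * 24
daily_water_litres = daily_it_kwh * WUE_L_PER_KWH
daily_water_gallons = daily_water_litres / LITRES_PER_US_GALLON

print(f"Daily IT energy: {daily_it_kwh:,.0f} kWh")
print(f"Estimated cooling water: ~{daily_water_gallons / 1e6:.1f} million gallons per day")
```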

To address these issues, some operators are investing in waterless cooling technologies, such as direct-to-chip liquid cooling, refrigerant-based cooling and phase-change materials, which offer improved sustainability. Additionally, AI-driven facility management and digital twins have been deployed to fine-tune energy usage and thermal flows in real time. Despite these innovations, the balance between performance, energy efficiency and environmental sustainability remains a concern.

Thus, while historical efficiency gains in server hardware and cooling enabled data centre expansion without parallel growth in energy use, the rise of AI has disrupted this equilibrium. Meeting future AI demands will require a new generation of energy- and water-efficient technologies, proactive policy frameworks, and sustainable infrastructure planning to avoid exacerbating ecological and utility grid pressures.

Regional impacts and grid stress

The rapid expansion of AI, particularly through data centres housing powerful computing clusters, is placing an unprecedented load on the United States’ electrical grid, a system originally designed for a far less energy-intensive economy. These demands are not only increasing the total amount of electricity required but are also altering when and where that electricity is needed, exacerbating existing strain points across the grid.

The United States’ power grid is essentially composed of three major interconnections – Eastern, Western and Texas (ERCOT) – which were largely constructed in the mid-20th century and have seen only incremental upgrades in the decades since. While these systems have served traditional industrial and residential needs well, they were not engineered to support the explosive, concentrated energy draw characteristic of modern AI data centres. Unlike conventional load growth, which occurs gradually and is geographically dispersed, AI-related energy demands are sudden, large-scale and often localised, overwhelming local distribution systems and creating severe capacity shortfalls.

One of the most pressing challenges lies in the limited capacity of existing transmission lines. High-voltage transmission corridors are essential for moving electricity from concentrated generation assets such as nuclear, gas or coal-fired plants as well as remote renewable energy sources (such as wind farms in the Midwest or solar plants in the Southwest) to the urban centres where AI data centres are often located. However, developing new transmission lines can take 15 to 30 years (depending on the jurisdiction), due to complex permitting, environmental reviews, land-use disputes, and state and federal regulatory hurdles across state lines. These delays are particularly problematic when technology infrastructure expands on much shorter timelines – months or a few years – creating a misalignment between energy supply infrastructure and digital demand growth.

Regionally, the vulnerabilities are even more stark. For instance, in California – despite it being a leader in tech and clean energy – grid congestion and substantial permitting delays prevent the swift deployment of new transmission lines and substations. High electricity prices and frequent strain alerts from the California Independent System Operator (CAISO) are becoming common, particularly during heatwaves or peak AI training cycles.

The electrification of other sectors, such as transportation and heating, adds further stress to the grid, leading to compounding demand pressures just as utilities are being asked to decarbonise and modernise simultaneously. The grid’s growing complexity also introduces reliability risks, including potential blackouts, frequency instability and reduced resilience to extreme weather events – issues that are magnified when large data centres operate with tight uptime requirements.

Thus, AI is not merely consuming more electricity but is doing so in a way that exposes systemic weaknesses in the US energy grid. To meet this challenge, the United States (and other countries and regions) must accelerate investment in grid modernisation, inter-regional transmission and flexible energy resources, while rethinking regulatory frameworks to enable faster alignment between technological growth and infrastructure development.

The Great Migration of AI Data Infrastructure: From Coastal Hubs to Interior States

As the energy-intensive demands of AI infrastructure continue to surge, major technology firms are increasingly shifting their focus away from traditional data centre hubs on the West Coast. West Coast states, despite their technological dominance, are grappling with deep-rooted infrastructure, regulatory and environmental constraints. In response, states such as Nevada, Texas and parts of the Southeast and Midwest are emerging as new epicentres for AI-driven data infrastructure. This migration reflects a broader realignment of digital infrastructure priorities, balancing land availability, energy accessibility, regulatory simplicity and economic incentives.

Why AI Firms Are Moving Away From the West Coast

Permitting bottlenecks and regulatory overhead

In states such as California, the process for siting, permitting and constructing new data centres or transmission infrastructure is notoriously slow. Environmental regulations (eg, California’s CEQA), community opposition, multi-agency review processes, and litigation can delay projects by five to ten years or more. In the fast-paced world of AI, where training models and scaling compute resources often move on a quarterly or annual basis, such delays are untenable.

Electric grid saturation

The West Coast energy grids are already operating under considerable strain. AI data centres require stable, continuous and high-volume electricity, often 50 to 100 MW per site, and sometimes much more. California, in particular, faces transmission congestion, limited interconnection capacity and frequent power reliability issues, especially during heatwaves; it also faces limited capacity to expand renewable or firm generation, while long-term utility financial constraints limit the pace of grid upgrades.

Cost and scarcity of energy resources

In California, retail electricity rates are among the highest in the USA, exacerbated by wildfire liability costs, transmission build-outs and climate resiliency fees. As firms seek low-cost, scalable electricity, these high-cost regions are becoming less attractive.

Why Nevada, Texas and Other Interior States Are Attracting AI Infrastructure

Abundant land and lower costs

States such as Nevada and Texas offer vast tracts of undeveloped or industrial land suitable for data centre campuses. Land costs are significantly lower than in urban California or the Puget Sound area, and local governments often offer tax incentives or zoning fast-tracks to attract investment. This enables hyperscalers and colocation firms to build massive data centre footprints quickly.

Favourable energy markets and grid flexibility

Texas, with its independently operated grid (ERCOT), has become a magnet for energy-intensive industries, including crypto mining, semiconductor fabrication, and now AI. While ERCOT has its own vulnerabilities (notably during winter storms), it features competitive wholesale energy pricing, a deregulated retail market and rapid interconnection timelines. Texas also leads the nation in installed wind power capacity and is rapidly scaling solar generation, allowing firms to access renewable energy at scale. AI data centres often require a blend of renewable and dispatchable power, something that Texas offers with its diverse generation mix (wind, solar, battery storage and natural gas).

Nevada, while smaller in grid capacity, benefits from ample solar resources, access to regional transmission through NV Energy, and proximity to California markets, without the same permitting restrictions. Nevada has also become a logistics and tech hub, with proximity to Reno and Las Vegas offering both workforce and connectivity advantages.

Faster build timelines

States such as Texas and Nevada generally have fewer permitting layers, more predictable timelines and more business-friendly environments. For companies needing to deploy data infrastructure within 12 to 24 months, these states offer a competitive edge. Moreover, by-right zoning, utility co-operation and state-level co-ordination offices (such as the Texas Economic Development Council) help streamline project delivery.

Strategic proximity and connectivity

Many of these new sites are strategically located near major fibre backbones, highways and airports, ensuring high-speed data transfer and operational logistics. Locations such as Austin, Dallas-Fort Worth, Phoenix and Las Vegas are quickly developing into Tier-1 data centre markets, rivalling more established regions in connectivity and infrastructure support.

A decentralising AI landscape

The geographic shift in AI infrastructure development represents more than a cost-saving manoeuvre; it reflects a strategic repositioning of the digital economy to align with grid realities, permitting landscapes and regional growth potential. As AI continues to evolve and demand even more energy and computational density, the United States will see continued decentralisation of data centre investment, away from legacy tech hubs and into regions that offer the flexibility, scalability and speed that AI infrastructure now demands. However, this shift also raises questions about energy equity, grid resilience, resource allocation and local environmental impacts – issues that will need to be addressed through co-ordinated state and federal policy frameworks as the digital infrastructure map of America continues to evolve.

National Implications: Infrastructure and Reliability Risks

The regional issues discussed above reflect a broader national challenge – the United States’ power infrastructure is not evolving quickly enough to meet the rising energy demands of AI. Without strategic investments in grid modernisation, renewable integration and inter-regional transmission capacity, the country faces serious risks, including:

  • power reliability issues, particularly during peak demand periods;
  • escalating electricity prices as scarcity drives up market costs; and
  • delays in AI adoption and innovation if energy access becomes a limiting factor.

The Limits of Renewable Energy and the Ongoing Role of Fossil Fuels in AI-Powered Grid Demands

As AI data centres proliferate across the United States, technology firms are increasingly investing in renewable energy assets to meet sustainability goals, mitigate reputational risk and ensure long-term electricity affordability. These investments include solar photovoltaic farms, onshore wind generation (the near-term outlook for which is currently uncertain) and battery energy storage systems (BESS). Many firms, such as Google, Amazon, Microsoft and Meta, have entered into large-scale power purchase agreements (PPAs) for renewable energy, often in combination with direct investments in generation infrastructure. However, the reality on the ground is that, despite these efforts, fossil fuels (particularly natural gas) remain indispensable to ensuring the stability and reliability of the electric grid, especially as AI-related electricity demand becomes more intensive and continuous.

Renewables – growing, but intermittent

While the costs of solar and wind power have plummeted in the past decade, their inherent intermittency presents a major challenge for grid operators. Solar generation is only available during daylight hours, and its output can fluctuate with cloud cover and seasonal variation. Wind power is even less predictable, with output depending on regional wind patterns that may not align with periods of peak demand. This variability becomes critical when serving AI data centres, which require round-the-clock, high-density and reliable power to run training models, maintain uptime and meet service-level agreements (SLAs). Even the most sophisticated load balancing and grid forecasting technologies cannot fully compensate for sudden drops in solar or wind output, making backup or firming power a non-negotiable necessity.
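
The intermittency problem can be quantified with capacity factors. The sketch below estimates how much renewable nameplate capacity would be needed, on an annual-energy basis alone, to match a constant data centre load; the capacity factor values are illustrative assumptions, and the calculation deliberately ignores the hour-by-hour mismatch that storage or firm generation must cover.

```python
# Annual-energy view of serving a constant load from intermittent generation.
# Capacity factors are illustrative assumptions; hourly mismatch is ignored.

LOAD_MW = 100.0               # constant data centre load
HOURS_PER_YEAR = 8_760
capacity_factor = {"solar": 0.25, "onshore wind": 0.35}   # assumed annual averages

annual_load_mwh = LOAD_MW * HOURS_PER_YEAR
for tech, cf in capacity_factor.items():
    nameplate_mw = LOAD_MW / cf
    print(f"{tech}: ~{nameplate_mw:.0f} MW of nameplate capacity to supply "
          f"{annual_load_mwh:,.0f} MWh/yr on an energy basis")

# Even with that much capacity, output at night or in calm weather can be near
# zero, which is why firming capacity or long-duration storage is still needed.
```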

BESS – promising, but not yet a replacement for gas

Battery energy storage systems (BESS), such as those using lithium-ion or newer chemistries like iron flow or sodium-ion, are being widely deployed to store excess renewable energy and release it during periods of low generation or high demand. However, today’s commercial BESS installations generally provide between two and six hours of discharge capacity, with utility-scale projects sometimes reaching eight hours or more. While this duration is sufficient for short-term frequency regulation and load shifting, it falls short of the multi-day reliability required during extended periods of low renewable output or severe weather events. Moreover, the cost and material constraints of large-scale battery systems (including lithium, cobalt and nickel) limit how quickly and widely they can be deployed as a full substitute for fossil-fuel backup.
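
Storage duration is simply energy capacity divided by discharge power. The sketch below, using assumed system sizes, shows why a typical four-hour battery cannot bridge a multi-day lull in renewable output.

```python
# Storage duration (hours) = energy capacity (MWh) / discharge power (MW).
# System sizes and the length of the lull are assumptions for illustration.

systems = {
    "typical utility-scale BESS": {"power_mw": 100, "energy_mwh": 400},    # 4-hour
    "long-duration system":       {"power_mw": 100, "energy_mwh": 1_000},  # 10-hour
}
LULL_HOURS = 72   # assumed three-day period of low wind and solar output

for name, s in systems.items():
    duration_h = s["energy_mwh"] / s["power_mw"]
    coverage_pct = duration_h / LULL_HOURS * 100
    print(f"{name}: {duration_h:.0f} h of discharge at {s['power_mw']} MW "
          f"covers ~{coverage_pct:.0f}% of a {LULL_HOURS}-hour lull")
```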

Natural gas – the current backbone of grid reliability

Owing to the aforementioned limitations, natural gas remains the most practical and responsive firm power source in most US grids. Gas turbines and combined-cycle plants can be ramped up quickly to meet spikes in demand, making them essential for filling the gaps left by renewables. This flexibility is especially crucial in regions with high data centre concentrations, where load increases may be sudden and substantial. In many states, peaker plants (gas-fired facilities designed to operate during periods of peak electricity use) are still the last line of defence against brownouts and blackouts. They are also relatively cost-effective compared to the capital-intensive and longer-developing alternatives such as nuclear or pumped hydro.

Nuclear and hydropower – stable, but regionally limited

Nuclear energy and large hydroelectric systems also play critical roles in providing baseload, carbon-free electricity. However, their scalability is highly constrained. Nuclear plants are politically and economically controversial: while they offer excellent reliability and zero carbon emissions, public opposition, high upfront costs, regulatory complexity and long construction timelines (ten or more years) have stymied new development in most of the country. Small modular reactors (SMRs) offer future potential, but they are still in pilot stages and years away from commercial readiness. Hydropower, for its part, is already largely built out in many regions, leaving little room for significant new capacity.

Grid integration challenges

Even as renewable generation expands, the lack of adequate grid infrastructure to integrate and deliver their output – especially over long distances – is another limiting factor. Many renewable projects are in remote areas, far from urban demand centres and hyperscale data facilities. Without major investments in long-distance transmission lines, this clean energy cannot be effectively utilised where it is needed most. Moreover, energy storage, demand response and distributed energy resources (DERs) still require further integration and regulatory standardisation to contribute meaningfully to grid stability on a national scale.

Federal and State Policy Responses and Regulatory Developments

Recognising the magnitude of the issue, the Department of Energy (DOE) and the Federal Energy Regulatory Commission (FERC) are actively working to develop policies that:

  • incentivise clean energy deployment to meet the needs of AI and other data centres;
  • promote the build-out of transmission infrastructure, particularly to connect remote renewable resources to urban demand centres;
  • encourage public-private partnerships towards grid modernisation; and
  • advance energy efficiency research to offset rising demand (notwithstanding the Trump administration’s efforts to curtail the DOE’s and other agencies’ priorities in this area).

These policies are still in development, and their successful implementation will be critical to ensuring that AI innovation does not come at the cost of national grid stability or climate commitments.

Charting a Path Forwards in the Age of AI and Energy Transformation

The transformative potential of AI is no longer speculative; it is an active force reshaping industry, economies, and the way people live and work. From personalised medicine and autonomous vehicles to generative language models and real-time decision-making systems, AI is rapidly becoming the backbone of 21st-century progress. However, this transformation carries significant and immediate infrastructure consequences, particularly with respect to energy systems that were never designed to support the massive, continuous and geographically concentrated power demands of AI workloads.

The United States now finds itself at a critical inflection point. On one side lies a wave of innovation that promises historic gains in productivity, creativity and economic growth. On the other is a power grid that, while robust by historical standards, is increasingly strained, fragmented and outdated, vulnerable to both sudden demand surges and climate-induced disruptions. The collision between AI-driven digital expansion and lagging energy infrastructure has already begun, and if left unaddressed the consequences will ripple across multiple domains.

Risks of inaction

Without timely and strategic intervention, the continued unchecked growth of AI data centres could: 

  • overload transmission systems, especially in regions where power generation is distant from population and industrial centres;
  • drive up electricity prices for consumers and small businesses as utilities scramble to meet demand through high-cost, short-term solutions;
  • complicate decarbonisation goals by increasing reliance on fossil fuels to provide firm power and stabilise the grid;
  • reduce reliability, with an increased risk of rolling blackouts, service interruptions and grid instability, particularly during extreme weather events; and
  • undermine public equity, as the benefits of AI are captured by large private entities while the costs (financial and environmental) are socialised.

The success of the AI revolution depends not only on technological breakthroughs but on energy infrastructure that can sustain it, both affordably and responsibly.

Required strategic investments and innovations

This moment demands a multifaceted and proactive approach, built around investment, policy reform and innovation. Key priorities should include:

  • accelerating grid modernisation, which means upgrading aging transmission and distribution infrastructure, expanding interregional connectivity, and adopting smart grid technologies to handle dynamic load patterns;
  • expanding energy storage and load flexibility by investing in large-scale, long-duration energy storage solutions that can supplement renewables and reduce dependence on fossil fuels;
  • encouraging AI data centre operators to implement demand response and time-shifting strategies to align workloads with renewable availability (a simple scheduling sketch follows this list);
  • encouraging DERs, including decentralised generation models such as rooftop solar, microgrids and community energy systems, that reduce pressure on centralised infrastructure;
  • advancing forecasting and AI-enabled grid management that leverages AI itself to optimise grid operations, forecast load and generation, and pre-emptively address congestion or reliability risks;
  • promoting policy and regulatory alignment by streamlining permitting processes for new transmission lines and generation assets, modernising interconnection rules, and ensuring that energy planning frameworks reflect the scale and urgency of digital demand growth;
  • promoting public-private partnerships that foster collaboration between tech companies, utilities and governments, to co-invest in sustainable infrastructure and ensure that benefits are equitably distributed; and
  • addressing the sustainability and water-energy nexus by designing data centres with efficient cooling technologies and minimal water footprints, particularly in water-stressed regions.
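
To make the demand response and time-shifting priority above concrete, the sketch below shows a minimal greedy scheduler that places deferrable AI training jobs into the hours with the highest forecast renewable output. The forecast values, job names and durations are entirely hypothetical; this is an illustration of the concept, not a description of any operator’s actual system.

```python
# Minimal sketch of time-shifting deferrable AI workloads towards hours with
# high forecast renewable output. Forecast values and jobs are hypothetical.

from typing import List, Tuple

def schedule_jobs(renewable_forecast_mw: List[float],
                  jobs: List[Tuple[str, int]]) -> dict:
    """Greedily assign each job (name, hours_needed) to the free hours with
    the highest forecast renewable output."""
    ranked_hours = sorted(range(len(renewable_forecast_mw)),
                          key=lambda h: renewable_forecast_mw[h], reverse=True)
    free = set(ranked_hours)
    assignment = {}
    # Place the longest jobs first so they get the best contiguous supply.
    for name, hours_needed in sorted(jobs, key=lambda j: j[1], reverse=True):
        chosen = [h for h in ranked_hours if h in free][:hours_needed]
        assignment[name] = sorted(chosen)
        free -= set(chosen)
    return assignment

# 24-hour solar-heavy forecast (MW available to the site), purely illustrative.
forecast = [0, 0, 0, 0, 0, 5, 20, 45, 70, 90, 105, 115,
            120, 115, 100, 80, 55, 30, 10, 0, 0, 0, 0, 0]
jobs = [("checkpoint-eval", 2), ("fine-tune-run", 6), ("batch-inference", 4)]

for job, hours in schedule_jobs(forecast, jobs).items():
    print(f"{job}: run during hours {hours}")
```

In practice an operator would also weigh deadlines, grid price signals and contractual uptime commitments, but even this simple heuristic shows how flexible workloads can soak up renewable generation that would otherwise be curtailed.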

Collaborative action will be necessary as no single actor – be it government, industry or civil society – can address these challenges alone. The major AI players must be transparent about their energy usage, prioritise siting in regions where grid capacity is available or expandable, and contribute to local infrastructure and climate resilience. Utilities and energy planners must anticipate AI-driven growth in their load forecasting and resource planning. Policymakers must ensure that climate, economic development and technology strategies are integrated rather than siloed. 

Kilpatrick Townsend & Stockton LLP

1420 Fifth Avenue
Suite 3700
Seattle, WA 98101
USA

+1 206 626 7726

+1 206 374 8224

JFPierce@ktslaw.com
ktslaw.com
