The electricity consumption of AI workloads has become impossible to ignore. A single large language model training run can consume hundreds of megawatt-hours, and the largest runs reach into the gigawatt-hour range. Inference at scale, while less intense per query, adds up quickly when multiplied across millions of daily requests. The result is a new class of electricity consumer that is growing faster than almost anything the grid has seen before.
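To put rough numbers on the inference side, here is a back-of-envelope sketch in Python. The per-query energy and the daily request volume are purely illustrative assumptions, not measured figures:

```python
# Back-of-envelope: how inference energy adds up at scale.
# Both numbers below are illustrative assumptions, not measurements.

ENERGY_PER_QUERY_WH = 0.3      # assumed energy per inference request (Wh)
QUERIES_PER_DAY = 50_000_000   # assumed daily request volume

daily_mwh = ENERGY_PER_QUERY_WH * QUERIES_PER_DAY / 1e6  # Wh -> MWh
annual_gwh = daily_mwh * 365 / 1000                      # MWh -> GWh

print(f"Daily inference energy:  {daily_mwh:,.0f} MWh")
print(f"Annual inference energy: {annual_gwh:,.1f} GWh")
```

Even with a modest assumed figure per query, the annual total lands in the gigawatt-hour range, which is the point: steady inference volume behaves like a new baseload consumer.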
This has obvious implications for generation capacity and grid planning. But the less-discussed angle is what it means for energy storage, and specifically for the commercial case behind grid-scale batteries.
The load profile problem
Traditional datacenters draw relatively flat load. Servers run around the clock and cooling systems modulate with outdoor temperature, but the overall demand curve is predictable and steady. Grid operators like flat load because it is easy to plan around.
AI workloads are different. Training runs create bursty, high-power demand that can ramp up and down over hours or days as jobs start and finish. GPU clusters operating at full utilisation draw significantly more power than idle ones, and the ratio between peak and idle is larger than with conventional compute. This creates a load shape that looks less like a traditional datacenter and more like an industrial process with variable duty cycles.
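A toy sketch of what that load shape looks like over a day. The cluster size, idle fraction, and job schedule are all illustrative assumptions:

```python
# Sketch: a synthetic AI-training load profile vs a flat baseline.
# Cluster size, idle fraction, and job schedule are assumptions.

CLUSTER_PEAK_MW = 40.0   # assumed full-utilisation draw of a GPU cluster
IDLE_FRACTION = 0.3      # assumed idle draw as a fraction of peak

# One value per hour: training jobs run 02:00-09:00 and 14:00-22:00.
training_hours = set(range(2, 9)) | set(range(14, 22))
load_mw = [
    CLUSTER_PEAK_MW if h in training_hours else CLUSTER_PEAK_MW * IDLE_FRACTION
    for h in range(24)
]

avg = sum(load_mw) / len(load_mw)
print(f"Peak load:          {max(load_mw):.1f} MW")
print(f"Average load:       {avg:.1f} MW")
print(f"Peak/average ratio: {max(load_mw) / avg:.2f}")
```

A flat datacenter has a peak-to-average ratio close to 1; the bursty profile above sits well above that, which is exactly the kind of shape batteries are built to flatten.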
For grid operators, this variability is a headache. For battery storage developers, it is an opportunity.
Behind the meter or in front of it
There are two ways storage can play into this. The first is behind-the-meter systems co-located with the datacenter itself. These can shave demand peaks, reduce exposure to demand charges (which are based on the highest 15- or 30-minute power draw in a billing period), and provide local power quality support. For a hyperscaler paying industrial electricity rates, shaving even a few megawatts off the billed peak can translate into substantial annual savings.
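As a worked example of what peak shaving is worth, here is the arithmetic under an assumed tariff. Both the demand charge rate and the shaved capacity are hypothetical:

```python
# Sketch: annual savings from battery peak shaving under an assumed tariff.
# The tariff rate and the shaved megawatts are hypothetical.

DEMAND_CHARGE_PER_KW_MONTH = 15.0  # assumed demand charge (GBP/kW per month)
PEAK_SHAVED_MW = 3.0               # assumed reduction in billed peak demand

monthly_saving = PEAK_SHAVED_MW * 1000 * DEMAND_CHARGE_PER_KW_MONTH
annual_saving = monthly_saving * 12

print(f"Monthly saving: GBP {monthly_saving:,.0f}")
print(f"Annual saving:  GBP {annual_saving:,.0f}")
```

Under these assumptions, shaving 3 MW off the billed peak is worth over half a million pounds a year, before counting any other revenue the battery can stack on top.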
The second is grid-side storage that helps manage the upstream impact of these new loads. As AI datacenters cluster in specific regions (often near cheap power or fibre interconnects), they can create localised grid stress. Storage deployed at the transmission or distribution level can help manage congestion and defer expensive network reinforcement.
Both of these use cases strengthen the revenue case for battery storage, but they require different commercial structures and different technical specifications.
The interconnection bottleneck
Perhaps the most immediate impact of AI datacenter growth on the storage market is the competition for grid connections. In the UK and across Europe, the queue for new grid connections is already years long. Large datacenters are now competing with renewable generation and storage projects for the same limited transmission capacity.
This has a couple of effects. It increases the value of existing grid connections, which benefits storage projects that are already in the queue or operational. It also creates an incentive for co-location, where a storage asset and a datacenter share a grid connection and the storage system helps manage the combined load to stay within the connection’s rated capacity.
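The core of the co-location arrangement is a simple dispatch rule: discharge the battery when the combined load would exceed the connection limit, and recharge when there is headroom. A minimal sketch, with illustrative parameters:

```python
# Sketch of co-location dispatch: hold grid import within the shared
# connection's rated capacity. All parameters are illustrative.

CONNECTION_LIMIT_MW = 50.0
BATTERY_POWER_MW = 10.0
BATTERY_ENERGY_MWH = 20.0

def dispatch(load_mw: float, soc_mwh: float, dt_h: float = 0.25):
    """Return (grid_import_mw, new_soc_mwh) for one settlement interval."""
    if load_mw > CONNECTION_LIMIT_MW:
        # Discharge to hold grid import at the connection limit, bounded
        # by the battery's power rating and remaining stored energy.
        discharge = min(load_mw - CONNECTION_LIMIT_MW, BATTERY_POWER_MW,
                        soc_mwh / dt_h)
        return load_mw - discharge, soc_mwh - discharge * dt_h
    # Headroom below the limit: recharge without breaching it.
    charge = min(CONNECTION_LIMIT_MW - load_mw, BATTERY_POWER_MW,
                 (BATTERY_ENERGY_MWH - soc_mwh) / dt_h)
    return load_mw + charge, soc_mwh + charge * dt_h

# Example: a 15-minute interval where the datacenter draws 56 MW.
grid_mw, soc = dispatch(load_mw=56.0, soc_mwh=12.0)
print(f"Grid import: {grid_mw:.1f} MW, state of charge: {soc:.2f} MWh")
```

The commercial subtlety is in who bears the risk when the battery is depleted mid-peak, which is why these deals need careful contractual structuring rather than just a control algorithm.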
Not a simple bullish story
It is tempting to see AI datacenter growth as straightforwardly good news for battery storage. More demand, more variability, more need for flexibility. And at a high level, that is true. But the details matter.
AI operators care about reliability above almost everything else. A training run interrupted by a power event can lose days of compute. This means that behind-the-meter storage at AI facilities needs to meet very high availability and response time standards. The technical bar is higher than for a standard commercial behind-the-meter installation.
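To give a sense of what the availability bar means in practice, here is the conversion from availability targets to permitted downtime per year. The specific targets are illustrative, not figures AI facilities are known to contract to:

```python
# Quick arithmetic: what an availability target means in downtime per year.
availability_targets = [0.999, 0.9999, 0.99999]
for a in availability_targets:
    downtime_min = (1 - a) * 365 * 24 * 60
    print(f"{a * 100:.3f}% availability -> "
          f"{downtime_min:,.1f} minutes of downtime per year")
```

Moving from three nines to five nines cuts the permitted downtime from roughly nine hours a year to about five minutes, and the storage system has to be engineered accordingly.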
There is also the question of duration. Most grid-scale battery systems today are configured for one to two hours of storage. AI datacenter applications may need different configurations depending on whether the use case is peak shaving (short duration, high power) or backup and ride-through (longer duration, moderate power). This pushes the market towards a wider range of storage products rather than the one-size-fits-all approach that has dominated so far.
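The sizing arithmetic is straightforward: energy capacity is power times duration, so the two use cases imply quite different batteries. A sketch with assumed ratings:

```python
# Sketch: energy capacity implied by the two use cases above.
# Power ratings and durations are illustrative assumptions.

use_cases = {
    "peak shaving (short duration, high power)": (20.0, 0.5),  # MW, hours
    "backup / ride-through (longer duration)":   (8.0, 4.0),   # MW, hours
}

for name, (power_mw, duration_h) in use_cases.items():
    energy_mwh = power_mw * duration_h
    print(f"{name}: {power_mw} MW x {duration_h} h = {energy_mwh} MWh")
```

The high-power system needs a fraction of the energy but several times the power electronics, which is why a single standard product struggles to serve both roles.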
Where the opportunity sits
The intersection of AI infrastructure and energy storage is still in its early stages. Most of the analysis I see treats them as separate markets. But the companies and investors who understand both the power engineering and the commercial dynamics are going to be best positioned as these two sectors converge.
Understanding the load characteristics of GPU clusters, the economics of grid connections, and the technical requirements for high-reliability storage is not a common combination of knowledge. That is exactly what makes it valuable.