If you look at the financial model behind most grid-scale battery storage investments, you will find a line somewhere that says something like “2% annual capacity degradation.” It is a clean number. It makes the spreadsheet easy to build. And it is, at best, a rough approximation of what actually happens inside the cells.

This matters because storage assets are increasingly being valued on the basis of long-duration revenue projections. A 20-year DCF for a 100MW battery system is extremely sensitive to assumptions about how much usable capacity remains in year 10 or year 15. Get the degradation curve wrong and you misvalue the asset. Sometimes significantly.
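To see how sensitive that DCF is, here is a deliberately minimal sketch. The revenue figure, discount rate, and fade rates are illustrative placeholders, not numbers from any real project; each year's revenue simply scales with remaining capacity.

```python
# Minimal capacity-weighted DCF sketch. All figures are illustrative
# assumptions, not data from a real project.

def npv(fade_per_year, annual_revenue=10e6, discount_rate=0.08, years=20):
    """NPV where each year's revenue scales with remaining capacity."""
    total, capacity = 0.0, 1.0
    for t in range(1, years + 1):
        capacity *= 1 - fade_per_year          # flat annual fade assumption
        total += annual_revenue * capacity / (1 + discount_rate) ** t
    return total

# A one-point change in the assumed fade rate moves the valuation materially:
print(f"{npv(0.02) / 1e6:.1f}m vs {npv(0.03) / 1e6:.1f}m")
```

Even in this toy version, nudging the fade assumption from 2% to 3% strips several million off the valuation, and the flat-percentage assumption is doing all the work.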

The problem with linear assumptions

Lithium-ion cells do not degrade linearly. The rate of capacity loss depends on temperature, depth of discharge, charge rate, calendar age, and the interaction between all of these. A cell cycled aggressively for frequency response will age differently to one doing a single daily arbitrage cycle. A system in the Middle East will age differently to one in Scotland, even with identical usage.

The degradation curve is typically steeper in the first year as the solid electrolyte interphase (SEI) layer forms, then flattens out for a period, before accelerating again as the cell approaches end of life. This “knee” in the degradation curve is where a lot of financial models fall apart. If your model assumes linear fade and the real curve has a knee at year 8, you are overvaluing the back half of the asset’s life.
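The shape matters more than the average rate. A toy comparison, with breakpoints and fade rates chosen purely to illustrate the knee (they are not measured data):

```python
# Toy comparison of a flat 2% fade against a curve with the shape
# described above: steep first year (SEI formation), flat middle,
# acceleration after a knee. All rates are illustrative assumptions.

def capacity_linear(year, fade=0.02):
    return max(0.0, 1.0 - fade * year)

def capacity_with_knee(year, knee_year=8):
    cap = 1.0
    for y in range(1, year + 1):
        if y == 1:
            cap -= 0.04      # SEI formation: steep first year
        elif y <= knee_year:
            cap -= 0.01      # flat middle of the curve
        else:
            cap -= 0.035     # accelerating fade past the knee
    return max(0.0, cap)

for year in (5, 10, 15):
    print(year, capacity_linear(year), round(capacity_with_knee(year), 3))
```

The two curves sit close together around year 10, which is why a linear fit can look plausible against early operating data; by year 15 the knee curve has fallen well below the linear one, which is exactly the back-of-life overvaluation described above.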

State of health is not a single number

The other issue is that “state of health” (SoH) as reported by a battery management system is not always what investors think it is. SoH can be defined in terms of capacity (how many ampere-hours the cell can still deliver) or in terms of resistance (how much internal impedance has increased). These two measures do not always move in lockstep.

A cell might retain 90% of its original capacity but have significantly increased internal resistance, which limits the power it can deliver at peak rates. For a storage asset whose revenue depends on responding to grid signals within milliseconds, the power capability matters as much as the energy capacity. A financial model that only tracks capacity fade will miss this.
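The divergence is easy to see with a toy internal-resistance model. The cell parameters below are illustrative, and the power limit is the simplest possible one: terminal voltage must not sag below a cut-off.

```python
# Toy illustration: capacity SoH and power capability can diverge.
# Cell parameters are illustrative assumptions.

def max_power_w(r_ohm, v_oc=3.6, v_min=2.8):
    """Peak discharge power before terminal voltage sags to v_min."""
    i_max = (v_oc - v_min) / r_ohm     # current limit set by IR drop
    return v_min * i_max

new_power = max_power_w(r_ohm=0.002)
aged_power = max_power_w(r_ohm=0.004)  # internal resistance has doubled

capacity_soh = 90 / 100                # cell still retains 90% of rated Ah
power_ratio = aged_power / new_power   # but only 50% of its peak power
print(capacity_soh, power_ratio)
```

In this sketch the cell reports a comfortable 90% capacity SoH while its peak deliverable power has halved, precisely the gap a capacity-only model cannot see.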

What better modelling looks like

There is no single correct degradation model, but there are better approaches than a flat annual percentage. Semi-empirical models that account for cycling depth, temperature, and C-rate can capture the shape of the degradation curve more accurately. Some operators are now using equivalent full cycle counting combined with cell-level telemetry data to build asset-specific degradation forecasts rather than relying on generic manufacturer warranties.
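As a sketch of what such a semi-empirical model might look like: every coefficient below is a placeholder that would, in practice, be fitted to cell-level telemetry rather than assumed.

```python
import math

# Semi-empirical fade sketch: cycling stress scaled by depth of
# discharge, C-rate, and temperature (Arrhenius-style), plus a
# calendar term. All coefficients are placeholder assumptions.

def cycle_fade(efc, dod, c_rate, temp_k,
               k_cyc=2e-4, ea_over_r=3000.0, t_ref_k=298.15):
    """Capacity fraction lost after `efc` equivalent full cycles."""
    temp_factor = math.exp(ea_over_r * (1 / t_ref_k - 1 / temp_k))
    stress = (dod ** 1.5) * (c_rate ** 0.5)
    return k_cyc * stress * temp_factor * math.sqrt(efc)

def calendar_fade(years, temp_k, k_cal=0.01,
                  ea_over_r=3000.0, t_ref_k=298.15):
    """Capacity fraction lost to calendar ageing alone."""
    temp_factor = math.exp(ea_over_r * (1 / t_ref_k - 1 / temp_k))
    return k_cal * temp_factor * math.sqrt(years)

# Same daily arbitrage duty cycle, two climates: the hot site ages faster.
cool_site = cycle_fade(efc=365, dod=0.9, c_rate=0.25, temp_k=288.15)
hot_site = cycle_fade(efc=365, dod=0.9, c_rate=0.25, temp_k=313.15)
print(cool_site, hot_site)
```

The point of the structure, not the numbers: the same duty cycle produces different fade in different climates, which a flat annual percentage cannot express.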

From a valuation perspective, the key is sensitivity analysis. If your base case assumes 2% linear fade, what happens to the IRR if degradation follows an exponential curve with a knee at year 8? What if the knee comes at year 6? These scenarios are not unlikely, and the difference in NPV can be material.
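A minimal version of that sensitivity check, with illustrative revenue, discount rate, and fade parameters:

```python
# Sensitivity sketch: NPV of the same revenue stream under a flat 2%
# fade vs a fade curve with a knee at year 8 or year 6. All figures
# are illustrative assumptions.

def capacity_path(year, knee_year=None):
    if knee_year is None:
        return max(0.0, 1.0 - 0.02 * year)        # flat 2% fade
    cap = 1.0
    for y in range(1, year + 1):
        cap -= 0.015 if y <= knee_year else 0.04  # acceleration past the knee
    return max(0.0, cap)

def scenario_npv(knee_year=None, revenue=10e6, rate=0.08, years=20):
    return sum(revenue * capacity_path(t, knee_year) / (1 + rate) ** t
               for t in range(1, years + 1))

base = scenario_npv()
for knee in (8, 6):
    delta = (scenario_npv(knee) - base) / 1e6
    print(f"knee at year {knee}: NPV delta {delta:+.1f}m")
```

Moving the knee from year 8 to year 6 widens the NPV gap further, even though the knee curve actually sits above the linear one in the early years: the losses are concentrated in the back half of the asset's life.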

Why this matters now

As the grid-scale storage market matures, assets are being traded on secondary markets. Buyers need to understand what they are actually purchasing, and that means looking beyond nameplate capacity and warranty documents. The gap between a naive degradation assumption and a physics-informed one can represent millions of pounds in mispriced value over a 20-year asset life.

For anyone working in energy finance, understanding the engineering behind degradation is not optional. It is where the edge is.