DORA metrics

Four research-backed metrics that measure software delivery performance and team health.

DORA metrics — named after the DevOps Research and Assessment program — are four measures that have emerged from large-scale research as reliable signals of software delivery performance: deployment frequency, lead time for changes, change failure rate, and mean time to restore (MTTR).
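As a concrete illustration, all four metrics can be derived from two basic record types: deployments and incidents. The shapes below (`Deployment`, `Incident`, the `caused_failure` flag) are hypothetical, not part of any DORA standard — in practice these would come from your CI/CD system and incident tracker.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Deployment:
    committed_at: datetime   # when the change was committed
    deployed_at: datetime    # when it reached production
    caused_failure: bool     # whether it triggered an incident or rollback

@dataclass
class Incident:
    started_at: datetime     # when service degraded
    restored_at: datetime    # when service was restored

def hours(delta):
    """Express a timedelta in hours."""
    return delta.total_seconds() / 3600

def dora_metrics(deployments, incidents, window_days):
    """Compute the four DORA metrics over a reporting window."""
    return {
        # Deployment frequency: deploys per day across the window.
        "deploys_per_day": len(deployments) / window_days,
        # Lead time for changes: commit-to-production time, averaged.
        "lead_time_hours": sum(hours(d.deployed_at - d.committed_at)
                               for d in deployments) / len(deployments),
        # Change failure rate: fraction of deploys that caused a failure.
        "change_failure_rate": sum(d.caused_failure for d in deployments)
                               / len(deployments),
        # Mean time to restore: average incident duration.
        "mttr_hours": (sum(hours(i.restored_at - i.started_at)
                           for i in incidents) / len(incidents))
                      if incidents else 0.0,
    }

# Example: two deploys in a week, one of which caused a 90-minute incident.
deploys = [
    Deployment(datetime(2024, 1, 1, 9), datetime(2024, 1, 1, 11), False),
    Deployment(datetime(2024, 1, 2, 10), datetime(2024, 1, 2, 14), True),
]
incidents = [Incident(datetime(2024, 1, 2, 14), datetime(2024, 1, 2, 15, 30))]
metrics = dora_metrics(deploys, incidents, window_days=7)
```

Note the averages here are simple means; teams often prefer medians or percentiles for lead time and MTTR, since a single slow change or long outage can dominate a mean.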

What makes them useful is that they’re outcome-based rather than activity-based. They measure what actually matters — how fast you ship, how stable what you ship is, and how quickly you recover when something breaks — rather than proxies like lines of code written or tickets closed.

The research behind DORA consistently finds that high-performing teams aren’t trading stability for speed. They deploy more frequently and have lower failure rates. This challenges the intuition that moving faster means breaking more things. The teams that invest in the practices that enable rapid, safe delivery — automated testing, trunk-based development, deployment pipelines, good observability — get both.

The metrics are a diagnostic, not a target. Optimizing for the numbers directly produces the same problem as any Goodhart's-law trap: teams deploy trivial changes to inflate frequency, or suppress incident reports to keep MTTR low. The value is in using the metrics to identify systemic bottlenecks — long lead times usually point to batch size or review process issues; a high change failure rate usually points to insufficient testing or unclear ownership.

Used honestly, DORA metrics give engineering leaders a shared vocabulary for talking about delivery health without conflating it with output.


Ritesh Shrivastav