Velocinator
Engineering Metrics · 7 min read

Accelerate and DORA Metrics: The Research Behind Elite Software Delivery

April 9, 2026

The DORA metrics aren't a framework someone invented in a blog post. They're the output of six years of rigorous research involving tens of thousands of engineering professionals. Understanding where they came from helps explain why they're worth taking seriously.

The Research Program

In 2013, a team led by Nicole Forsgren (then a professor of information systems) began what would become the most comprehensive study of software delivery performance ever conducted. The goal: identify what actually predicts high performance in software teams, not what consultants assumed predicted it.

The annual State of DevOps Report published findings each year. By 2018, the findings were formalized in the book Accelerate: The Science of Lean Software and DevOps, co-authored by Forsgren, Jez Humble (Continuous Delivery), and Gene Kim (The Phoenix Project).

The DevOps Research and Assessment (DORA) team was later acquired by Google, which continues the annual research program today at dora.dev.

What the Research Found

The central finding was uncomfortable for engineering culture at the time: speed and stability are not in opposition. Elite teams shipped faster and had fewer failures. Low performers, by contrast, shipped slowly and had more failures.

The mechanism: high performers invest in automation, testing, and deployment safety practices. This makes shipping fast and low-risk simultaneously. Low performers under-invest, making each deploy a high-stakes event that teams avoid—leading to infrequent, large, risky releases.

This reversed the conventional wisdom that "move slow, don't break things" was the safe path.

The Four Metrics

The research identified four metrics that both measure software delivery performance and predict it. They're split across two dimensions: throughput and stability.

Throughput Metrics

Deployment Frequency — How often does your team deploy to production?

The research found elite performers deploy on-demand, multiple times per day. Low performers deploy between once a month and once every six months.

This is the clearest signal of whether your delivery pipeline is healthy. Infrequent deploys mean large batches, which means high risk and slow feedback. Teams that deploy frequently get faster feedback, smaller blast radii when things go wrong, and shorter times to value.
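As a concrete illustration, deployment frequency can be derived from nothing more than a list of production deploy timestamps. The sketch below is illustrative, not part of the DORA definition; the 30-day trailing window is our assumption:

```python
from datetime import datetime, timedelta

def deploys_per_day(deploy_times: list[datetime], window_days: int = 30) -> float:
    """Average production deploys per day over a trailing window.

    The window length is an illustrative choice; DORA surveys ask
    about typical frequency rather than prescribing a window.
    """
    if not deploy_times:
        return 0.0
    cutoff = max(deploy_times) - timedelta(days=window_days)
    recent = [t for t in deploy_times if t >= cutoff]
    return len(recent) / window_days
```

A team averaging one or more deploys per day is in the neighborhood of the elite "on-demand" band; a value like 0.03 (roughly monthly) sits in the low band.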

Lead Time for Changes — How long from code committed to code in production?

Elite: under one hour. Low: between one month and six months.

Lead time captures the full delivery pipeline: review, testing, CI/CD, and deployment. A long lead time means slow iteration, slow bug fixes, and a long gap between writing code and learning whether it works in production.
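Given (commit, deploy) timestamp pairs, lead time is just elapsed delivery time per change. A minimal sketch, assuming you can pair each commit with its production deploy; the median is used here because delivery-time distributions tend to be heavily skewed:

```python
from datetime import datetime, timedelta
from statistics import median

def lead_time_hours(changes: list[tuple[datetime, datetime]]) -> float:
    """Median hours from code committed to code in production.

    Each tuple is (committed_at, deployed_at). Raises StatisticsError
    if the list is empty.
    """
    deltas = [
        (deployed - committed).total_seconds() / 3600
        for committed, deployed in changes
    ]
    return median(deltas)
```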

Stability Metrics

Change Failure Rate — What percentage of deployments cause a production incident?

Elite: 0–15%. Low: 46–60%.

This is the quality signal. If most of your deployments go smoothly, you're shipping reliable software. If nearly half degrade the service, something is wrong with testing, review, or deployment practices.
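The arithmetic here is simple: failed deployments divided by total deployments. A minimal sketch, assuming you already label which deploys caused a production incident (that labeling is the hard part in practice):

```python
def change_failure_rate(total_deploys: int, failed_deploys: int) -> float:
    """Fraction of deployments that degraded the service.

    Returns 0.0 for zero deploys rather than dividing by zero.
    Elite teams land at or below 0.15 per the research bands.
    """
    if total_deploys == 0:
        return 0.0
    return failed_deploys / total_deploys
```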

Time to Restore Service (MTTR) — How long to recover from a production incident?

Elite: under one hour. Low: more than one week.

MTTR measures resilience. Failures happen to every team. What separates elite teams is how fast they recover. This reflects observability investment, runbook quality, on-call effectiveness, and deployment practices like feature flags and rollback capabilities.
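Given incident start and restore timestamps, time to restore is the elapsed outage duration. A minimal sketch, assuming your incident tracker records both timestamps; the mean matches the "M" in MTTR, though many teams report the median instead to blunt the effect of one long outage:

```python
from datetime import datetime, timedelta
from statistics import mean

def time_to_restore_hours(incidents: list[tuple[datetime, datetime]]) -> float:
    """Mean hours from incident start to service restored.

    Each tuple is (started_at, restored_at). Raises StatisticsError
    if the list is empty.
    """
    durations = [
        (restored - started).total_seconds() / 3600
        for started, restored in incidents
    ]
    return mean(durations)
```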

For a deep dive on MTTR specifically, see our MTTR and incident response guide.

The Cluster Finding

One of the most important findings in Accelerate: teams don't scatter across the performance spectrum. They cluster. The research consistently found roughly four performance tiers—Elite, High, Medium, and Low—with clear thresholds between them.

This means performance isn't a continuous variable you nudge gradually. Teams tend to be in one band and move (or not) to another. It also means the benchmarks are empirically grounded, not arbitrary targets.

What Predicts the Metrics

Accelerate didn't just identify the metrics—it identified what drives them. Key predictors of high DORA performance:

Technical practices:

  • Continuous integration (CI)
  • Trunk-based development (small, frequent merges to main)
  • Continuous delivery (every commit releasable)
  • Test automation
  • Deployment automation
  • Monitoring and observability

Cultural practices:

  • Westrum organizational culture (generative, not bureaucratic)
  • Blameless postmortems
  • Psychological safety to raise concerns

The finding that culture matters as much as tooling was significant. Teams with technically advanced pipelines but blame-heavy cultures underperformed teams with simpler pipelines and healthier cultures.

What the Annual DORA Reports Add

The research has continued every year since, expanding the dataset and refining the model. Notable additions:

  • 2019: Added "reliability" as a fifth metric (SLA/SLO achievement)
  • 2021: Introduced the DORA Quick Check benchmark tool
  • 2022: Added software supply chain security practices
  • 2023: Explored the impact of AI on developer productivity

The core four metrics have remained stable throughout, and they continue to be the most thoroughly validated and widely used framework for measuring software delivery performance.

Applying the Research

The practical implication of Accelerate is straightforward: measure your four DORA metrics, identify which band you're in, and invest in the practices that move teams between bands.

You can't improve what you don't measure. Most teams that start tracking DORA metrics find their initial numbers worse than expected—which is useful information, not a failure. The baseline is where improvement starts.
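The "identify which band you're in" step can be sketched as a rough check against the elite thresholds quoted earlier in this article. This simplified version only distinguishes elite from not-yet-elite and treats one-plus deploys per day as a proxy for "on-demand"; the actual DORA reports band each metric separately:

```python
def is_elite(deploys_per_day: float, lead_time_hours: float,
             restore_hours: float, change_failure_rate: float) -> bool:
    """True if all four metrics clear the elite bar from the research.

    Thresholds: on-demand deploys (proxied here as >= 1/day, an
    assumption), lead time < 1 hour, restore < 1 hour, CFR <= 15%.
    """
    return (
        deploys_per_day >= 1.0
        and lead_time_hours < 1.0
        and restore_hours < 1.0
        and change_failure_rate <= 0.15
    )
```

A team failing any one check knows exactly which practice area (throughput or stability) to invest in next.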

Velocinator automates DORA metric tracking by integrating with GitHub and Jira, calculating all four metrics continuously, and displaying them against the DORA benchmark bands from the research. Start with the DORA metrics overview if you're new to the framework, or the DORA metrics analyzer guide if you're evaluating tooling.

Frequently Asked Questions

What is the Accelerate book?
Accelerate: The Science of Lean Software and DevOps (2018) by Nicole Forsgren, Jez Humble, and Gene Kim presents the findings of the DORA research program. It identifies four key metrics—Deployment Frequency, Lead Time for Changes, MTTR, and Change Failure Rate—that predict software delivery performance and organizational outcomes.
Who created the DORA metrics?
The DORA metrics were developed by the DevOps Research and Assessment (DORA) team, primarily Nicole Forsgren (CEO and Chief Scientist), Jez Humble, and Gene Kim. The research ran from 2013 onward, and Google acquired the DORA team in 2018.
What does 'elite' performance mean in DORA research?
Elite performers deploy on-demand (multiple times per day), have lead times under one hour, restore service in under one hour, and maintain a change failure rate of 0–15%. The research found that elite teams are not just faster—they're also more stable, disproving the speed-vs-stability tradeoff assumption.

More in Engineering Metrics

Continue reading related articles from this category.


DORA Metrics with Atlassian: Tracking Delivery Performance in Jira

What Jira covers natively for DORA metrics, where it falls short, and how to get the full picture for your Atlassian-based engineering team.

April 9, 2026

DORA Metrics Analyzer: What It Is and How to Choose One

What a DORA metrics analyzer does, which features matter, and how to evaluate one for your engineering team.

April 9, 2026

Flow Efficiency: Finding the 'Dark Matter' in Your SDLC

Your tickets spend 80% of their lifecycle waiting. Here's how to find and eliminate those invisible delays.

February 4, 2026

Enjoyed this article?

Start measuring your own engineering velocity today.

Start Free Trial