DORA Metrics

Four Key Indicators That Measure Software Delivery Performance

DORA (DevOps Research and Assessment) metrics emerged from years of research as the most reliable indicators of software delivery performance. They help you measure and improve your team's delivery speed, stability, and reliability.

  • Deployment Frequency: How often you ship to production
  • Lead Time for Changes: How fast code goes from commit to production
  • Change Failure Rate: How often deployments cause problems
  • Mean Time to Recovery: How fast you fix production issues

Deployment Frequency

How often your team successfully deploys code to production.

Why It Matters

  • Indicates how quickly you can deliver value to users
  • Higher frequency usually means smaller, safer changes
  • Shows team's ability to release reliably
  • Reflects automation maturity

How We Calculate It

Data Source:

CircleCI / GitHub Actions API

Formula:

Count of Successful Production Deployments ÷ Time Period

Process (see the sketch below):
  1. Query CircleCI/GitHub for all pipelines on the main branch
  2. Check whether the prod workflow was executed
  3. Count only workflows with status = "success"
  4. Group by time period (day/week/month)
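
A minimal sketch of this query in Python, assuming a hypothetical repository acme/shop whose production deploys run through a GitHub Actions workflow file named deploy-prod.yml (both names are placeholders). It uses the GitHub REST API's list-workflow-runs endpoint and groups successful runs by ISO week:

```python
# Sketch: deployment frequency from the GitHub Actions API.
# REPO and WORKFLOW are hypothetical placeholders for your own setup.
from collections import Counter
from datetime import datetime

import requests

GITHUB_TOKEN = "ghp_..."          # token with read access to the repo
REPO = "acme/shop"                # hypothetical owner/repo
WORKFLOW = "deploy-prod.yml"      # hypothetical prod workflow file

def successful_prod_runs(since: str, until: str) -> list[dict]:
    """Fetch successful runs of the prod workflow on main in a date range."""
    runs, page = [], 1
    while True:
        resp = requests.get(
            f"https://api.github.com/repos/{REPO}/actions/workflows/{WORKFLOW}/runs",
            headers={"Authorization": f"Bearer {GITHUB_TOKEN}"},
            params={
                "branch": "main",
                "status": "success",             # count only successful deploys
                "created": f"{since}..{until}",  # ISO date-range filter
                "per_page": 100,
                "page": page,
            },
            timeout=30,
        )
        resp.raise_for_status()
        batch = resp.json()["workflow_runs"]
        runs.extend(batch)
        if len(batch) < 100:
            return runs
        page += 1

def deployments_per_week(runs: list[dict]) -> Counter:
    """Group successful deployments by ISO week: count ÷ time period."""
    weeks = Counter()
    for run in runs:
        created = datetime.fromisoformat(run["created_at"].replace("Z", "+00:00"))
        year, week, _ = created.isocalendar()
        weeks[f"{year}-W{week:02d}"] += 1
    return weeks

if __name__ == "__main__":
    runs = successful_prod_runs("2024-01-01", "2024-03-31")
    for week, count in sorted(deployments_per_week(runs).items()):
        print(week, count)
```

Because each bucket is one week, the per-bucket count is already the deployments-per-week rate; swap the grouping key for day or month as needed.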

Performance Benchmarks

  • 🏆 Elite: Multiple deployments per day
  • ⭐ High: Once per day to once per week
  • 📈 Medium: Once per week to once per month
  • 📊 Low: Less than once per month

Lead Time for Changes

The time it takes for a code commit to reach production.

Why It Matters

  • Shows how fast you can respond to customer needs
  • Indicates efficiency of your CI/CD pipeline
  • Reveals bottlenecks in your delivery process
  • Lower lead time = faster feedback loops

How We Calculate It

Data Source:

GitHub API + CircleCI API

Formula:

Production Deployment Time - Commit Time

Process (see the sketch below):
  1. Get commit timestamp from GitHub (when code was committed)
  2. Get deployment timestamp from CircleCI (when it went to production)
  3. Calculate the time difference in hours
  4. Average across all deployments in the time period
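
A minimal sketch of the lead-time calculation, assuming you have already collected (commit SHA, deployment timestamp) pairs from your CI provider; the repository name, token, and SHAs below are placeholders. Commit timestamps come from the GitHub commits endpoint:

```python
# Sketch: lead time for changes = deployment time - commit time, averaged.
from datetime import datetime, timezone
from statistics import mean

import requests

GITHUB_TOKEN = "ghp_..."
REPO = "acme/shop"  # hypothetical owner/repo

def commit_time(sha: str) -> datetime:
    """Commit timestamp from the GitHub commits endpoint."""
    resp = requests.get(
        f"https://api.github.com/repos/{REPO}/commits/{sha}",
        headers={"Authorization": f"Bearer {GITHUB_TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()
    iso = resp.json()["commit"]["committer"]["date"]  # e.g. "2024-03-01T12:00:00Z"
    return datetime.fromisoformat(iso.replace("Z", "+00:00"))

def mean_lead_time_hours(deployments: list[tuple[str, datetime]]) -> float:
    """Average of (deployment time - commit time) in hours."""
    lead_times = [
        (deployed_at - commit_time(sha)).total_seconds() / 3600
        for sha, deployed_at in deployments
    ]
    return mean(lead_times)

# Example with hypothetical SHAs and deployment timestamps from CI:
deploys = [
    ("3f2a1bc", datetime(2024, 3, 1, 18, 30, tzinfo=timezone.utc)),
    ("9d4e7fa", datetime(2024, 3, 2, 9, 15, tzinfo=timezone.utc)),
]
print(f"Lead time: {mean_lead_time_hours(deploys):.1f} h")
```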

Performance Benchmarks

  • 🏆 Elite: Less than one day
  • ⭐ High: One day to one week
  • 📈 Medium: One week to one month
  • 📊 Low: More than one month

Change Failure Rate

The percentage of production deployments that result in a failure requiring a hotfix, rollback, or other immediate remediation.

Why It Matters

  • Indicates quality and reliability of releases
  • Shows effectiveness of testing and quality gates
  • Lower rate = more stable deployments
  • Helps balance speed vs. quality

How We Calculate It

Data Source:

CircleCI API

Formula:

(Failed Deployments ÷ Total Deployments) × 100

Process (see the sketch below):
  1. Count all production deployment attempts from CircleCI
  2. Identify failed deployments (status = "failed")
  3. Identify successful deployments (status = "success")
  4. Calculate percentage
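
A minimal sketch against the CircleCI v2 API, assuming a hypothetical project slug gh/acme/shop and a production workflow named "prod" as in the process above; adapt both to your own project:

```python
# Sketch: change failure rate from CircleCI v2 workflow statuses.
import requests

CIRCLE_TOKEN = "..."             # CircleCI personal API token
PROJECT_SLUG = "gh/acme/shop"    # hypothetical {vcs}/{org}/{repo} slug
HEADERS = {"Circle-Token": CIRCLE_TOKEN}
API = "https://circleci.com/api/v2"

def prod_workflow_statuses(limit: int = 100) -> list[str]:
    """Collect the status of each 'prod' workflow on recent main pipelines."""
    pipelines = requests.get(
        f"{API}/project/{PROJECT_SLUG}/pipeline",
        headers=HEADERS, params={"branch": "main"}, timeout=30,
    ).json()["items"][:limit]

    statuses = []
    for pipeline in pipelines:
        workflows = requests.get(
            f"{API}/pipeline/{pipeline['id']}/workflow",
            headers=HEADERS, timeout=30,
        ).json()["items"]
        statuses += [w["status"] for w in workflows if w["name"] == "prod"]
    return statuses

def change_failure_rate(statuses: list[str]) -> float:
    """(failed deployments ÷ total deployments) × 100."""
    # Only actual attempts count toward the denominator; on-hold or
    # canceled workflows are excluded.
    attempts = [s for s in statuses if s in ("success", "failed")]
    if not attempts:
        return 0.0
    return 100 * attempts.count("failed") / len(attempts)

print(f"Change failure rate: {change_failure_rate(prod_workflow_statuses()):.1f}%")
```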

Performance Benchmarks

  • 🏆 Elite: 0-5% failure rate
  • ⭐ High: 5-15% failure rate
  • 📈 Medium: 15-30% failure rate
  • 📊 Low: More than 30% failure rate

Mean Time to Recovery (MTTR)

How long it takes to restore service when a production failure occurs.

Why It Matters

  • Measures resilience and incident response
  • Shows team's ability to handle problems quickly
  • Critical for customer satisfaction
  • Indicates monitoring and alerting effectiveness

How We Calculate It

Data Source:

Jira API

Formula:

Average(Incident Resolution Time - Incident Creation Time)

Process (see the sketch below):
  1. Query Jira for high-priority bugs in production
  2. Get creation and resolution timestamps for each bug
  3. Calculate time difference in hours
  4. Average across all incidents in the time period
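
A minimal sketch against the Jira Cloud REST API, assuming production incidents are filed as high-priority bugs in a hypothetical OPS project; the site URL, credentials, and JQL are placeholders you would adapt:

```python
# Sketch: MTTR = average(resolution time - creation time) from Jira.
from datetime import datetime
from statistics import mean

import requests

JIRA_URL = "https://acme.atlassian.net"  # hypothetical site
AUTH = ("you@acme.com", "api-token")     # email + Jira API token
JQL = "project = OPS AND type = Bug AND priority = High AND resolution IS NOT EMPTY"

def parse_jira_ts(ts: str) -> datetime:
    """Jira timestamps look like 2024-03-01T12:00:00.000+0000."""
    return datetime.strptime(ts, "%Y-%m-%dT%H:%M:%S.%f%z")

def mttr_hours() -> float:
    resp = requests.get(
        f"{JIRA_URL}/rest/api/3/search",
        auth=AUTH,
        params={"jql": JQL, "fields": "created,resolutiondate", "maxResults": 100},
        timeout=30,
    )
    resp.raise_for_status()
    hours = []
    for issue in resp.json()["issues"]:
        fields = issue["fields"]
        delta = parse_jira_ts(fields["resolutiondate"]) - parse_jira_ts(fields["created"])
        hours.append(delta.total_seconds() / 3600)
    return mean(hours)

print(f"MTTR: {mttr_hours():.1f} h")
```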

Performance Benchmarks

  • 🏆 Elite: Less than 1 hour
  • ⭐ High: 1 hour to 1 day
  • 📈 Medium: 1 day to 1 week
  • 📊 Low: More than 1 week

Ready to Track Your Metrics?

Start measuring your team's performance and identify areas for improvement with data-driven insights.
