DORA Metrics
Four Key Indicators That Measure Software Delivery Performance
DORA (DevOps Research and Assessment) metrics emerged from years of research into software delivery, published in the annual State of DevOps reports, as some of the most reliable indicators of high-performing software teams. They help you measure and improve your delivery velocity, stability, and reliability.
Deployment Frequency
How often you ship to production
Lead Time for Changes
How fast code goes from commit to production
Change Failure Rate
How often deployments cause problems
Mean Time to Recovery
How fast you fix production issues
Deployment Frequency
How often your team successfully deploys code to production.
Why It Matters
- Indicates how quickly you can deliver value to users
- Higher frequency usually means smaller, safer changes
- Shows team's ability to release reliably
- Reflects automation maturity
How We Calculate It
CircleCI / GitHub Actions API
Count of Successful Production Deployments ÷ Time Period
1. Query CircleCI/GitHub for all pipelines on the main branch
2. Check if the prod workflow was executed
3. Count only workflows with status = "success"
4. Group by time period (day/week/month)
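The steps above can be sketched in Python. This is a minimal sketch, not the actual implementation: it assumes pipeline runs fetched from the CircleCI or GitHub Actions API have already been normalized into dicts with `status` and `finished_at` fields (those names are an assumption, not the real API payload), and it groups by ISO week; grouping by day or month is analogous.

```python
from collections import Counter
from datetime import date

def deployment_frequency(runs):
    """Count successful production deployments per ISO week.

    Each run is assumed to be a dict like:
      {"status": "success", "finished_at": "2024-05-06T10:00:00Z"}
    """
    counts = Counter()
    for run in runs:
        # Step 3: count only successful workflows.
        if run["status"] == "success":
            # Step 4: bucket by ISO week of the deployment date.
            day = date.fromisoformat(run["finished_at"][:10])
            iso = day.isocalendar()
            counts[f"{iso[0]}-W{iso[1]:02d}"] += 1
    return dict(counts)
```

Failed runs are excluded before bucketing, so a red deploy never inflates the frequency number.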
Performance Benchmarks
Lead Time for Changes
The time it takes for a code commit to reach production.
Why It Matters
- Shows how fast you can respond to customer needs
- Indicates efficiency of your CI/CD pipeline
- Reveals bottlenecks in your delivery process
- Lower lead time = faster feedback loops
How We Calculate It
GitHub API + CircleCI API
Production Deployment Time - Commit Time
1. Get commit timestamp from GitHub (when code was committed)
2. Get deployment timestamp from CircleCI (when it went to production)
3. Calculate the time difference in hours
4. Average across all deployments in the time period
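Steps 3 and 4 reduce to simple timestamp arithmetic. The sketch below assumes the commit and deployment timestamps have already been paired up from the GitHub and CircleCI APIs and arrive as ISO-8601 UTC strings; the pairing itself is the hard part and is not shown.

```python
from datetime import datetime

FMT = "%Y-%m-%dT%H:%M:%SZ"

def lead_time_hours(commit_ts: str, deploy_ts: str) -> float:
    """Hours from commit to production deployment for one change."""
    delta = datetime.strptime(deploy_ts, FMT) - datetime.strptime(commit_ts, FMT)
    return delta.total_seconds() / 3600

def mean_lead_time(pairs) -> float:
    """Average lead time over (commit_ts, deploy_ts) pairs in the period."""
    times = [lead_time_hours(c, d) for c, d in pairs]
    return sum(times) / len(times)
```

Averaging per-deployment lead times is one common convention; the DORA survey itself asks for the typical (median) lead time, so a median would be an equally defensible aggregate.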
Performance Benchmarks
Change Failure Rate
The percentage of production deployments that result in a failure, requiring a hotfix, rollback, or immediate remediation.
Why It Matters
- Indicates quality and reliability of releases
- Shows effectiveness of testing and quality gates
- Lower rate = more stable deployments
- Helps balance speed vs. quality
How We Calculate It
CircleCI API
(Failed Deployments ÷ Total Deployments) × 100
1. Count all production deployment attempts from CircleCI
2. Identify failed deployments (status = "failed")
3. Identify successful deployments (status = "success")
4. Calculate percentage
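The formula above can be sketched as a small pure function. As before, the dicts with a `status` field are an assumed normalization of the CircleCI API response; only terminal runs (success or failed) are counted as deployment attempts, which is a design choice, not something the API enforces.

```python
def change_failure_rate(runs) -> float:
    """Percentage of production deployment attempts that failed.

    Each run is assumed to be a dict like {"status": "failed"}.
    Runs that are still in progress or cancelled are ignored.
    """
    # Steps 1-3: count attempts and failures among terminal runs.
    total = sum(1 for r in runs if r["status"] in ("success", "failed"))
    failed = sum(1 for r in runs if r["status"] == "failed")
    # Step 4: (failed ÷ total) × 100, guarding against an empty period.
    return (failed / total) * 100 if total else 0.0
```

Note that a pipeline-level "failed" status captures deploys that broke in CI/CD, but not deploys that succeeded and then caused an incident; teams often supplement this with hotfix or rollback counts.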
Performance Benchmarks
Mean Time to Recovery (MTTR)
How long it takes to restore service when a production failure occurs.
Why It Matters
- Measures resilience and incident response
- Shows team's ability to handle problems quickly
- Critical for customer satisfaction
- Indicates monitoring and alerting effectiveness
How We Calculate It
Jira API
Average(Incident Resolution Time - Incident Creation Time)
1. Query Jira for high-priority bugs in production
2. Get creation and resolution timestamps for each bug
3. Calculate time difference in hours
4. Average across all incidents in the time period
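Once the incidents are pulled from Jira, the computation is the same average-of-durations as lead time. This sketch assumes each incident has been flattened into a dict with `created` and `resolved` ISO-8601 UTC timestamps (assumed field names, not Jira's actual JSON shape, which nests these under `fields`).

```python
from datetime import datetime

FMT = "%Y-%m-%dT%H:%M:%SZ"

def mttr_hours(incidents) -> float:
    """Mean hours from incident creation to resolution.

    Each incident is assumed to look like:
      {"created": "...Z", "resolved": "...Z"}
    """
    hours = []
    for inc in incidents:
        created = datetime.strptime(inc["created"], FMT)
        resolved = datetime.strptime(inc["resolved"], FMT)
        hours.append((resolved - created).total_seconds() / 3600)
    return sum(hours) / len(hours) if hours else 0.0
```

Using high-priority production bugs as a proxy for incidents means the metric is only as good as the team's triage discipline: unresolved or misprioritized tickets silently skew the average.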
Performance Benchmarks
Ready to Track Your Metrics?
Start measuring your team's performance and identify areas for improvement with data-driven insights.