Measure DORA 4 Metrics in GitLab
Problem to solve
Customer experience is becoming a key metric. Users want not just to measure platform stability and other performance KPIs post-deployment, but also to set targets for customer behavior, experience, and financial impact. Tracking and measuring these indicators after deployment solves an important pain point. Similarly, creating views that are oriented around products, not projects or repos, will give users a more relevant set of data.
There are four main metrics that the industry is currently talking about:
Deployment frequency - how often does code get pushed to production (how many times a day/week/month)
Lead time for changes - how long does it take for code to be committed and reach production
Time to restore service - how long does it generally take to restore service when a service incident or a defect that impacts users occurs (can be rollback or time to solve a specific bug)
Change failure rate - what percentage of changes to production or released to users result in degraded service (generally requiring a rollback or hotfix/patch)
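As a rough illustration of how the four metrics reduce to arithmetic over event timestamps, here is a minimal sketch. The function names and input shapes (lists of timestamps and timestamp pairs) are assumptions for the example, not GitLab's data model:

```python
from datetime import datetime, timedelta

# Illustrative only: the four DORA metrics computed from plain timestamp
# data. The data shapes below are assumptions, not GitLab's actual schema.

def deployment_frequency(deploy_times, window_days):
    """Deployments per day: production deployments / days observed."""
    return len(deploy_times) / window_days

def lead_time_for_changes(pairs):
    """Mean time from commit to its production deployment.
    `pairs` is a list of (commit_time, deploy_time) tuples."""
    deltas = [deploy - commit for commit, deploy in pairs]
    return sum(deltas, timedelta()) / len(deltas)

def time_to_restore(incidents):
    """Mean time from incident start to service restoration.
    `incidents` is a list of (started_at, resolved_at) tuples."""
    deltas = [end - start for start, end in incidents]
    return sum(deltas, timedelta()) / len(deltas)

def change_failure_rate(total_changes, failed_changes):
    """Fraction of production changes that degraded service."""
    return failed_changes / total_changes

t0 = datetime(2020, 6, 1, 9, 0)
print(deployment_frequency([t0, t0, t0], window_days=7))        # ~0.43 deploys/day
print(lead_time_for_changes([(t0, t0 + timedelta(hours=2))]))   # 2:00:00
print(change_failure_rate(total_changes=20, failed_changes=3))  # 0.15
```

The open questions for GitLab are mostly about which events feed these inputs (which deployments count as "production", which issues count as "failures"), not the arithmetic itself.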
In the future, the dashboards will be consolidated under the analytics dashboard as follows:
We should provide a dashboard that shows these metrics and allows users to drill down to understand, for example, which specific issue caused an outage or a long delay to production.
The dashboard shall be called:
The dashboard should be a sub-dashboard under the existing metrics dashboard, but should also get a menu option under Operations that opens the same sub-menu.
We will place the new charts under:
The first iteration will include only:
- Deployment frequency - how often does code get pushed to production (how many times a day/week/month)
Permissions and Security
Draft documentation from @djensen:
What is DORA?
DORA stands for "DevOps Research and Assessment". DORA is a research organization that has been studying and reporting on DevOps best practices since 2013. It was acquired by Google in 2018. Its current home is within Google Cloud DevOps.
What are DORA metrics?
The phrase "DORA metrics" refers to DORA's 4 "core metrics — commit-to-prod lead time, deployment frequency, change failure rate and mean time to restore" (TheNewStack).
- Throughput performance metrics
- Deployment Frequency (DF). Defines how often your organization deploys code to production. Elite performers deploy on-demand multiple times a day. [Measured with Deployments to Production]
- Lead Time for Changes / Mean Lead Time (MLT). Defines how long it takes for a code commit to be deployed to production. Elite performers have a lead time of less than an hour. [Measured with time from commit/merge to deployment to Production]
- Stability performance metrics
- Time to restore service / Mean Time To Recover (MTTR). Defines how long it takes to restore the service after a service incident occurred. Elite performers restore service in less than an hour. 
- Change Failure Rate (CFR). Defines what percentage of changes result either in degraded service or subsequently require remediation (e.g. leads to impairment or outage, requires hotfix, rollback, fix forward). Elite performers have a change failure rate between 0% and 15%."
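The "elite performer" thresholds quoted above can be expressed as a simple check. This is a sketch under stated assumptions: "on-demand multiple times a day" is simplified to at least one deploy per day, and the function name and inputs are illustrative, not a published DORA rubric:

```python
from datetime import timedelta

# Sketch: classify a team as "elite" using the thresholds quoted above.
# Simplification: "deploys on-demand multiple times a day" is approximated
# as an average of at least one production deploy per day.
def is_elite(deploys_per_day, lead_time, mttr, change_failure_rate):
    return (
        deploys_per_day >= 1
        and lead_time < timedelta(hours=1)          # commit-to-prod under an hour
        and mttr < timedelta(hours=1)               # restore service under an hour
        and change_failure_rate <= 0.15             # CFR between 0% and 15%
    )

print(is_elite(3, timedelta(minutes=40), timedelta(minutes=25), 0.10))  # True
```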
These metrics were popularized by the founder of DORA in her book Accelerate: The Science of Lean Software and DevOps: Building and Scaling High Performing Technology Organizations (Amazon).
Why care about DORA metrics?
"Many EDs mentioned DORA standards." - 12.5 Analytics Research
"DORA metrics are a result of [at least] six years worth of surveys conducted by the DevOps Research and Assessments (DORA) team ... These metrics guide determine how successful a company is at DevOps". CloudBees
"Compared to teams in the low-performing group, these [DORA metric] elite software teams:
- Execute 208 times as many code deployments
- Maintain lead times, from commit to deploy, that are 106 times faster
- Report change failure rates that are 7 times lower
- Recover from change failures 2,604 times faster
As important as these advantages are, here’s an even more impressive metric: elite performers are twice as likely as low-performing teams to meet or exceed their organizational performance goals." NewRelic
How to measure DORA metrics in GitLab?
As noted by ThoughtWorks, "A good place to start is to instrument the build pipelines so you can capture the four key metrics and make the software delivery value stream visible. GoCD pipelines, for example, provide the ability to measure these four key metrics as a first-class citizen of the GoCD analytics." Here is how GitLab measures the 4 DORA core metrics:
- Deployment Frequency (DF).
- Measure: Number of Deployments to production Environment per day.
- Lead Time for Changes / Mean Lead Time (MLT).
- Measure: Average time between "commit added to default branch" and "commit deployed to production Environment"
- Question: Exclude commits "internal" to an MR? For example, if an MR has 3 commits, should we only consider the last commit (whether ordinary or merge commit)?
- Time to restore service / Mean Time To Recover (MTTR).
- Measure: ?
- Change Failure Rate (CFR).
- Measure: Number of "bug" tickets versus other Issues?
- Question: What about bugs that are fixed without an Issue being created?
- Question: How do we identify "bug" issues? Ask the user to submit a label?
- Question: Bugs generally lag features. Should we measure "this month's bugs" against "last month's non-bugs"?
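For the first-iteration metric (Deployment Frequency), the raw data is already available from GitLab's Deployments API (`GET /projects/:id/deployments`). The sketch below counts successful deployments to a production environment per day from that response shape; the environment name "production" is an assumption the user may need to configure:

```python
from collections import Counter
from datetime import datetime

# Sketch: count successful production deployments per day from the JSON
# returned by GitLab's Deployments API (GET /projects/:id/deployments).
# Assumption: the production environment is literally named "production".
def deployments_per_day(deployments, environment="production"):
    counts = Counter()
    for d in deployments:
        if d["status"] != "success":
            continue
        if d["environment"]["name"] != environment:
            continue
        # API timestamps are ISO 8601 with a trailing "Z"
        day = datetime.fromisoformat(d["created_at"].replace("Z", "+00:00")).date()
        counts[day] += 1
    return counts

sample = [
    {"status": "success", "environment": {"name": "production"},
     "created_at": "2020-06-01T10:15:00Z"},
    {"status": "success", "environment": {"name": "production"},
     "created_at": "2020-06-01T16:40:00Z"},
    {"status": "failed", "environment": {"name": "production"},
     "created_at": "2020-06-02T09:00:00Z"},
]
print(deployments_per_day(sample))  # one day with 2 successful deployments
```

The same response includes the associated deployable job and SHA, which is a plausible starting point for the Lead Time for Changes measure as well.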