Create a dashboard to visualize MR review metrics

Context

Part of MR Cycle Time Track: Data Observability (&16185 - closed).

Goals

  • Set up visibility into the developer experience in Merge Requests
  • Identify an ideal code review response time target

Expected Outcomes

  • All dashboards for the target and supporting metrics are created
  • We have a data-driven prioritization framework to improve Merge Request review time
  • We know what our ideal response time target is. This target will be used to plan the incentives and tooling needed to encourage reviews.

List of metrics

From &16028:

Target Metrics

  • Percentiles of total time spent waiting for reviews in MRs

Supporting Metrics

  • First review assignment time: from MR creation until the first review assignment
  • Reviewer first engagement time
  • Approval time: from review assignment to approval (ideally, for each review)
  • From last approval to "set to auto-merge"
  • From last approval to merge
  • Number of reviews per MR (only for reviews that used the MR review feature, ending in "request changes" or "approved")
  • Number of approval resets per MR
  • Time from approval to next review request
  • Number of MR reviews NOT using the MR review feature
  • Number of reviewers per region (AMER/EMEA/APAC)
    • Number of reviewers per timezone?
  • Number of maintainers per region (AMER/EMEA/APAC)
    • Number of maintainers per timezone?
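As a rough illustration of the target metric, the percentiles of review waiting time could be computed like this. The record shape (`created_at`, `first_review_at`) is a hypothetical stand-in, not the actual Snowflake schema:

```python
from datetime import datetime
from statistics import quantiles

# Hypothetical MR records; field names are illustrative only.
mrs = [
    {"created_at": datetime(2024, 1, 1, 9), "first_review_at": datetime(2024, 1, 1, 15)},
    {"created_at": datetime(2024, 1, 2, 9), "first_review_at": datetime(2024, 1, 3, 9)},
    {"created_at": datetime(2024, 1, 3, 9), "first_review_at": datetime(2024, 1, 3, 11)},
    {"created_at": datetime(2024, 1, 4, 9), "first_review_at": datetime(2024, 1, 6, 9)},
]

# Hours each MR spent waiting for its first review.
wait_hours = [
    (mr["first_review_at"] - mr["created_at"]).total_seconds() / 3600
    for mr in mrs
]

# Quartile cut points of the waiting time (n=4 yields p25/p50/p75).
p25, p50, p75 = quantiles(wait_hours, n=4)
print(f"p50 wait: {p50:.1f}h, p75 wait: {p75:.1f}h")
```

The same aggregation would run as a percentile query in Snowflake; this sketch only shows the shape of the metric.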

Dependencies / Blockers / Challenges

  1. As discussed in &16028 (comment 2255745743), it's currently hard to know when an MR review ended.
    • If people used the MR review feature, we can tell when a review ended when the reviewer chose "Approved" or "Request changes", but NOT when they only left comments. If the reviewer didn't use the MR review feature, we can only know the review ended if they approved the MR.
      • Proposal 1: Create an issue for the product team responsible for the MR review feature to add a "marker" when a reviewer finished a review but only left comments.
      • Proposal 2: Nudge/Educate Engineers to use the MR review feature (triage-ops notifications)
  2. We might discover that Snowflake doesn't have all the data we need (I did some crude POCs to verify we had the basics available)
  3. It would probably be really useful in the near future to have access to data about reviewers (i.e. the data present in https://gitlab-org.gitlab.io/gitlab-roulette/, but as a time series with historical data). This would help us understand review patterns and how we could improve review cycle time.
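The detection gap in point 1 can be summarized as a small predicate over review outcomes. The function name and the `outcome` values are hypothetical labels for illustration, not GitLab's actual data model:

```python
def review_end_detectable(used_review_feature: bool, outcome: str) -> bool:
    """Return True when the available data lets us pinpoint the end of a review.

    outcome: "approved", "requested_changes", or "commented" (illustrative labels).
    """
    if used_review_feature:
        # The MR review feature records an explicit end only for these outcomes.
        return outcome in ("approved", "requested_changes")
    # Without the review feature, only an approval marks a clear end.
    return outcome == "approved"

# Comment-only reviews are the blind spot motivating Proposals 1 and 2 above.
print(review_end_detectable(True, "commented"))   # undetectable today
print(review_end_detectable(True, "approved"))    # detectable
```

Proposal 1 would effectively make the `commented` case detectable when the review feature is used.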
Edited by David Dieulivol