Collect 3 MR performance inputs in the handbook
Collect these three inputs on MR performance on a handbook page dedicated to MR performance, explaining how to read and interpret them for better decision making. Together, these inputs build a holistic picture of how users perceive MR performance.
🤖 Machine-reported
- GitLab only: We focus on the Largest Contentful Paint (LCP) metric, but should also track these other important metrics. Existing resources we can link to:
  - Performance Overview page
  - Test instance performance (test samples: large MR overview and changes tabs, large MR commits tab)
- GitLab.com performance: gitlab-foss large MR overview tab (test sample)
- GitLab.com performance: gitlab-foss large MR changes tab (test sample)
- GitLab.com performance: gitlab-foss empty MR overview tab (test sample)
- GitLab.com performance: gitlab large MR overview tab (test sample)
- GitLab.com performance: gitlab small MR overview tab (test sample)
- GitLab.com performance: Other project MR overview tab (test sample)
- Competitors:
- As a boring solution, link to https://forgeperf.org — the test samples don’t represent “large MRs” but the samples are at least consistently applied across tools. We already mention this resource in the Development Performance Indicators.
- Grafana dashboard to compare LCP of GitLab.com versus GitHub.com on key pages.
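As a sketch of how the machine-reported samples above might be read and interpreted, the snippet below (with made-up numbers, not taken from any dashboard) summarizes repeated LCP measurements with a high percentile rather than a mean, since tail latency is closer to what users actually experience:

```python
def percentile(samples, pct):
    """Nearest-rank percentile of a list of numbers."""
    ordered = sorted(samples)
    rank = max(0, int(round(pct / 100 * len(ordered))) - 1)
    return ordered[rank]

# Hypothetical LCP samples (ms) from repeated synthetic runs of one test sample.
lcp_ms = [2100, 2350, 1980, 4200, 2250, 2400, 2300, 3900]

p75 = percentile(lcp_ms, 75)
# Google's Web Vitals guidance treats LCP <= 2500 ms as "good".
print(f"p75 LCP: {p75} ms")  # → p75 LCP: 2400 ms
```

A single slow outlier (here 4200 ms) barely moves the p75, whereas it would drag a mean noticeably upward; that is why these dashboards are best read at a fixed percentile over time rather than as averages.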
🏃 Task times
Predict task times for primary MR tasks
👤 Human-reported
Edited by Pedro Moreira da Silva