Measuring Usage and Importance of CPT on MRs
This epic was created by promoting the issue https://gitlab.com/gitlab-org/quality/component-performance-testing/-/issues/86
### Background
The Component Performance Testing (CPT) tool is a self-service performance testing framework that integrates with CI/CD pipelines to help teams catch performance regressions early. As we continue to evolve and improve this tool, we need to better understand how to measure its usage, adoption, and business impact.
### Objective
Currently, CPT runs performance tests on MRs and posts the results as a comment on the MR.
Example report: https://gitlab.com/gitlab-org/gitlab/-/merge_requests/201559#note_2691447406
As part of this issue, we want to measure how useful this report is and what impact it has. Surveys are one option, but there may be better approaches that are more user friendly and require less engagement from users. This issue is to identify the various approaches teams take to measure the success of the tools they have created.
### Potential approaches to consider
* ~~Creating a survey for gathering metrics~~ - as per https://gitlab.com/gitlab-org/quality/component-performance-testing/-/issues/86#note_2713429553
* Label the MR on performance degradation and also label it once the MR performance has improved - as per https://gitlab.com/gitlab-org/quality/component-performance-testing/-/issues/86#note_2713429553
* Having a :thumbsup: and :thumbsdown: on the reporting comment for quick interaction
* Having a flag \[-1, 0, 1\] for \[improved performance, consistent performance, degraded performance\]
* Tracking the number of commits between a performance-degradation report and the subsequent normal-performance report.
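The reaction-based option above could be collected automatically by reading the award emoji on the CPT report comment through the GitLab REST API. A minimal sketch, assuming placeholder project, MR, and note IDs and a `GITLAB_TOKEN` environment variable (none of these values come from the issue):

```python
# Sketch: tally :thumbsup: / :thumbsdown: reactions on a CPT report
# comment via the GitLab award-emoji API. IDs below are placeholders.
import json
import os
import urllib.request


def tally_reactions(award_emoji: list) -> dict:
    """Count thumbsup/thumbsdown entries in an award_emoji payload."""
    counts = {"thumbsup": 0, "thumbsdown": 0}
    for award in award_emoji:
        name = award.get("name")
        if name in counts:
            counts[name] += 1
    return counts


def fetch_report_reactions(project_id: int, mr_iid: int, note_id: int) -> dict:
    # GET /projects/:id/merge_requests/:iid/notes/:note_id/award_emoji
    url = (
        f"https://gitlab.com/api/v4/projects/{project_id}"
        f"/merge_requests/{mr_iid}/notes/{note_id}/award_emoji"
    )
    req = urllib.request.Request(
        url, headers={"PRIVATE-TOKEN": os.environ["GITLAB_TOKEN"]}
    )
    with urllib.request.urlopen(req) as resp:
        return tally_reactions(json.load(resp))
```

Aggregated over all CPT report comments, this would give a lightweight usefulness signal without requiring users to fill in a survey.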
### Questions for Discussion
* What strategies do you use to quantify business impact of the tool you created?
* How do you track adoption metrics (users, usage frequency, integrations)?
* What data collection methods have proven effective?
### DRI
@vishal.s.patel
### Status
<!-- STATUS NOTE START -->
## Status 2026-01-21
Following feedback from the ~"group::code review" team, we need to track whether the code changes in the MR diff are actually in the execution path of the API under test. If they are not, we can note that in the report, which would make it more accurate and avoid false negatives/positives. To do this, we need to integrate the GitLab MCP server with the Claude API, which has been the main focus this week.
:clock1: **total hours spent this week by all contributors**: 25h
:tada: **achievements**:
- Working on automating OAuth token generation to integrate the MCP server with Claude. A Bash script for this is running successfully.
- Also working on prompt engineering: creating prompts for the MCP server to identify whether the code diff in an MR would be executed when the API is tested.
  - This could also enable selective test execution in the future: run a test only if the MR's code changes fall within the execution path of the API under test; otherwise skip it.
:issue-blocked: **blockers**:
- DRI this week
- Also blocked on IT for [**Service Account: Gmail \<\> Claude, GitLab**](https://gitlab.com/gitlab-com/gl-security/corp/issue-tracker/-/work_items/3608#top "Service Account: Gmail <> Claude, GitLab"). This service account will be used for the MCP server integration.
  - For now, proceeding locally with my own account to test the code.
:arrow_forward: **next**:
- Continue working on the OAuth token generation script and testing the prompts locally.
_Copied from https://gitlab.com/groups/gitlab-org/quality/-/epics/298#note_3021859960_
<!-- STATUS NOTE END -->