Measuring Usage and Importance of CPT on MRs

Background

The Component Performance Testing (CPT) tool is a self-service performance testing framework that integrates with CI/CD pipelines to help teams catch performance regressions early. As we continue to evolve and improve this tool, we need to better understand how to measure its usage, adoption, and business impact.

Objective

Currently, CPT runs performance tests on merge requests (MRs) and posts the results as a comment on the MR.

Example report: gitlab-org/gitlab!201559 (comment 2691447406)

As part of this issue, we want to measure how useful this report is and what impact it has. Surveys are one option, but there may be better approaches that are more user-friendly and don't require much engagement from users. This issue is to identify the various approaches teams take to measure the success of the tools they've created.

Potential approaches to consider

  • Creating a survey to gather metrics, as per #86 (comment 2713429553)
  • Labelling the MR when a performance degradation is detected, and again once performance has recovered, as per #86 (comment 2713429553)
  • Adding 👍 and 👎 reactions on the report comment for quick feedback
  • Adding a flag [-1, 0, 1] for [improved performance, consistent performance, degraded performance]
  • Tracking the number of commits between a degraded-performance report and the subsequent normal-performance report
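The last two ideas compose naturally: if each CPT report for a commit is reduced to the issue's [-1, 0, 1] flag, the "commits to recovery" metric falls out of a single pass over the flag sequence. A minimal sketch, assuming such a per-commit flag sequence is available (the function name and the reduction itself are illustrative, not part of CPT):

```python
def commits_to_recovery(flags):
    """Given chronological per-commit CPT flags
    (-1 improved, 0 consistent, 1 degraded, per the convention above),
    return a list of the number of commits each degradation
    took to get back to a normal-performance report."""
    recoveries = []
    degraded_at = None
    for i, flag in enumerate(flags):
        if flag == 1 and degraded_at is None:
            degraded_at = i  # degradation first reported on this commit
        elif flag in (-1, 0) and degraded_at is not None:
            recoveries.append(i - degraded_at)  # commits until recovery
            degraded_at = None
    return recoveries
```

A degradation that is never followed by a normal or improved report contributes nothing to the list, which itself could be surfaced as a separate "unresolved regressions" count.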

Questions for Discussion

  • What strategies do you use to quantify the business impact of the tools you've created?
  • How do you track adoption metrics (users, usage frequency, integrations)?
  • What data collection methods have proven effective?
Edited by Vishal Patel