
Process to discover Predictive Test Selection gaps

Context

Closes #465

I think the fastest way to know whether our Predictive Test Selection solution is working well is twofold:

  1. Look at the overall numbers (which include false positives such as flaky tests and infra failures on the full pipelines)
  2. Look at the specific transitions regularly to find real test selection gaps in MRs. I think this will give us insights and validate the overall numbers.
    • If we have a lot of false positives (i.e. a lot of flakiness/infra failures on full backend pipelines), we'll know the overall numbers are very pessimistic, and that the real accuracy is higher (see the sketch after this list).
    • If we have a lot of true positives, we'll have real test selection gaps to fix!
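
To make point 2 concrete, here is a minimal sketch in Python (with entirely hypothetical data, failure causes, and counts — not our actual pipeline schema) of how filtering out known flakiness/infra failures separates the pessimistic raw numbers from the real gap count:

```python
# Minimal sketch with hypothetical data: transitions where the selective
# (predictive) pipeline passed but the full pipeline failed are the raw
# candidates for test selection gaps.
candidate_gaps = [
    {"mr": 101, "failure_cause": "flaky_test"},   # false positive
    {"mr": 102, "failure_cause": "infra"},        # false positive
    {"mr": 103, "failure_cause": "missed_test"},  # a true selection gap
]

# Causes we treat as false positives when auditing transitions.
FALSE_POSITIVE_CAUSES = {"flaky_test", "infra"}

true_gaps = [
    gap for gap in candidate_gaps
    if gap["failure_cause"] not in FALSE_POSITIVE_CAUSES
]

total_full_pipelines = 500  # hypothetical number of full pipelines audited

# The raw (pessimistic) miss rate counts every failed full pipeline as a
# gap; the adjusted rate only counts genuinely missed tests.
pessimistic_miss_rate = len(candidate_gaps) / total_full_pipelines
adjusted_miss_rate = len(true_gaps) / total_full_pipelines

print(f"Raw miss rate (pessimistic): {pessimistic_miss_rate:.2%}")
print(f"Adjusted miss rate (real):   {adjusted_miss_rate:.2%}")
```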

What's in this MR?

As a result of this MR, we also have a first investigation issue following the process above, and an epic to group newly found test selection gaps (both part of the process).
