Incorporate feedback from QA on Test Sessions

Problem to Solve

When test sessions run as part of a user's pipelines, there is currently no native way to get the overall status of the test cases performed, including their pass/fail results.

Ideally, release managers and quality engineers would collaborate on a release by discussing the testing performed. This collaboration should happen on a public-facing issue, and would not need to contain the full output of each test run, just a high-level overview of pass/fail results.

The workaround

The QA Team initially designed an issue template to aggregate the information needed to investigate their test cases.

From there, they refined it further by using the @gitlab-qa triage bot to automatically generate these reports (example issue) at the end of a deployment using a CI job.
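As a rough illustration of what such a job could do (this is not the actual @gitlab-qa implementation; the report shape and field names below are assumptions), the last step of a pipeline could aggregate per-test results into the kind of high-level summary described above, then post it as an issue:

```python
# Sketch: turn raw test case results into a high-level summary suitable for
# an issue body. Field names ("name", "status") are assumptions, not the
# actual @gitlab-qa report format.

def summarize(test_cases):
    """Build a markdown summary: overall status plus a checklist of failures."""
    failed = [t for t in test_cases if t["status"] == "failed"]
    lines = [
        f"Overall: {'FAILED' if failed else 'PASSED'}",
        f"{len(test_cases) - len(failed)}/{len(test_cases)} test cases passed",
        "",
    ]
    for t in failed:
        # A checklist item per failure lets QA check off known issues later.
        lines.append(f"- [ ] `{t['name']}` failed")
    return "\n".join(lines)

if __name__ == "__main__":
    cases = [
        {"name": "login", "status": "passed"},
        {"name": "push_mirror", "status": "failed"},
    ]
    body = summarize(cases)
    print(body)
    # The job could then post `body` through the existing Issues API, e.g.:
    # requests.post(f"{api}/projects/{project_id}/issues",
    #               headers={"PRIVATE-TOKEN": token},
    #               data={"title": "Test session report", "description": body})
```

The posting step is shown only as a comment because authentication and project selection are deployment-specific.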

Test Session Anatomy

  • Each report represents a deployment environment
    • There can be around 120 reports per day (4 environments x 30 reports), given how frequently GitLab deploys
  • A list of test cases and their results (failures matter the most)
  • QA needs a way to check off test cases that are not a concern or are known failures
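The anatomy above could be modeled roughly as follows. Everything here is hypothetical, including the `known_failures` set standing in for the "check off" behavior QA needs:

```python
from dataclasses import dataclass, field

@dataclass
class TestSession:
    """One test session per deployment environment (~4 envs x ~30 reports/day)."""
    environment: str
    results: dict = field(default_factory=dict)       # test case name -> "passed"/"failed"
    known_failures: set = field(default_factory=set)  # checked off as not a concern

    def unexpected_failures(self):
        # Failures matter most, minus those QA has already checked off as known.
        return {name for name, status in self.results.items()
                if status == "failed" and name not in self.known_failures}

    def passed(self):
        return not self.unexpected_failures()

session = TestSession(
    environment="staging",
    results={"login": "passed", "api_fuzz": "failed", "flaky_ui": "failed"},
    known_failures={"flaky_ui"},
)
print(session.unexpected_failures())  # {'api_fuzz'}
print(session.passed())               # False
```

The key design point is that the session's overall status depends on *unexpected* failures only, so checking off a known failure flips the session green without rerunning anything.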

Something to consider

GitLab is doing this in an automated way, but other companies would likely want to build test sessions manually, probably through the UI. Simply providing an API to create these mappings manually first is the boring solution.

Diverge on solutions

  • Find a way to scale the job that @gitlab-qa runs at the end of a deployment for general use
  • Create an API that allows a test session report to be added as an issue
  • Mimic the milestone list view to represent test sessions, but instead of linking to a filtered issues list it shows the list of test cases
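For the second option, here is a hypothetical sketch of what a client might send to such an API. No such endpoint exists today; every field name is invented for illustration:

```python
# Hypothetical payload for a "create test session report as an issue" API.
# The endpoint and all field names are assumptions, not an existing GitLab API.

def build_test_session_payload(environment, test_cases):
    failed = [t["name"] for t in test_cases if t["status"] == "failed"]
    return {
        "environment": environment,
        "total": len(test_cases),
        "failed": failed,
        "create_issue": True,  # also open a public-facing summary issue
    }

payload = build_test_session_payload(
    "production",
    [{"name": "login", "status": "passed"},
     {"name": "billing", "status": "failed"}],
)
print(payload["failed"])  # ['billing']
```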

Resources

Test Sessions comparison

(Screenshots: Screen_Shot_2020-11-20_at_12.07.48_PM, Screen_Shot_2020-11-20_at_12.11.26_PM)

Edited by Austin Regnery