Screenshots and trace on the screenshot tab on the pipeline page
Problem to Solve
Sasha and the team have worked to create browser tests using Selenium and Webdriver.io. Today those tests capture a screenshot as an artifact on failure to help troubleshoot the test issue. Sasha has to dig through the logs and screenshots stored as artifacts to find the failures, which takes too long and is done outside of the GitLab UI.
If we simplified this workflow for users with JUnit output and screenshots by showing the error and the resulting screenshot in the context of the merge request, it would have a big impact on how long it takes to fix failing tests and builds, and it would speed up their automated system testing.
This is an iteration on [#216979], which was the first step.

Proposal

Gather the JUnit data and the corresponding screenshot (one per error for the MVC), and present each failure from the JUnit report side by side with its screenshot on the job view. We will display one screenshot per test, as we expect one failure per test.
The final designs are accessible in the Design tab. The designs show test metadata that is considered a stretch goal for this MVC; there is another issue to add it to the feature at a later time.
Users will create a pipeline job that runs the tests and marks the artifacts to be viewed as "Selenium" artifacts. The job will produce metadata and screenshots as artifacts (a sketch of such a job is shown below).
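A minimal sketch of such a job, assuming the existing `artifacts:reports:junit` keyword carries the JUnit metadata and a plain `paths` entry exposes the screenshots; the job name, paths, and the exact way artifacts are flagged as "Selenium" artifacts are assumptions for illustration:

```yaml
# Hypothetical job definition; names and paths are placeholders.
browser-tests:
  image: node:12
  script:
    - npm ci
    - npm test                       # runs the Webdriver.io suite; writes JUnit XML and screenshots
  artifacts:
    when: always                     # keep artifacts even when the tests fail
    expire_in: 1 week
    paths:
      - screenshots/                 # screenshots captured on failure
    reports:
      junit: results/wdio-*.xml      # JUnit metadata the new job view would consume
```

With a job shaped like this, the failure data and the screenshots arrive from the same pipeline run, so the job view has everything it needs to pair each failure with its image.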
When opening the job details view, we want to show this specific view instead of the standard "black terminal" view. The "Raw" view will be available as a secondary tab (or equivalent). Lazy loading of images, so that the page is usable before all images have loaded (the use case for hundreds or thousands of tests in a suite), would be nice to have.
The latest designs are in the Design tab of [#6061]
Permissions and Security
- Editing the `.gitlab-ci.yml` should follow project guidelines; this requires no special permissions
- Viewing and downloading the data requires no special permissions
Documentation

This will need a new page in the docs to instruct users:
- How to create JUnit XML that is detected, and how to set up artifacts that are discoverable by the feature (see the sketch after this list)
- Where to see the report
- How long the artifacts persist
- Common troubleshooting tips if the report fails to appear (for example, if you need two MRs before the report appears)
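One possible shape for the JUnit XML the docs would describe, assuming the screenshot captured on failure is referenced from the failing test case (for example, as an attachment line in `system-out`); the attachment convention, names, and paths here are assumptions, not a settled format:

```xml
<!-- Hypothetical report; names, paths, and the attachment syntax are assumptions. -->
<testsuite name="login" tests="2" failures="1">
  <testcase classname="login" name="rejects a bad password" time="0.41"/>
  <testcase classname="login" name="shows the dashboard after sign in" time="2.13">
    <failure message="element '#dashboard' not visible">
      Expected element '#dashboard' to be displayed within 5000ms
    </failure>
    <!-- Screenshot uploaded as an artifact by the job above -->
    <system-out>[[ATTACHMENT|screenshots/login-shows-the-dashboard.png]]</system-out>
  </testcase>
</testsuite>
```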
Testing

- Test for performance with many (thousands) of screenshots
- Test for the case where the JUnit report is present but the screenshot is missing
What does success look like, and how can we measure that?
What is the type of buyer?
The buyer for this is a manager who is trying to make life easier for a team that is doing automated browser testing and having trouble identifying why those tests are failing.