Display extra metadata (user agent, viewport size, etc.) on screenshot tab
Problem to Solve
Sasha and the team have worked to create browser tests using Selenium and Webdriver.io. Today those tests capture a screenshot as an artifact on failure to help troubleshoot the test issue. Sasha has to dig through the logs and screenshots stored as artifacts to find the failures, which takes too long and happens outside of the GitLab UI.
If we simplified this workflow for users with JUnit output plus screenshots by showing the error and the resulting screenshot in the context of the merge request, it would significantly reduce the time it takes to fix failing tests and builds and speed up their automated system testing.
CI Views (gitlab#18850) allows jobs to create artifacts that get a specific view based on their definition.
Webdriver.io is only called out above as an example technology. Several test frameworks produce output that references captured screenshots on errors. This issue aims to implement a specific view for the contents of screenshotPath and the corresponding JUnit failures, whether that output comes from Webdriver/Selenium (the GitLab case), Cucumber, or Cypress, all of which provide JUnit output.
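To make this concrete, here is a sketch of what such JUnit output might look like. The file names and the `[[ATTACHMENT|...]]` marker are illustrative assumptions, not a settled format:

```xml
<testsuite name="login" tests="2" failures="1">
  <testcase classname="login" name="shows an error on a bad password" time="2.1">
    <failure message="Expected error banner to be visible">stack trace …</failure>
    <!-- illustrative convention for pointing a failure at its screenshot -->
    <system-out>[[ATTACHMENT|screenshots/login--bad-password.png]]</system-out>
  </testcase>
  <testcase classname="login" name="signs in with valid credentials" time="1.4"/>
</testsuite>
```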
For the purposes of this MVC, and in the spirit of dogfooding, the use cases we are building for are those of the internal stakeholder(s) at GitLab who are having this issue. For those stakeholders a single failure per test is expected, and no screenshot will be captured for errors in tests. We will be strict about the JUnit input so that each failure can be matched to its screenshot.
Zeff mentioned we may want to create a custom RSpec formatter that generates standardized JUnit output we can consume for this feature and share with the wider community.
Some specific use and corner cases to take into account:
- The screenshot cannot be found (expired, removed, never generated)
- The error log is very large (multiple pages long)
Gather the JUnit data and the corresponding screenshot (one per failure for the MVC) and present them so that each failure in the JUnit output and its single corresponding screenshot appear side by side on the job view. We will display one screenshot per test, as we expect one failure per test.
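As a rough sketch, the matching logic could walk the JUnit report and pair each failed test case with the screenshot it references. The function name, the `[[ATTACHMENT|path]]` marker, and the XML shape are assumptions for illustration, not the final implementation:

```python
# Hypothetical sketch: pair JUnit failures with referenced screenshots.
# The [[ATTACHMENT|path]] marker is an assumed convention.
import re
import xml.etree.ElementTree as ET

ATTACHMENT_RE = re.compile(r"\[\[ATTACHMENT\|(?P<path>[^\]]+)\]\]")

def failed_cases_with_screenshots(junit_xml):
    """Return a list of (test name, failure message, screenshot path or None)."""
    results = []
    root = ET.fromstring(junit_xml)
    for case in root.iter("testcase"):
        failure = case.find("failure")
        if failure is None:
            continue  # only failed tests get a row in the view
        screenshot = None
        out = case.findtext("system-out") or ""
        match = ATTACHMENT_RE.search(out)
        if match:
            screenshot = match.group("path")  # None stays when no screenshot exists
        results.append((case.get("name"), failure.get("message"), screenshot))
    return results
```

A `None` screenshot path covers the corner case above where the screenshot was expired, removed, or never generated: the view can then show the failure text alone with a placeholder image.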
An initial suggestion for implementation was "The artifacts/reports are generated by a seamless Selenium proxy: https://gitlab.com/gitlab-org/gitlab-selenium-server. The user just points their tests at the GitLab Selenium proxy and it forwards onto their normal Selenium server generating the necessary data in the middle."
Some mockups/screenshots are available in https://gitlab.com/gitlab-org/gitlab-ce/issues/35379#note_38913392.
The final designs are accessible in the Design Tab. The designs show metadata from the test that is considered a stretch goal for this MVC. There is a separate issue to add it to the feature at a later time.
Users will create a pipeline job to run the tests and mark the artifacts to be viewed as "Selenium" artifacts. The job will produce metadata and screenshots as artifacts.
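A hypothetical `.gitlab-ci.yml` job for this flow might look like the following. The job name, script, and paths are made up; `artifacts:reports:junit` is the existing JUnit report syntax, while the "Selenium" artifact marker itself is still to be designed:

```yaml
browser-tests:
  stage: test
  script:
    - npm run wdio                # runs the Webdriver.io suite
  artifacts:
    when: always                  # keep artifacts for failed jobs too
    reports:
      junit: results/junit.xml    # failures the new view will read
    paths:
      - screenshots/              # screenshots referenced by the report
```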
When opening the job details view, we want to show this specific view instead of the standard "black terminal" view. A "Raw" view will be available as a secondary tab (or equivalent). Lazy loading of images, so the page is usable before all images have loaded (the use case being suites with hundreds or thousands of tests), would be nice to have.
Permissions and Security
- Editing `.gitlab-ci.yml` follows the project guidelines; this requires no special permissions
- Viewing and downloading the data requires no special permissions
This will need a new page in the docs to instruct users:
- How to create JUnit XML that is detected, and how to set up artifacts that are discoverable by the feature
- Where to see the report
- How long the artifacts persist
- Common troubleshooting tips if the report fails to appear (for example, if you need two MRs before the report appears)
- Test performance with many (1000s) of screenshots
- Test the case where JUnit output is present but the screenshot is missing
What does success look like, and how can we measure that?
Make sure these are completed before closing the issue, with a link to the relevant commit.
What is the type of buyer?
The buyer for this is a manager trying to make life easier for a team that is doing automated browser testing and struggling to identify why those tests are failing.