UX Scorecard Part 1: Understand pipeline health to resolve and prevent issues from occurring
Summary
| JTBD | Personas | Benchmark score | Previous benchmark score |
|---|---|---|---|
| When I am managing continuous integration of code at scale, I want to understand the pipeline health, so I can successfully resolve and prevent issues from occurring. | Sasha (Software Developer), Ingrid (Infrastructure Operator) | C - Average (3.0) | N/A |
✨ Insights (see more in the Dovetail project)
Users are confused by CI/CD Project Analytics, which leads to low trust in the data they find there.
Details
- Users were confused by the data visualizations and their meaning when viewing metrics in CI/CD analytics. They did not trust the metrics, so they checked their accuracy by recalculating them by hand from the pipelines list page (for example, the pipeline failure rate). A scripted version of this manual check is sketched below.
- 🕊 Evidence
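The manual verification users performed (counting failures on the pipelines list page) can be approximated with a short script against the GitLab REST API using the `requests` library. This is a minimal sketch, assuming a placeholder instance URL, project ID, and token, and sampling only one page of recent pipelines:

```python
import requests

# Placeholders -- substitute your own instance URL, project ID, and token.
GITLAB = "https://gitlab.example.com/api/v4"
PROJECT_ID = 123
HEADERS = {"PRIVATE-TOKEN": "<your-access-token>"}

# Fetch one page of the most recent pipelines (paginate for a larger sample).
resp = requests.get(
    f"{GITLAB}/projects/{PROJECT_ID}/pipelines",
    headers=HEADERS,
    params={"per_page": 100},
)
resp.raise_for_status()
pipelines = resp.json()

# Count failures within the same sample so the rate is internally consistent.
failed = sum(1 for p in pipelines if p["status"] == "failed")
print(f"Failure rate over the last {len(pipelines)} pipelines: "
      f"{failed / len(pipelines):.1%}")
```

That users resort to this kind of recounting at all is the point of the insight: the analytics page should make the number trustworthy on its own.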
There is no connection between CI/CD Project Analytics and the Build menu today.
Details
- Users wondered whether other pages in GitLab contained CI/CD metrics and searched around for them. This is partly because there is currently no connection between the analytics page and the pipelines and jobs pages.
- 🕊 Evidence
It is extremely difficult to locate anomalies in pipelines today.
Details
- Users cannot quickly locate specific pipeline or job runs that are “not normal” (duration or status differing from the usual). This impacts their ability to optimize the pipeline, because they must first pinpoint those anomalies themselves.
- 🕊 Evidence
Key CI insights are missing from CI/CD Project Analytics today, causing users to come up with workarounds.
Details
- A number of metrics are missing from CI/CD Project Analytics today, which makes it difficult for users to complete their main job when optimizing pipelines. These metrics include (a sketch for deriving several of them via the REST API follows the evidence link below):
  - Average pipeline duration
  - Average job duration (for each job)
  - Pipeline failure rate
  - Job failure rate (for each job)
  - Pipeline duration history (beyond the last 30 commits)
- 🕊 Evidence
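As a rough illustration of the gap, most of the aggregates above can be derived today from the REST API, which is essentially what the observed workarounds amount to. A minimal sketch, again with placeholder instance URL, project ID, and token; note that the pipeline list payload omits duration, so each pipeline is fetched individually:

```python
import requests
from collections import defaultdict

# Placeholders -- substitute your own instance URL, project ID, and token.
GITLAB = "https://gitlab.example.com/api/v4"
PROJECT_ID = 123
HEADERS = {"PRIVATE-TOKEN": "<your-access-token>"}

def api_get(path, **params):
    resp = requests.get(f"{GITLAB}{path}", headers=HEADERS, params=params)
    resp.raise_for_status()
    return resp.json()

# One page of recent pipelines; paginate for a longer history.
pipelines = api_get(f"/projects/{PROJECT_ID}/pipelines", per_page=50)

pipeline_durations = []
job_durations = defaultdict(list)
for p in pipelines:
    # The list payload has no duration, so fetch each pipeline's detail.
    detail = api_get(f"/projects/{PROJECT_ID}/pipelines/{p['id']}")
    if detail.get("duration"):
        pipeline_durations.append(detail["duration"])
    # Collect per-job durations across runs, keyed by job name.
    for job in api_get(f"/projects/{PROJECT_ID}/pipelines/{p['id']}/jobs"):
        if job.get("duration"):
            job_durations[job["name"]].append(job["duration"])

print(f"Average pipeline duration: "
      f"{sum(pipeline_durations) / len(pipeline_durations):.0f}s")
for name, durations in sorted(job_durations.items()):
    print(f"  {name}: avg {sum(durations) / len(durations):.0f}s "
          f"over {len(durations)} runs")
```

The one-request-per-pipeline pattern here also hints at why users find the workaround painful: the data exists, but assembling it is slow and manual.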
There is no quick way to test GitLab CI today after making changes.
Details
- The only workaround to test CI today is to wait for a pipeline, or many pipelines, to run after making changes to the .gitlab-ci.yml file. This is a slow process for users, and depending on how many pipelines they feel they need to run after making changes, it can take hours. (A partial, lint-based mitigation is sketched after the evidence link.)
- 🕊 Evidence
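One partial mitigation that exists today is the project-scoped CI Lint API, which validates a configuration and returns the merged YAML without running any pipeline. It checks validity only, not runtime behavior or performance, so it shortens just one part of the feedback loop. A minimal sketch with placeholder credentials:

```python
import requests

# Placeholders -- substitute your own instance URL, project ID, and token.
GITLAB = "https://gitlab.example.com/api/v4"
PROJECT_ID = 123
HEADERS = {"PRIVATE-TOKEN": "<your-access-token>"}

# Read the locally edited configuration.
with open(".gitlab-ci.yml") as f:
    content = f.read()

# Validate the configuration without triggering a pipeline.
resp = requests.post(
    f"{GITLAB}/projects/{PROJECT_ID}/ci/lint",
    headers=HEADERS,
    json={"content": content},
)
resp.raise_for_status()
result = resp.json()

print("valid:", result["valid"])
for error in result.get("errors", []):
    print("error:", error)
```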
🔗 Resources
- Walkthrough video (22 min - watch at 2x speed)
- Recommendations issue
- Journey map
- Dovetail project
- Scoring sheet
Formative evaluation session details
Recruitment
3-5 GitLab team members who have experience with pipeline configuration and basic shell scripting.
Scenario
You want to examine pipeline performance on Project CI Insights to ensure that code is being executed as quickly as possible. According to new organization standards, jobs should not take longer than 5 minutes to run.
Tasks:
- Determine the average pipeline duration.
- Determine the average duration for each job in the pipeline and whether each is within the expected range (less than 5 minutes) according to organization standards.
- View the overall failure rate for the pipeline.
- Locate the jobs that are failing the most in the pipeline.
- Decide if the information you found can be used to improve the pipeline performance.
- Propose an update to the .gitlab-ci.yml file with the improvements.
- Run a pipeline with the updated configuration to test the performance against the old configuration (one way to script this comparison is sketched after this task list).
- Were your changes successful?
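For the final tasks, here is a sketch of how a participant (or facilitator) might trigger a pipeline on the updated configuration and read back its duration via the API. The branch name and credentials are placeholders; the comparison baseline would come from the averages computed in the earlier metrics sketch:

```python
import time
import requests

# Placeholders -- substitute your instance URL, project ID, token, and branch.
GITLAB = "https://gitlab.example.com/api/v4"
PROJECT_ID = 123
HEADERS = {"PRIVATE-TOKEN": "<your-access-token>"}
REF = "ci-tuning"  # hypothetical branch carrying the updated .gitlab-ci.yml

# Trigger a pipeline on the updated configuration.
resp = requests.post(
    f"{GITLAB}/projects/{PROJECT_ID}/pipeline",
    headers=HEADERS,
    params={"ref": REF},
)
resp.raise_for_status()
pipeline_id = resp.json()["id"]

# Poll until the pipeline reaches a terminal state, then report its duration.
while True:
    detail = requests.get(
        f"{GITLAB}/projects/{PROJECT_ID}/pipelines/{pipeline_id}",
        headers=HEADERS,
    ).json()
    if detail["status"] in ("success", "failed", "canceled"):
        break
    time.sleep(30)

print(f"Status: {detail['status']}, duration: {detail.get('duration')}s")
```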
UX Scorecard Checklist
Learn more about UX Scorecards
- Add this issue to the stage group epic for the corresponding UX scorecards. Verify that the "UX scorecard" label is applied.
- Work with your PM to identify a top Job to be Done (JTBD). All GitLab JTBD can be found in the jobs-to-be-done.yml file. If creating a new job, write it using the JTBD format: When [situation], I want to [job], so I can [expected outcome]. Review with your manager to ensure your JTBD is written at the appropriate level. Remember, a JTBD is not a user story; it should not directly reference a solution and should be tool agnostic.
- Make note of which personas might be performing the job, and link to them from this issue's description. Keeping personas in mind allows us to make the best decisions to address specific problems and pain points. Note: Do not include a persona in your JTBD format, as multiple types of users may complete the same job.
- If your JTBD spans more than one stage group, that’s great! Review your JTBD with a designer from that stage group for accuracy.
- Add any new JTBD to the SSOT jobs-to-be-done.yml file.
- Consider whether you need to include additional scenarios related to onboarding.
- Select the appropriate scorecard approach and evaluate the current experience.
- Use the Grading Rubric to provide an overall measurement that becomes the Benchmark score for the experience (one grade per JTBD), and add it to this issue's description. Document the score in the UX Scorecard Spreadsheet.
- Once testing is complete, create a walkthrough video that documents what you experienced/witnessed within the existing experience. Begin the video with a contextual introduction including: your role, stage group, how you acquired the data (for example, internal or external users, or a self-heuristic evaluation), and a short introduction to your JTBD and the purpose of the UX scorecard. This is not a "how to" video; instead, it should help build empathy for users by clearly showing areas of potential frustration and confusion. (You can point out where the experience is positive, too.) At the end of the video, make sure to include narration of the Benchmark score. Examples here and here.
  - If you're re-scoring the experience, walk through the entire flow again. For narration, you can highlight the recent improvements but still call out any areas that could use some tweaking (in the next round of iterations, if applicable). The re-score video should, in theory, be shorter, since we've hopefully eliminated a few bumps in the user flow.
  - The walkthrough video shouldn't take you long to create. Don't worry about it being polished or perfect; it's more important to be informative.
- Tag PM and UX DRIs for this JTBD in this issue to share findings.
- Post your video to the GitLab Unfiltered YouTube channel, and link to it from this issue's description.
- Link to your video in the Engineering Week in Review.
- Create a new Dovetail project using the UX scorecard template. Use insights to document any observations or findings that came out of this scorecard. You can use your experience map or video summary to help you curate those. It is important to add insights to Dovetail so they can be shared and accessed by all groups, and used to document cross-stage findings. You can also add any supporting material in Data, such as an exported Mural experience map, but it is not required. Example here.
- Create a recommendation issue for this JTBD and add it to the same stage group epic as this issue. Also add a link to your recommendation issue to this issue.