UX Scorecard (Part 1) - Secure FY21-Q4 - Security gates (accountability)
Summary of the results of this UX scorecard study
As a user, I have heard about a feature: when new vulnerabilities are detected in a merge request, someone can be assigned as a security gatekeeper who can block the merge request so that the team can review the vulnerabilities and resolve them or decide on next steps. I want to enable this feature and assign a particular person on my team to fulfil this approve/disapprove task.
Accountable (Maintainer): the user who sets up the Vulnerability-Check rule and/or is part of the approvers group.
Summary (Please read the script or video for details)
In summary, the experience of setting up the security gatekeeper is very difficult. First, discoverability is low: even if I have heard about a feature like this, it is almost impossible to find it in the current UI.
In this case, the user is likely to simply give up.
In a different situation, where the user already has the documentation, they can follow the instructions to set it up, but even that is not easy. For such a small task, the general expectation is about a five-minute job. With our instructions, it may take up to 30 minutes to read and understand them, and then finally complete the configuration in settings. And even after setting it up, the user is left unsure: is it done? Is it working?
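The configuration the user is struggling to reach can also be scripted. The sketch below is a minimal, hypothetical example of creating a "Vulnerability-Check" approval rule through GitLab's project approval-rules API; the instance URL, project ID, token, and user IDs are placeholder assumptions, and the endpoint and field names should be verified against the API documentation for your GitLab version before use.

```python
# Hypothetical sketch: creating a "Vulnerability-Check" approval rule via
# GitLab's project-level merge request approvals API. All identifiers below
# (URL, project ID, token, user IDs) are illustrative assumptions.
import json
from urllib import request

GITLAB_URL = "https://gitlab.example.com"  # assumption: your instance URL
PROJECT_ID = 42                            # assumption: target project ID
TOKEN = "glpat-..."                        # assumption: a personal access token

def build_vulnerability_check_rule(gatekeeper_user_ids, approvals_required=1):
    """Build the payload for an approval rule that requires a designated
    gatekeeper's approval before a merge request can be merged."""
    return {
        "name": "Vulnerability-Check",
        "approvals_required": approvals_required,
        "user_ids": gatekeeper_user_ids,
    }

def create_rule(payload):
    """POST the rule to the project's approval_rules endpoint (network call;
    not executed in this sketch)."""
    req = request.Request(
        f"{GITLAB_URL}/api/v4/projects/{PROJECT_ID}/approval_rules",
        data=json.dumps(payload).encode(),
        headers={"PRIVATE-TOKEN": TOKEN, "Content-Type": "application/json"},
        method="POST",
    )
    with request.urlopen(req) as resp:
        return json.load(resp)

# Build (but do not send) a rule naming user 7 as the gatekeeper.
payload = build_vulnerability_check_rule([7])
print(payload["name"])  # → Vulnerability-Check
```

If a one-call setup like this were surfaced in the docs alongside the UI path, it could shrink the 30-minute setup described above for users comfortable with the API.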
All in all, when users expect a simple job but cannot figure out how to do it, when even with instructions it takes far longer than expected, and when the end result is unclear, the frustration level is high.
I would rate the emotional level of this experience as Negative, and the Grading Rubric score would be a D.
Negative: The user did not receive the results they were expecting. There may be bugs, roadblocks, or confusion about what to click on that prevents the user from completing the task. Maybe they even needed to find an alternative method to achieve their goal. Emotion(s): Angry, Frustrated, Confused, Annoyed.
D (Presentable): Workflow has clear issues and should not have gone into production without more thought and testing. User may or may not be able to complete the task. High risk of abandonment.
- Frustration: High
- Task Completion: Unlikely, but there may be a chance of completion
- Steps to Complete Task: Excessive
Link to the handbook page about the UX Scorecard.
Mention which personas might be performing the job. Keeping personas in mind allows us to use the correct language and make the best decisions to address their specific problems and pain points when writing recommendations.
If your JTBD spans more than one stage group, that’s great! Review your JTBD with a designer from that stage group for accuracy.
Review the current experience, noting where you expect a user's high and low points to be. Capture the screens and jot down observations.
It's also advised that you ask another person (internal or external) relatively new to the workflow to accomplish the JTBD. Record this session, and document their experience of the JTBD. Note that an additional user isn't currently required, but can provide valuable insights that you might not have thought of. Depending on how complex the JTBD is, and how familiar the task is to you, you can invite additional participants so you can get a broad view of the JTBD. If you approach this as a usability study and follow a process approved by a UX Researcher, you may apply an appropriate research label.
Using what you learned in the previous steps, apply the following Emotional Grading Scale to document how a user likely feels at each step of the workflow. Add this documentation to each JTBD issue's description.
- Positive: The user’s experience included a pleasant surprise: something they were not expecting to see. The user enjoyed the experience on the screen and could complete the task, effortlessly moving forward without having to stop and reassess their workflow. Emotion(s): Happy, Motivated, Possibly Surprised
- Neutral: The user’s expectations were met. Each action provided the basic expected response from the UI so that the user could complete the task and move forward. Emotion(s): Indifferent
- Negative: The user did not receive the results they were expecting. There may be bugs, roadblocks, or confusion about what to click on that prevents the user from completing the task. Maybe they even needed to find an alternative method to achieve their goal. Emotion(s): Angry, Frustrated, Confused, Annoyed
Use the Grading Rubric below to provide an overall measurement that becomes the Benchmark Score for the experience (one grade per JTBD) and add it to each JTBD issue's description. Document the score in the UX Scorecard Spreadsheet.
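To make the per-step documentation concrete, here is a small, purely illustrative sketch of how the Emotional Grading Scale could be recorded for each step of a JTBD walkthrough. The step names and assigned grades are hypothetical examples, not findings from this study.

```python
# Hypothetical sketch: recording an Emotional Grading Scale entry per step
# of a JTBD workflow. Step names and grades are illustrative assumptions.
from enum import Enum
from collections import Counter

class Emotion(Enum):
    POSITIVE = "Positive"  # pleasant surprise; effortless progress
    NEUTRAL = "Neutral"    # expectations met
    NEGATIVE = "Negative"  # bugs, roadblocks, or confusion

# Example walkthrough of a gatekeeper-setup JTBD (grades are made up).
steps = {
    "Find the feature in the UI": Emotion.NEGATIVE,
    "Read the documentation": Emotion.NEGATIVE,
    "Configure the approval rule in settings": Emotion.NEUTRAL,
    "Confirm the rule is working": Emotion.NEGATIVE,
}

# Tally the grades to see where the workflow skews.
counts = Counter(steps.values())
print(counts[Emotion.NEGATIVE])  # → 3
```

A tally like this makes it easy to see at a glance which parts of the workflow drag the Benchmark Score down, before assigning the single overall grade per JTBD.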
Once you’re clear about the user’s path, create a clickthrough video that documents the existing experience. Begin the video with a contextual introduction including your role, stage group, and a short introduction to your JTBD and the purpose of the UX scorecard. This is not a "how-to" video, but instead should help build empathy for users by clearly showing areas of potential frustration and confusion. (You can point out where the experience is positive, too.) The Emotional Grading Scale you documented earlier will help identify areas to call out. At the end of the video, make sure to include narration of the Benchmark Score. Examples here and here.
Post your video to the GitLab Unfiltered YouTube channel and link to it from each JTBD issue's description.
Link to your video in the [Engineering Week in Review](https://docs.google.com/document/d/1Oglq0-rLbPFRNbqCDfHT0-Y3NkVEiHj6UukfYijHyUs/edit#heading=h.wl5oryd6kv3u).
Create an issue to revisit the same JTBD the following quarter to see if we have made improvements. We will use the grades to monitor progress toward improving the overall quality of our user experience. Add that issue as related to each JTBD issue.