UX Scorecard - Secure FY21-Q4 - Security gates (responsibility)
Job to be done
As a user who is a participant in a project:
JTBD: When a merge request is disallowed, I want to know why, so I can resolve the issue and proceed with the MR.
Users
- Accountable (maintainer): user who set up the Vulnerability-Check and/or is part of the approvers group.
- Responsible (participant): user who is a participant in a project and is unable to proceed with an MR that has an active Vulnerability-Check (high, critical, or unknown severity vulnerabilities detected). See experience baseline.
Video
Summary (Please read the script or video for details)
In summary, the security gate feature is usable, but it clearly needs improvement:
- Problem 1: In settings, there are three approval rules, but only one shows up when creating an MR. Users may wonder why, and may re-add existing rules during MR creation.
- Problem 2: There is only one approve button. When the user approves the Vulnerability-Check, the License-Check is automatically approved by that user as well. As a user, can I choose to approve only one?
-
- Problem 3: The MR can’t be merged because approvals are needed, but the message above says that it has been approved.
  - It is not clear to the user what is approved and what is not.
  - The CTA “View eligible approvers” does not make it clear that multiple approvals still need to be reviewed.
- Problem 4: There are two reasons why the MR can’t be merged:
  - conflicts
  - approvals
  As a user, I only see one reason. Shouldn’t I see both and decide what to do? A security issue may be more urgent than resolving the conflicts.
We hope these problems can be solved for users soon. For suggested solutions, see the recommendation issue. There is ongoing discussion about how important these features are to users, and that definitely needs to be investigated. One aspect we shouldn’t forget is that the reason users don’t use a certain feature could be that the feature is not good enough, rather than that users don’t need it. In any case, the improvements mentioned in the video and in the recommendation issue should be addressed soon.
Grading Rubric
C (Average): Workflow needs improvement, but the user can still complete the task. It usually takes longer to complete than it should, and the user may abandon the process or try again later.
- Frustration: Medium
- Task Completion: Successful, but with unnecessary steps
- Steps to Complete Task: Average complexity
Emotional Grading Scale
- Neutral: The user’s expectations were met. Each action provided the basic expected response from the UI, so that the user could complete the task and move forward. Emotion(s): Indifferent
Checklist
1. Document the current experience of the JTBD, as if you are the user. Capture the screens and jot down observations. Also, apply the following Emotional Grading Scale to document how a user likely feels at each step of the workflow. Add this documentation to the epic's description.
2. Use the Grading Rubric below to provide an overall measurement that becomes the Benchmark Score for the experience, and add it to the epic's description.
3. Once you’re clear about the user’s path, create a clickthrough video that walks through the experience and includes narration of the Emotional Grading Scale and Benchmark Score.
4. Post your video to the GitLab Unfiltered YouTube channel, and link to it from the epic's description.
5. If your JTBD spans more than one stage group, that’s great! Review your JTBD with a designer from that stage group for accuracy.
6. Create an issue to revisit the same JTBD the following quarter to see if we have made improvements. We will use the grades to monitor progress toward improving the overall quality of our user experience.