UX Scorecard - Secure - FY22-Q2 - On-demand DAST Scan
- Personas: Sam (Security Analyst)
- Previous score and scorecard: Unknown
- Benchmark score: C
- Walkthrough video: https://youtu.be/E5KZK_SUJeE
- Recommendations: #1612 (comment 575757488) (closed recommendations issue: #1615 (closed))
👨 Persona(s)
💼 JTBD
When I am assessing the security of my application in production, I want to know whether my app is currently vulnerable, so that I can address detected business-critical vulnerabilities.
🚶 Experience Walkthrough
- First, let's see if there are any known vulnerabilities. (Navigates from **Project Overview** (start) > **Security & Compliance** > **Vulnerability Report**)
- Okay, it doesn't look like any testing has been done on this project. Let's click the **Configure security testing** button and see if we can run a test.
- Okay... there's a lot going on here. I know I want to run a DAST scan, so I'll focus on that. Interesting that there are two DAST options here. I want to run a scan on my production environment, so the first option, which runs DAST on a review app, isn't right. I see that **DAST Scans** are "Available for on-demand DAST". Let's click **Manage** and see where that leads.
- It doesn't look like anything has been configured here, but the page description tells me these items can be used for on-demand scans. I think that's what I want. Before I get too into the weeds, let's check out the **On-demand Scans** section under the **Security & Compliance** menu and see if it's what I'm looking for.
- Based on the page title and description, this looks right! It seems like I can set up my scanner and site profiles from here too. Let's dig in and see if we can get a scan running. (Fills in **Scan name**, **Description**, and **Branch**. Starts the process of adding a scanner profile. A scripted version of this profile-and-scan flow is sketched after the walkthrough.)
- Most of this looks pretty standard. Since I'm testing a production environment, let's play it safe and run a passive scan for now. I'll leave the default settings in place for the timeout fields. Just a couple more options to go... I don't need any debug messages right now, so I'll ignore that. But what is this AJAX Spider? Let's explore... (Hovers over the tooltip)
- Hmm... the tooltip helps me understand what will happen if I enable the AJAX Spider, but I'm not exactly sure why I should use it. I guess more coverage is usually a good thing, right? Let's turn it on and keep moving.
- Great, my scanner profile has been added. Now I just need to create a site profile and we're ready to start a scan. Let's dig into that.
- Okay, this looks pretty standard too. Let's add the site and authentication info for the production environment.
- Awesome, my profiles are set up and it looks like I'm ready to run a scan! Wait a minute... why does it say my site is not validated? I don't recall being asked to validate it when creating the site profile 🤔. Let's take another look and see if I missed something...
- Nope... nothing about validating my site here 😕. It looks like I'm still able to run my scan, though. Let's just start it and see what happens 🤷‍♂️
- Okay, good! My DAST scan is underway. Wait... what is this title at the top of the page? I don't remember adding that anywhere in the scan configuration. Hopefully I set everything up properly. Nevertheless, my scan is running. Let's see if I can get more info about what's happening.
- I see that the scan is in progress. I wonder how long it will take to complete? The log on this page offers some great information and transparency into the scanning process, but it probably takes some time to read through and understand what's going on. I guess we'll just let it do its thing and check back later. I'll go do something else and come back in a bit.
- Okay, let's see how our scan is doing. How do I get back to the scan progress screen? (Navigates from **Project Overview** > **Security & Compliance** > **On-demand Scans**)
- Hmm... this isn't right. Maybe I can find it under **Manage DAST scans**?
- Ahh yes, here's the scan I set up. I don't see a status, but it looks like I can run it again, so I guess it's done? Let's click through and see if I can check on its status.
- Nope... this isn't right. I don't want to run a new scan, I want to check on the scan I just ran. I'm confused. Hmm... let's check if there's anything new in the **Vulnerability Report**.
- Perfect! I now see vulnerabilities here, so something definitely happened. All of the vulns are from a DAST scanner, so they must be related to the scan I just ran. Is the scan done, though? I see this page was last updated 6 minutes ago. That must be the scan I just ran. Let's click on the link and see if it's complete.
- Okay, it looks like the scan is done. Let's jump back to the report and dig into the findings.
- This is exactly what I needed. Turns out my app is vulnerable, so I have some more work to do. Time to triage the results 🙌
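
For teams that would rather script the flow above than click through it, the steps map onto GitLab's GraphQL API. The sketch below is a minimal, hypothetical version of the walkthrough: it assumes the documented `dastScannerProfileCreate`, `dastSiteProfileCreate`, and `dastOnDemandScanCreate` mutations (exact ID type names should be verified against your instance's schema), and the instance URL, token, and project path are placeholders.

```python
# Hypothetical sketch of the walkthrough, scripted against GitLab's GraphQL
# API. URL, token, and project path are placeholders.
import requests

GRAPHQL_URL = "https://gitlab.com/api/graphql"
HEADERS = {"Authorization": "Bearer <personal-access-token>"}
PROJECT = "my-group/my-project"

def gql(query: str, variables: dict) -> dict:
    """POST one GraphQL operation and return its `data` payload."""
    resp = requests.post(GRAPHQL_URL,
                         json={"query": query, "variables": variables},
                         headers=HEADERS)
    resp.raise_for_status()
    return resp.json()["data"]

# 1. Scanner profile: passive scan with the AJAX Spider enabled, mirroring
#    the choices made in the walkthrough.
scanner = gql("""
  mutation ($path: ID!, $name: String!) {
    dastScannerProfileCreate(input: {
      fullPath: $path, profileName: $name,
      scanType: PASSIVE, useAjaxSpider: true, showDebugMessages: false
    }) { id errors }
  }""", {"path": PROJECT, "name": "Production passive scan"})["dastScannerProfileCreate"]

# 2. Site profile pointing at the production environment.
site = gql("""
  mutation ($path: ID!, $name: String!, $url: String!) {
    dastSiteProfileCreate(input: {
      fullPath: $path, profileName: $name, targetUrl: $url
    }) { id errors }
  }""", {"path": PROJECT, "name": "Production", "url": "https://example.com"})["dastSiteProfileCreate"]

# 3. Start the on-demand scan. The returned pipeline URL is worth saving:
#    as the walkthrough shows, it's the easiest handle for checking progress.
scan = gql("""
  mutation ($path: ID!, $scanner: DastScannerProfileID!, $site: DastSiteProfileID!) {
    dastOnDemandScanCreate(input: {
      fullPath: $path, dastScannerProfileId: $scanner, dastSiteProfileId: $site
    }) { pipelineUrl errors }
  }""", {"path": PROJECT, "scanner": scanner["id"], "site": site["id"]})["dastOnDemandScanCreate"]

print("Scan pipeline:", scan["pipelineUrl"])
```

Each mutation returns an `errors` array; a production script would check it after every step rather than assuming success.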
🔍 Self-heuristic Evaluation
User High Points
- The **On-demand Scans** configuration page allows users to create site and scanner profiles without having to navigate to the **Manage DAST scans** page. (Flexibility and efficiency of use – Flexible processes can be carried out in different ways, so that people can pick whichever method works for them)
- When a new site or scanner profile is created, it's pre-selected when the user returns to the on-demand DAST scan configuration page.
User Low Points
- When setting up a scanner profile, there's no easy way for a user to figure out why they should turn on the AJAX Spider. The current tooltip does a good job explaining what will happen but falls short of explaining why a user might want to use the AJAX Spider. I'd guess that not all users will know what this feature is for. (Help and documentation – It's best if the system doesn't need any additional explanation. However, it may be necessary to provide documentation to help users understand how to complete their tasks)
- When setting up a site profile, the **Excluded URLs** field's tooltip indicates that the field only applies to authenticated scans. If this is the case, why can I configure this field before I've chosen to enable authentication? If a user didn't read the supporting tooltip, they'd be confused about why the feature didn't work. (Error Prevention – Mistakes are conscious errors based on a mismatch between the user's mental model and the design)
- After adding a site profile, a user is taken back to the on-demand scan configuration page. On that page, they'll find their new profile pre-selected (high point), but it has **(Not Validated)** appended to the end of its name. There is no explanation of what this means or whether anything needs to be done about it. A user may try to edit their profile in hopes of figuring this out, but they'll be met with the same absence of information on the edit screen. (Help and documentation – Whenever possible, present the documentation in context right at the moment that the user requires it.)
- If a user wants to configure a new site or scanner profile in a project that already has existing profiles, they may find themselves stumbling around a bit before figuring out how to complete their task. There doesn't appear to be an obvious signifier for this action on the page. The action is currently hidden within the **Use existing [site/scanner] profile** menu. Considering the menu's label, this doesn't seem like the first place most users would look. (Match between system and the real world – When a design's controls follow real-world conventions and correspond to desired outcomes, it's easier for users to learn and remember how the interface works. This helps to build an experience that feels intuitive.)
- If a user runs an on-demand scan and navigates away to do something else in GitLab, it's not easy for them to come back and check on the status of their scan. (Visibility of system status – When users know the current system status, they learn the outcome of their prior interactions and determine next steps) A polling workaround is sketched after this list.
  - I'd be interested to learn whether users associate on-demand scans with the CI/CD pipeline.
  - There are no signifiers in the **On-demand Scans** section to help users locate their recent scans.
    - I'd guess that most users would navigate to this section to check in on recent on-demand scans (this may be worth testing in the CMS study?)
  - The vulnerability report links to recently completed jobs, but only if they're successful (I believe). How are failed security jobs communicated to users? Anything other than the CI/CD jobs?
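
The status-visibility low point above has a programmatic workaround worth noting while the UX gap exists. The sketch below polls the pipeline created by the on-demand scan via GitLab's REST pipelines endpoint; the project and pipeline IDs are placeholders (the pipeline ID appears in the pipeline URL shown when the scan starts).

```python
# Hypothetical workaround for the status-visibility low point: poll the
# pipeline behind the on-demand scan via GitLab's REST API.
import time
import requests

API = "https://gitlab.com/api/v4"
HEADERS = {"PRIVATE-TOKEN": "<personal-access-token>"}
PROJECT_ID = 42      # placeholder project ID
PIPELINE_ID = 12345  # placeholder, taken from the scan's pipeline URL

def scan_status() -> str:
    """Return the pipeline status, e.g. 'running', 'success', or 'failed'."""
    resp = requests.get(f"{API}/projects/{PROJECT_ID}/pipelines/{PIPELINE_ID}",
                        headers=HEADERS)
    resp.raise_for_status()
    return resp.json()["status"]

# Mirror the walkthrough's "check back later" step without re-hunting for
# the scan in the UI.
while (status := scan_status()) in ("created", "pending", "running"):
    print("DAST scan still", status)
    time.sleep(60)
print("DAST scan finished:", status)  # failed scans surface here too
```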
👉 Recommendations
High-level recommendations from this scorecard study are listed below. Since most of them are already in motion (or have been discussed), I've opted to share them here and link to the relevant issues rather than creating a separate recommendations issue.
- AJAX Spider clarification (clarify why users would want to enable it for their DAST scans)
  - Not sure if this is actually an issue for users yet. Upon reviewing the results from the On-Demand DAST Solution Validation initiative done in Aug 2020, study participants didn't seem to get hung up on this
- Excluded URLs → verify whether it applies to authenticated scans only; if so, relocate the field
  - I've found existing designs that fix this issue. I'll need to touch base with @annabeldunstone to confirm whether this design is old or upcoming. View design here
- Not Validated site profile (in the creation process); a sketch of what validation checks appears after this list
  - Open issue for validation (this will solve the issue in a different way): gitlab#329124 (closed)
  - Long comment thread on validation can be found here: gitlab#255338 (closed)
- Creating a new profile when profiles already exist → no clear signifier for the creation process
- Difficult to locate on-demand scans and check their progress unless you keep the pipeline/job pages open
  - Camellia already has an issue open for this: gitlab#330771
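
For context on the Not Validated recommendation: site validation exists to prove that the user controls the target site before active scans run against it. The sketch below is an illustration only, under stated assumptions: GitLab performs the actual check server-side, and the token value and text-file path here are placeholders standing in for the values shown in the site profile UI.

```python
# Illustrative only: a rough approximation of what DAST site validation
# verifies. GitLab performs this check itself; the token and file path are
# placeholders.
import requests

TARGET_SITE = "https://example.com"
VALIDATION_PATH = "/gitlab-dast-validation.txt"  # placeholder path
EXPECTED_TOKEN = "<token-from-site-profile>"     # placeholder token

resp = requests.get(TARGET_SITE + VALIDATION_PATH, timeout=10)
if resp.ok and EXPECTED_TOKEN in resp.text:
    print("Token found: GitLab would mark this site profile as validated.")
else:
    print("Token missing: the profile keeps its (Not Validated) suffix.")
```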
UX Scorecard Checklist
Learn more about UX Scorecards
- Add this issue to the stage group epic for the corresponding quarter's UX scorecards. Verify that the "UX scorecard" label is applied.
- After working with your PM to identify a top job, write it using the Job to Be Done (JTBD) format: When [situation], I want to [motivation], so I can [expected outcome]. Review with your manager to ensure your JTBD is written at the appropriate level. Remember, a JTBD is not a user story and should not lead directly to a solution.
- Make note of which personas might be performing the job, and link to them from this issue's description. Keeping personas in mind allows us to make the best decisions to address specific problems and pain points. Note: Do not include a persona in your JTBD format, as multiple types of users may complete the same job.
- If your JTBD spans more than one stage group, that's great! Review your JTBD with a designer from that stage group for accuracy.
- Review the current experience, noting where you expect a user's high and low points to be based on NN/g's 10 Usability Heuristics for User Interface Design. Capture the screens and jot down observations.
  - If you're re-scoring the experience, recapture the entire flow. You will likely have some artifacts (i.e., a UI screen that wasn't changed) that you can simply reuse.
- Have internal or external users accomplish the JTBD. Record this session and document their experience. Note that 3-5 users are preferred, as this provides valuable insights and removes subjectivity. Make sure to avoid setting up a task-based usability study. The goal is to provide the participant context (the JTBD) and listen and watch how they attempt to complete the job. What we learn may differ from participant to participant.
  - If it's not possible to have internal or external users go through the experience, a self-heuristic evaluation can be done instead.
- Use the Grading Rubric to provide an overall measurement that becomes the Benchmark score for the experience (one grade per JTBD), and add it to this issue's description. Document the score in the UX Scorecard Spreadsheet.
- Once testing is complete, create a walkthrough video that documents what you experienced/witnessed within the existing experience. Begin the video with a contextual introduction including: your role, stage group, how you acquired the data (e.g., internal or external users, or a self-heuristic evaluation), and a short introduction to your JTBD and the purpose of the UX scorecard. This is not a "how to" video; instead, it should help build empathy for users by clearly showing areas of potential frustration and confusion. (You can point out where the experience is positive, too.) At the end of the video, make sure to include narration of the Benchmark Score. Examples here and here.
  - If you're re-scoring the experience, walk through the entire flow again. For narration, you can highlight the recent improvements but still call out any areas that could use some tweaking (in the next round of iterations, if applicable). The re-score video, in theory, should be shorter, since we've hopefully eliminated a few bumps in the user flow.
- Post your video to the GitLab Unfiltered YouTube channel, and link to it from this issue's description.
- Link to your video in the Engineering Week in Review.
- Create a recommendation issue for this JTBD and add it to the same stage group epic as this issue. Also add a link to your recommendation issue to this issue.
- Following the UX Scorecards setup instructions, create an issue (and epic, if needed) to rescore the same JTBD the following quarter to see if we have made improvements. We will use the grades to monitor progress toward improving the overall quality of our user experience. Add that issue as related to this issue.