Research: Defend: 3rd Party vulnerability list research

  • Researcher: Camellia
  • Milestone: 12.9
  • Note: This will be run as one research session together with issue #681 (closed)

Business Decision:

  • How could we make it easier for users to determine at a glance which scanner a detection came from?
  • How could we help users easily manage potentially multiple scanners making the same detection?

Hypothesis:

  1. MVC: By adding additional info columns (scanner and identifier) to the list, we can help the user determine at a glance which scanner a detection came from and which entries might be duplicate vulnerabilities found by different scanners

  2. Manual grouping with GitLab hints: By providing a manual grouping feature, we can:

    • Make it easier for users to identify duplicate findings
    • Make it easier for users to manage duplicate findings
    • Help GitLab learn how to auto-group duplicate findings
  3. Auto-grouping by GitLab: This might be the ideal solution for users (see the sketch after this list)
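
To make hypothesis 3 concrete, here is a minimal sketch of how auto-grouping might collapse duplicate findings reported by different scanners. The field names, sample values, and the matching rule (same identifier at the same location) are illustrative assumptions, not GitLab's actual data model or grouping logic.

```python
from collections import defaultdict

# Hypothetical findings; the field names and values are assumptions
# for illustration only, not GitLab's actual data model.
findings = [
    {"scanner": "Scanner A", "identifier": "CVE-2019-0001", "location": "app.py:42"},
    {"scanner": "Scanner B", "identifier": "CVE-2019-0001", "location": "app.py:42"},
    {"scanner": "Scanner A", "identifier": "CVE-2019-0002", "location": "db.py:7"},
]

def group_key(finding):
    # Assumed matching rule: same identifier reported at the same location.
    # A real implementation would likely need a fuzzier rule.
    return (finding["identifier"], finding["location"])

groups = defaultdict(list)
for finding in findings:
    groups[group_key(finding)].append(finding)

for (identifier, location), members in groups.items():
    scanners = ", ".join(f["scanner"] for f in members)
    print(f"{identifier} at {location}: reported by {scanners}")
```

Under hypothesis 2, groups like these would only be surfaced as hints for the user to confirm or split; those confirmations are also a plausible signal GitLab could learn from to improve auto-grouping over time.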

Hypothesis metrics:

  • Did the participant find and use the "scanner and identifier" information on the list page?
  • Did the participant notice duplicate results in the vulnerability list?
  • Was the participant able to use the scanner information to help identify the duplicates?
  • Did design solutions 2 and 3 meet the participant's expectations?
  • Was the participant able to identify duplicate findings more easily with solutions 2 and 3?
  • Was the participant able to manage duplicate findings more easily with solutions 2 and 3?

Goals:

  • Find out whether the MVC (adding both identifier and scanner columns) is good enough for users
  • Find out how users use the grouping feature
  • Find out how important/useful the grouping feature is
  • Find out how annoying/important duplicate vulnerabilities are for users

Objectives:

We want to get the following major questions answered:

  • Is solution 1 (MVC) good enough for users?
  • How important are solutions 2 and 3 for users?

Participant profile:

Security Analysts (GitLab persona: Sam, the Security Analyst) who have experience managing vulnerabilities across different projects with multiple scanner types

Research Script

https://docs.google.com/document/d/1IWaTpjtwXpz41X7XqA2G_v1IJIJaTKSz8yvSui4fWJk/edit?usp=sharing

Rainbow Note-taking sheet:

https://docs.google.com/spreadsheets/d/1sBZxAg4nlrmZEAp_NmOdUoKy_v9Sc3TpXMj8A1ZdxQE/edit?usp=sharing

Recruiting issue:

https://gitlab.com/gitlab-org/ux-research/issues/707

Screener:

https://docs.google.com/document/d/1-Zkg2mkivogH2rEHzvYna5GTGNFnFp2GICzgFVIeaRI/edit?usp=sharing

Research results

Highlights
  • It is difficult to notice the same vulnerability reported by different scanners in a normal list view (sorted by severity)
  • After the moderator pointed out the similar entries, the line of code, scanner name, and identifier helped users realise what had happened
  • The grouping feature received a positive response from all testing users.
    • Grouping is a good way to deal with duplicate findings from multiple scanners
    • The interaction is easy to use: the majority noticed the "arrow icon" to expand a group
  • Manual grouping is understood by most people
  • Auto-grouping is preferred by all users compared to manual grouping
    • All users' motivation for using this feature is to save time
    • All users tend to trust the bot after they try it out or understand how it works
    • Some users would like to understand/edit the bot
Raw interview materials: link
UXR insight project: Epic: link

Design decisions and extra ideas

Extra ideas:

  • Detected date could be useful
  • The bot icon is not self-explanatory; maybe a different icon or a different colour…
  • Edit the bot's algorithm via advanced settings
  • Turn auto-grouping on/off for different projects/severity levels, etc. (see the sketch after this list)
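
As a sketch of the last two ideas, hypothetical advanced settings for the grouping bot might look like the following. None of these keys exist in GitLab today; they are assumptions that only illustrate what "edit the bot" and per-project/per-severity toggles could mean.

```python
# Hypothetical advanced settings for the auto-grouping bot.
# Every key here is an assumption for illustration, not a real GitLab setting.
auto_grouping_settings = {
    "enabled": True,
    # Only auto-group findings at or above this severity level.
    "min_severity": "medium",
    # Projects where auto-grouping should stay off.
    "excluded_projects": ["legacy/monolith"],
    # Fields two findings must share before the bot groups them.
    "match_on": ["identifier", "location"],
}
```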

Research task-list

  • (Designer): Create research issue
  • (Designer): Sync plan with PM/Researchers
  • (Designer): Create recruiting issue
  • (Designer): Create Screener
  • (Designer): Sync with Research coordinators
  • (Designer): Create Calendly link and share with Research coordinators
  • (Coordinator) Start recruiting
  • (Designer): Create Script
  • (Researcher) Review Script
  • (Designer): Create rainbow note taking sheet
  • (Designer): Finalize prototyping
  • (Designer): Make sure there is a note taker
  • (Designer): Dry run/pre-research sync meeting
  • (Designer): Conduct one usability testing session. Amend script if necessary.
  • (Designer): Conduct remaining usability testing sessions.
  • (Designer): Open an Incentives request. Assign it to the relevant Research Coordinator.
  • (Coordinator): Pay users.
  • (Designer): Synthesize the data and identify trends, resulting in findings.
  • (Researcher): Review findings and provide feedback, if needed.
  • (Designer): Create issues in the UXR_Insights project documenting the findings.
  • (Researcher): Sense check the documented findings.
  • (Researcher): Update the Solution validation research issue. Link to findings in the UXR_Insights project. Unmark as confidential if applicable. Close issue.