
UX Audit: Synthesize results of scan configuration audit

Summary

As part of Secure: Scan Configuration Evaluation, we are performing an audit of the states and UI patterns for every security scanner with a configuration interface. The goal of the audit is to identify patterns and inconsistencies and to create recommendations focused on improving consistency and learnability across the scanners. This issue is for tracking and documenting findings after synthesizing results from the audit.

The parent issue that documents all scanners being audited is linked below:
👉 #340334 (closed)

Plan

  • Evaluate and compare workflow documents for each scanner's configuration process
  • Evaluate and compare screenshots for each scanner's configuration process
  • Generate and document findings/recommendations in an easy-to-digest way

Findings

📍 Learnability: Enabling and configuring a security tool in GitLab can be challenging

The workflows for enabling security scanners in GitLab are not consistent (Ultimate users)

3 of 8 scanners can be enabled via an auto-generated MR.

  • 1 of those 3 (SAST) provides configuration options within the UI prior to generating the MR.

5 of 8 scanners must be enabled by editing the CI file.

  • 2 of those 5 (DAST & API Fuzzing) provide configuration options within the UI and assist users by generating a code snippet based on their chosen configuration. The code snippet must then be manually added to the CI file to enable scanning (a sketch of this kind of snippet follows this list).
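
For reference, the snippet these forms generate boils down to a template `include` plus CI/CD variables. A minimal sketch, assuming the DAST template; the target URL and variable values are placeholders, not output captured during the audit:

```yaml
# .gitlab-ci.yml (sketch): what a UI-generated DAST snippet amounts to.
# The user must paste something like this into the CI file by hand.
include:
  - template: Security/DAST.gitlab-ci.yml      # pulls in the dast job definition

variables:
  DAST_WEBSITE: "https://staging.example.com"  # placeholder scan target
  DAST_FULL_SCAN_ENABLED: "false"              # passive scan only (placeholder choice)
```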

Considering the previous 2 points, it's possible for a user to configure 4 security scanners and encounter a different experience every single time. The image below illustrates a simplified chart of the 4 possible workflows:

(Image: secure-scannerConfig-workflows, a simplified chart of the 4 scanner configuration workflows)

The way that security testing works in GitLab isn't explained in the UI, so users have to make assumptions (expectation setting)

In general, the instructions and descriptions within the security configuration area could be improved to provide more useful and relevant information. The UI copy mostly focuses on the "what" but rarely touches the "how" or the "why". Incorporating this information into the UI could help manage users' expectations about how security testing works in GitLab.

  1. The fact that security scans run as pipeline jobs is not explicitly mentioned anywhere in the UI. Not knowing about this association could lead to confusion when configuring or running a scan. (A sketch of what this association looks like in practice follows this list.)
  2. It's not mentioned in the UI that enabling a security scan requires adding it to the project's CI/CD pipeline.
  3. The descriptions of the various scan types are all very similar. Users who are unfamiliar with application security testing tools may have a difficult time trying to decipher the difference between them.
  4. The main security configuration page, for example, makes a lot of assumptions about a user's understanding of security scanning and of GitLab as a whole.

(Image: config-notes, an annotated screenshot of the main security configuration page)
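
To make the pipeline association concrete, here is a rough sketch of the kind of job a security template adds to the pipeline once included (heavily simplified; the real SAST template defines several analyzer jobs, and the image name here is a placeholder):

```yaml
# Sketch: an included security template expands to ordinary pipeline jobs.
sast:
  stage: test                                 # runs alongside the project's other test jobs
  image: registry.example.com/analyzer:latest # placeholder analyzer image
  script:
    - /analyzer run                           # the scan itself is just the job's script
  artifacts:
    reports:
      sast: gl-sast-report.json               # report artifact that feeds the vulnerability UI
```
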
When enabling a scanner, many of the UI configuration options provide little explanation or guidance
  1. SAST does a decent job explaining config options but could use a little refinement. Some items seem overly technical and others are a little vague
  2. DAST has a few issues:
    1. No explanation of what "profiles" are when none exist
    2. When profiles exist, the guidance given to users is a little confusing
    3. When creating a site or scanner profile, many explanations are relatively vague or missing altogether.
  3. API Fuzzing has a couple of fields that have little-to-no explanation
  4. On-demand DAST has the same issues as CI/CD DAST and lacks descriptions for most on-demand-specific configuration options.

📍 Consistency: The workflow to enable a security scanner in GitLab differs from tool to tool

Only 3 scanners provide configuration options within the UI, and they're all a little different
  1. The SAST & API Fuzzing configuration forms are very similar in layout and feel but require different workflows to actually enable the scanner
  2. DAST and API Fuzzing follow the same workflow to be enabled but have very different configuration forms.
  3. The DAST and SAST configuration processes are very different from one another.
Field validation and other error handling are minimal and inconsistent
  • The 3 scanners with UI configuration options (SAST, DAST, API Fuzzing) all handle field errors differently. Most fields have no validation.
DAST offers additional functionality beyond basic CI/CD configuration; it is currently the only scanner that does
  1. Create and manage site and scanner profiles
  2. On-demand scans
  3. Saved scans
The dynamic analysis tools share a couple of key concepts
  1. Both API fuzzing and DAST have the concept of scan profiles
    1. Scan profiles contain configuration details that define how a scanner should test its target. Profiles do not contain information about the scan target itself.
    2. The content of a scan profile is dependent on the type of scanner it is for.
    3. Currently, only DAST allows user-defined scan profiles. API Fuzzing offers pre-defined profiles today but is slated to support user-defined profiles in the future. (A sketch of how DAST profiles are referenced from the CI file follows this list.)
  2. DAST site profiles that are defined as a REST API don't allow for authentication, while API Fuzzing does. Is there a reason why one allows authentication and the other does not?
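
For context on how profiles meet the CI side of the workflow: DAST site and scanner profiles can be referenced by name from the CI file. A minimal sketch, assuming the DAST template; the profile names are hypothetical:

```yaml
include:
  - template: Security/DAST.gitlab-ci.yml

dast:
  dast_configuration:
    site_profile: "Staging site"     # hypothetical site profile name
    scanner_profile: "Quick passive" # hypothetical scanner profile name
```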

📍 JTBD evaluation: Of the 3 configuration-related JTBD, none are addressed by every security tool

JTBD 1 (3/6): When I am configuring a CI/CD security scan, I want to specify which assets need to be scanned and under which circumstances, So that I can ensure my assets are secure prior to or at their release.
  1. 3 / 6 scanners evaluated satisfy this job (see the sketch after this list)
    1. SAST
    2. DAST CI/CD
    3. DAST on-demand
  2. 3 / 6 scanners evaluated do NOT satisfy this job
    1. API Fuzzing
    2. Dependency scanning
    3. Secret detection
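
Where this job is satisfied today, it happens largely through CI variables and rules rather than dedicated UI. A sketch of scoping assets and circumstances, assuming the SAST and DAST templates; the paths and branch name are placeholders:

```yaml
include:
  - template: Security/SAST.gitlab-ci.yml
  - template: Security/DAST.gitlab-ci.yml

variables:
  SAST_EXCLUDED_PATHS: "spec, test, tmp"  # which assets to leave out of the scan

dast:
  rules:
    - if: '$CI_COMMIT_BRANCH == "main"'   # under which circumstances the scan runs
```
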
JTBD 2 (1/6): When I am configuring a security scan, I want to specify which types of vulnerabilities the scan should detect, So that we don't waste time sorting through irrelevant findings.
  1. 1 / 6 scanners evaluated satisfies this job (see the sketch after this list)
    1. SAST
  2. 5 / 6 scanners evaluated do NOT satisfy this job
    1. DAST CI/CD
    2. DAST on-demand
    3. API Fuzzing
    4. Dependency scanning
    5. Secret detection
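
For SAST, the one scanner that satisfies this job, detection can be narrowed by excluding analyzers; the UI's analyzer toggles roughly map to a CI variable. A sketch assuming the SAST template; the analyzer names are examples:

```yaml
include:
  - template: Security/SAST.gitlab-ci.yml

variables:
  SAST_EXCLUDED_ANALYZERS: "eslint, bandit"  # skip these analyzers and their finding types
```
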
JTBD 3 (0/6): When I am either enabling or configuring a security scan, I want to run a demo scan, So that I can validate my configuration before it is implemented.
  1. 0 / 6 scanners evaluated satisfy this job
  2. 6 / 6 scanners evaluated do NOT satisfy this job
    1. SAST
    2. DAST CI/CD
    3. DAST on-demand
    4. API Fuzzing
    5. Dependency scanning
    6. Secret detection

📍 Other Insights

  1. The end of the configuration process (merging code to enable the feature) doesn't do a great job of assuring users that the scanner is properly enabled; the change in system status is not easily discoverable.
  2. Not all configuration options are exposed in the UI
  3. 🤔 While we ultimately want users to adopt application security by integrating security tests into their CI/CD pipelines, we've seen research that suggests this may be too big of an ask for some teams.
    1. 💡 Would the concept of on-demand "security pipelines" be a more approachable introduction for those teams?