Competitive Analysis: Severity and confidence scoring for vulnerabilities

Background:

Today we use a scoring system that surfaces a Confidence value, which can confuse users. We should conduct a competitive analysis to determine industry best practice and align our scoring and labels with it.

Scope:

This is to determine labeling practices in the UI, but the findings could also affect our reports.

Process:

  • Generate a list of competitors to analyze
  • Identify how they are scoring their vulnerabilities in the UI:
    • Labeling and taxonomy
    • Color coding and symbolism
  • Aggregate a report summary
  • Communicate summary to the ~Secure team
  • Create discovery issue to ideate on solutions from the findings and recommendations
  • Create issue to update documentation with new labels and definition (if necessary)

Summary:

All competitors analyzed mapped their severity ratings either directly to CVSS or used their own adjusted severity scale based on CVSS scoring. None of these competitors surfaced confidence as a first-level UI element (alongside severity, vulnerability title, etc.). Why is this? My assumption is that the CVSS rating is composed of several factors (link here), including confidence. When you map to CVSS, it is not necessary to surface confidence separately, because other factors also need to be taken into account when confirming a vulnerability or dismissing it (false positive, ignore).

Findings Overview:

  • Of the 11 competitors included in the analysis, none surface confidence.
  • The study identified two ways to map to severity: competitors either mapped directly to CVSS or used an adjusted severity scale based on it.
  • UI treatments of severity differed slightly but stayed within a common framework:

    Critical          High            Medium            Low            Informational
    Dark red or red   Red or orange   Orange to yellow  Green to blue  Green to blue
  • Severity vernacular differed slightly from competitor to competitor; the most common labels were High, Medium, Low, and Info.

I am hesitant to adopt an adjusted scale for now, since the first thing we need to do is map to CVSS.

  • Almost all competitors have accessibility issues with their severity labeling conventions (e.g., color as the only indicator of severity).

Recommendations:

  1. Map to CVSS scoring using a direct method. Direct mapping to CVSS will put us on the right path initially, until we deem a departure or adjustment necessary. SourceClear does the best job of this, both in its labeling and its UI treatment.
     Labels: Critical, High, Medium, Low, Info.

     Scoring                                    Critical  High     Medium   Low      Info
     CVSS available                             9.0-10.0  7.0-8.9  4.0-6.9  0.1-3.9  0
     No CVSS, but severity available from OSS   Critical  High     Medium   Low      Unknown, Informational, Experimental
  2. Conduct a brief discovery issue to address the design and color coding of severity labels.
  3. Remove confidence from the list views and surface it in the "more info" drawer for the time being.
  4. Research: once we begin mapping to CVSS, we can do some iterative research to identify what other risk metrics we should include in the list view.
  5. Update our documentation to describe our severity ratings and how they are calculated.
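The direct CVSS-to-label mapping from recommendation 1 can be sketched as a small function. This is a minimal illustration, not an existing implementation; the function name and the handling of a missing score are assumptions for the sketch:

```python
from typing import Optional

def severity_from_cvss(score: Optional[float]) -> str:
    """Map a CVSS base score to a severity label, following the
    direct-mapping table in recommendation 1 (illustrative sketch)."""
    if score is None:
        # No CVSS score available: fall back to any OSS-provided
        # severity upstream, otherwise report Unknown.
        return "Unknown"
    if score == 0:
        return "Info"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"
```

For example, a score of 7.0 falls in the 7.0-8.9 band and would be labeled High.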

📓 Link to the full research report

Edited Aug 15, 2019 by Andy Volpe