Secure Benchmarking Results for SAST

What’s this issue all about?

This issue documents the results of the FY2023 Q3 Usability Benchmarking Study for Secure (&7918). The benchmark comprised four workflows, each with its own tasks and success criteria, covering four JTBD in the Secure section.

The goal of this issue is for each stakeholder responsible for a page in one of these tasks to ask "How might we improve the usability of this area?" in a way that could fold into our current roadmap, using our UX Themes. Below this description is a thread for discussing each of the tasks participants ran through.

This issue covers the results for the tasks involving the SAST Configuration page, which included the following:

| Task Code | Task Name | Description (what participant saw) | Success Criteria |
| --- | --- | --- | --- |
| scan_1 | Go to Security Configuration page | 1. Find the configurations for the security scanning tools. | - Go to Security Configuration page |
| scan_2 | Enable Spotbugs | 2. Ensure a SAST analyzer is configured to scan any Java that is in our project. | - Click on "Configure SAST"<br>- Click on "Expand"<br>- Check box for Spotbugs |
| scan_3 | Make product secure | 3. Review all other SAST analyzers to make sure our product is being scanned as securely as possible. | - Review rest of page<br>- Enable ESLint<br>- Configure Flawfinder Confidence level = 0 |
| scan_4 | Create & merge MR | 4. Create an MR and assign to Michael Oliver. | - Click on "Create MR"<br>- Create the MR<br>- Assign the MR to Michael Oliver |
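For orientation, the sketch below shows the kind of `.gitlab-ci.yml` change the "Configure SAST" MR in these tasks would propose, assuming the standard SAST CI template and the documented `SAST_EXCLUDED_ANALYZERS` and `SAST_FLAWFINDER_LEVEL` variables; the exact output of the Security Configuration UI may differ.

```yaml
# Hypothetical sketch of the change a SAST configuration MR could propose.
include:
  # Enables the default SAST analyzers, including Spotbugs (Java) and ESLint (JavaScript).
  - template: Security/SAST.gitlab-ci.yml

variables:
  # No analyzers excluded, so Spotbugs and ESLint both run (scan_2 and scan_3).
  SAST_EXCLUDED_ANALYZERS: ""
  # Report Flawfinder findings at every risk level; 0 is the lowest threshold,
  # matching the scan_3 criterion "Flawfinder Confidence level = 0".
  SAST_FLAWFINDER_LEVEL: "0"
```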
High-Level Results

The high-level quantitative results for these tasks are:

| Task Code | Time on Task (sec) | Confidence Interval (± sec) | Severity 1 Errors | Severity 2 Errors | Severity 3 Errors | Severity 4 Errors | Total Errors | Completion (%) | UMUX Score | SUS Score |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| scan_1 | 54.1 | 23.2 | 2 | 5 | 4 | 23 | 34 | 94.4 | 87.0 | 71.4 |
| scan_2 | 96.6 | 29.3 | 4 | 9 | 7 | 14 | 34 | 77.8 | 68.6 | 59.5 |
| scan_3 | 127.6 | 41.6 | 8 | 9 | 4 | 14 | 35 | 38.9 | 77.9 | 65.5 |
| scan_4 | 44.4 | 11.5 | 1 | 1 | 3 | | 5 | 100.0 | 98.0 | 78.6 |
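For readers interpreting the ± column: assuming it is the usual two-sided t-interval on mean time on task, its half-width is

```math
t_{1-\alpha/2,\,n-1}\cdot\frac{s}{\sqrt{n}}
```

where `s` is the sample standard deviation of task times and `n` the number of participants who attempted the task. The completion percentages (94.4 ≈ 17/18, 77.8 = 14/18, 38.9 = 7/18) are consistent with n = 18 per task.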

cc: @jmandell @andyvolpe @connorgilbert @mfangman
