💡 Problem Validation Research: DAST User Research
What's this issue all about? (Background and context)
GitLab's DAST functionality is currently at "Viable" maturity. We know it is not fully meeting customer needs, especially where false positives are concerned. Many DAST customers are focused on the reliability of results, since that is the most important quality of any AST tool. Because of that focus, we have yet to decide which features must be in place before we can call our DAST tool "complete". We have assumptions and hypotheses about what is necessary, but they need to be validated with our customers.
What are the overarching goals for the research?
- To learn what users expect from a DAST tool.
- Uncover Jobs to be Done (JTBD) that represent what people aim to accomplish with DAST.
What hypotheses and/or assumptions do you have?
What are their expectations from a DAST tool?
- My assumption is that we will hear that users want a tool that:
  - is reliable, with no unexpected crashes or exits mid-test
  - produces a low number of false positives
  - scans for a high number of vulnerabilities and vulnerability types, including the OWASP Top 10
  - is easy to configure
  - has easy-to-consume results, through both the UI and an API
  - can run scans whenever they are needed, not just when new code is merged
  - provides a wide range of configuration options (I believe the top ones will be authentication options and header and cookie configuration; see the sketch below)
  - gives enough detailed information to remediate the issue
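If these assumptions hold, much of that configurability already maps onto CI/CD variables. A minimal sketch, assuming GitLab's documented DAST variables (names and availability vary by GitLab version); the URLs, username, and form field names below are placeholders:

```yaml
# Minimal sketch of a configurable DAST job in .gitlab-ci.yml.
# Variable names follow GitLab's documented DAST variables; values are
# placeholders and availability varies by GitLab version.
include:
  - template: DAST.gitlab-ci.yml

variables:
  DAST_WEBSITE: "https://staging.example.com"        # URL to scan
  DAST_AUTH_URL: "https://staging.example.com/login" # login page for authenticated scans
  DAST_USERNAME: "scanner-user"
  DAST_USERNAME_FIELD: "session[username]"           # login form field names
  DAST_PASSWORD_FIELD: "session[password]"
  # DAST_PASSWORD should come from a masked CI/CD variable, not this file.
  DAST_REQUEST_HEADERS: "Cache-control: no-cache"    # extra headers on every request
```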
What is their DAST workflow, leading up to and including remediation?
- I believe that for many DAST users, the workflow would be:
- Configure the tool for the app by providing the URL to scan, authentication information, and any header and cookie customization the app needs
- Start a scan or schedule recurring scans
- Look at the vulnerability report to find the top, most critical items
- Send those items as a bug or other ticket to the engineering group responsible for the affected part of the app, attaching all relevant information (configuration, logs, and detailed reports); a hand-off sketch follows this list
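To make the hand-off step concrete in interviews, here is a minimal sketch of filing a finding as a GitLab issue through the documented Issues API (`POST /projects/:id/issues`). The instance URL, project ID, token, and the shape of the `finding` dict are placeholders, not real payloads:

```python
import requests

GITLAB_URL = "https://gitlab.example.com"   # placeholder instance URL
PROJECT_ID = 42                             # placeholder project ID
HEADERS = {"PRIVATE-TOKEN": "<api-token>"}  # personal access token (placeholder)

# A triaged finding, in an illustrative shape (not a real API payload).
finding = {
    "name": "Reflected XSS in /search",
    "severity": "high",
    "solution": "Encode user input before rendering.",
}

# Hand it to the responsible engineering group via the documented Issues API.
resp = requests.post(
    f"{GITLAB_URL}/api/v4/projects/{PROJECT_ID}/issues",
    headers=HEADERS,
    data={
        "title": f"[DAST] {finding['name']} ({finding['severity']})",
        "description": (
            f"Suggested remediation: {finding['solution']}\n\n"
            "Scan configuration, logs, and the detailed report to be attached."
        ),
        "labels": "security,DAST",
    },
)
resp.raise_for_status()
print("Created issue:", resp.json()["web_url"])
```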
How do they consume results (API? UI? Both?) and what do they do with them?
- I think the answer will depend on whether the team using the tool integrates it with other applications, such as ticketing systems. I expect all users to want detailed reporting in the UI, but I am not sure how many will say the API matters more. Using the reports in the UI, they will present findings about their risk profile to their managers and executives. Using the more detailed data, in the UI or via the API, they will report back to the engineering teams about the vulnerabilities and how to fix them. A sketch of API-side consumption follows.
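As a minimal sketch of that API-side consumption, assuming the documented Vulnerability Findings endpoint (`GET /projects/:id/vulnerability_findings`, Ultimate tier); exact parameters and response fields vary by GitLab version:

```python
import requests

GITLAB_URL = "https://gitlab.example.com"   # placeholder instance URL
PROJECT_ID = 42                             # placeholder project ID
HEADERS = {"PRIVATE-TOKEN": "<api-token>"}  # personal access token (placeholder)

# Pull DAST findings via the documented Vulnerability Findings endpoint;
# parameter and field names vary by GitLab version, so treat as illustrative.
resp = requests.get(
    f"{GITLAB_URL}/api/v4/projects/{PROJECT_ID}/vulnerability_findings",
    headers=HEADERS,
    params={"report_type[]": "dast", "per_page": 100},
)
resp.raise_for_status()

# Surface the most critical items first: the triage step in the workflow above.
order = {"critical": 0, "high": 1, "medium": 2, "low": 3}
for f in sorted(resp.json(), key=lambda f: order.get(f.get("severity"), 9)):
    print(f.get("severity"), "-", f.get("name"))
```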
What do users value and find useful about their DAST tool?
- Detailed and accurate reports
- Ease of use for configuration and running scans
- Remediation suggestions (when known)
What are users’ pain points with regard to using their DAST tool?
- False positives
- Difficulty of initial configuration
- Lack of full application coverage or discovery
How (and whether) does DAST interact with their CI/CD pipeline? If it doesn't, why not, and would they want it to?
- My assumption is that most users of traditional DAST tools do not currently have DAST integrated with their CI/CD pipelines. I think the length of DAST scans, and the separation of responsibilities that typically pushes DAST testing further right in the SDLC, mean that DAST is usually a security validation task performed after development is finished. A sketch of one possible integration follows.
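If that assumption holds, one integration path worth probing with participants is gating DAST to scheduled (for example, nightly) pipelines rather than running it on every merge request. A hedged sketch using GitLab's documented DAST template; the job-gating syntax has changed across GitLab versions, so treat the override as illustrative:

```yaml
# Sketch: run the DAST job only on scheduled (e.g. nightly) pipelines,
# keeping long scans out of the merge request feedback loop.
include:
  - template: DAST.gitlab-ci.yml

dast:
  only:
    - schedules          # run only when triggered by a pipeline schedule
  variables:
    DAST_WEBSITE: "https://staging.example.com"   # placeholder target
```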
What research questions are you trying to answer?
- What are their expectations from a DAST tool? (Not necessarily out of GitLab’s, but in general. From this we could deduce what major areas they would expect us to cover)
- What is their DAST workflow, leading up to and including remediation? (From this we could map out user stories, as well as get further understanding on what their expectations are)
- How do they consume results (API? UI? Both?) and what do they do with them?
- What do users value and find useful about their DAST tool?
- What are users’ pain points with regard to using their DAST tool?
- How (and whether) does DAST interact with their CI/CD pipeline? If it doesn't, why not, and would they want it to?
What persona, persona segment, or customer type experiences the problem most acutely?
Initially, we believe that Sam and Simone experience the lack of maturity in our DAST tool most acutely.
However, as AST testing becomes more available and we are able to shift the paradigm left, we believe that Sasha and Devon will also feel the need for a more robust DAST tool in the DevOps lifecycle.
What business decisions will be made based on this information?
This research will support decision making around which features should be included in GitLab’s DAST offering for it to be considered “complete” maturity.
We expect the insights from this research to drive decisions about whether and how to deliver:
- Greater than 50% reduction in false positives
- Easier configuration of DAST scans and the possible options
- Scriptable or recordable tests for specific workflows
- The ability to run a “one-off” scan, not related to a merge request
- User agent customization
- Header customization (see the sketch after this list)
- Cookie customization (including authentication)
- SSO authentication
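For the header, cookie, and user-agent items above, a hedged illustration of how they might surface as configuration. `DAST_REQUEST_HEADERS` is a documented variable taking comma-separated `Header: value` pairs; whether cookies and the user agent should ride on it or get dedicated options is exactly the kind of expectation this research should validate:

```yaml
# Illustrative only: header, cookie, and user-agent customization expressed
# through request headers. Dedicated variables for each may be preferable;
# that is an open question for the research.
dast:
  variables:
    DAST_REQUEST_HEADERS: "User-Agent: security-scanner/1.0,Cookie: ab_test=off"
```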
What, if any, relevant prior research already exists?
None that I am aware of.
What timescales do you have in mind for the research?
Since DAST is scheduled to reach "complete" maturity by the end of this year, I would like for a large portion of the research to be done in the next two months.
Who will be leading the research?
Relevant links (opportunity canvas, discussion guide, notes, etc.)
PM: complete the research brief.
UXR: review research brief and provide feedback.
PM: draft a discussion guide. (for this study, UXR to draft and PM to refine)
UXR: review discussion guide and provide feedback.
UXR: draft a screener (this is done simultaneously with drafting a discussion guide).
PM: review the screener.
UXR: open a recruiting request, and assist with scheduling participants.
PM: moderate interviews. (UXR can also assist with moderating)
UXR: update the recruiting request, to ask the Research Coordinator to reimburse participants.
PM + UXR: synthesize the data collaboratively.
UXR: document the findings in the UXR_Insights project.
UXR: link the epic containing all research insights to this issue, and, if applicable, unmark this issue as confidential before closing it.
PM (optional): hold a debrief session with PD and other interested stakeholders.