Guidance on evaluating and comparing results from 3rd party security scanners with GitLab tools
Periodically, the GitLab Support team receives tickets from customers who are evaluating GitLab's security scanners (SAST, DAST, etc.) by comparing results. As part of that evaluation, they open tickets asking us to explain differences between the results of 3rd party scanners and our scanners when both are run against the customer's code (or against something intentionally vulnerable, like Juice Shop).
We need to set expectations with customers about which kinds of evaluations are feasible and which are not. We also need to make sure the GitLab Support team has what it needs to determine which scan results warrant investigation, and what questions to ask in order to assemble a well-written RFH. Some of these requests result in
The solution for this issue would likely include:
- an update to the handbook
  - including but not limited to the Support workflows
- an update to the docs
- one or more Zendesk macros