Scanned URLs are not printed to the console when the DAST runner times out
### Problem to solve

On completion of a DAST job, all the URLs spidered by ZAP are printed to the job's console. However, if the DAST job runs longer than the runner timeout limit, these URLs are never printed. To help the user understand what DAST scanned in the event of a timeout, the spidered URLs should be output to the console before the active scan begins.

This issue came from [#113](https://gitlab.com/gitlab-org/security-products/dast/-/merge_requests/113#note_298692647).

### Intended users

* [Delaney (Development Team Lead)](https://about.gitlab.com/handbook/marketing/product-marketing/roles-personas/#delaney-development-team-lead)
* [Sasha (Software Developer)](https://about.gitlab.com/handbook/marketing/product-marketing/roles-personas/#sasha-software-developer)
* [Devon (DevOps Engineer)](https://about.gitlab.com/handbook/marketing/product-marketing/roles-personas/#devon-devops-engineer)

### Further details

ZAP's Python scan scripts provide `wrap` hooks, which are called at the end of the corresponding methods in ZAProxy's `zap_common.py`. The hooks of interest, I think, are `zap_spider_wrap` and `zap_ajax_spider_wrap`. In ZAProxy's baseline/full-scan code, the traditional spider is triggered first, and the AJAX spider is then triggered if enabled. We want all of the spidered URLs, so in theory we would have to hook into the appropriate wrap hook depending on whether or not AJAX scans are enabled.

API scans are also something to consider, because I believe we should be printing out URLs that are found in the API specification. I don't believe there is a hook for this, so in future we will have to add one and hope ZAProxy is willing to merge it.
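To make this concrete, here is a minimal sketch of what such a hook file might look like, loaded through the scan scripts' `--hook` option. The file name `dast_hooks.py` is illustrative, and the signatures assume each `*_wrap` hook receives the same arguments as the `zap_common.py` function it wraps:

```python
# dast_hooks.py - illustrative hook file for ZAP's baseline/full-scan
# scripts (loaded via the scripts' --hook option). The function names
# match the wrap hooks triggered at the end of the corresponding
# zap_common.py methods; the signatures below assume each wrap hook
# receives the same arguments as the function it wraps.

def _print_spidered_urls(zap):
    # zap is the ZAPv2 API client; core.urls() lists every URL ZAP has
    # recorded so far, which at this point is the spider output.
    for url in zap.core.urls():
        print('Spidered URL: {}'.format(url))


def zap_spider_wrap(zap, target):
    # Fires after the traditional spider completes and before the active
    # scan starts, so the URLs reach the console even if the runner
    # later times out.
    _print_spidered_urls(zap)


def zap_ajax_spider_wrap(zap, target, max_time):
    # Fires after the AJAX spider completes (only when AJAX spidering is
    # enabled); printed again here to capture any extra URLs the AJAX
    # spider discovered.
    _print_spidered_urls(zap)
```

Note that `core.urls()` is cumulative, so when both spiders run the AJAX wrap would re-list the URLs already printed by the traditional spider's wrap; deduplicating, or hooking only the last spider to run, would avoid the noise.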
### Proposal

<!-- How are we going to solve the problem? Try to include the user journey! https://about.gitlab.com/handbook/journeys/#user-journey -->

### Permissions and Security

<!-- What permissions are required to perform the described actions? Are they consistent with the existing permissions as documented for users, groups, and projects as appropriate? Is the proposed behavior consistent between the UI, API, and other access methods (e.g. email replies)? -->

### Documentation

<!-- See the Feature Change Documentation Workflow https://docs.gitlab.com/ee/development/documentation/feature-change-workflow.html

Add all known Documentation Requirements here, per https://docs.gitlab.com/ee/development/documentation/feature-change-workflow.html#documentation-requirements

If this feature requires changing permissions, this document https://docs.gitlab.com/ee/user/permissions.html must be updated accordingly. -->

### Availability & Testing

<!-- This section needs to be retained and filled in during the workflow planning breakdown phase of this feature proposal, if not earlier.

What risks does this change pose to our availability? How might it affect the quality of the product? What additional test coverage or changes to tests will be needed? Will it require cross-browser testing?

Please list the test areas (unit, integration and end-to-end) that need to be added or updated to ensure that this feature will work as intended. Please use the list below as guidance.

* Unit test changes
* Integration test changes
* End-to-end test changes

See the test engineering planning process and reach out to your counterpart Software Engineer in Test for assistance: https://about.gitlab.com/handbook/engineering/quality/test-engineering/#test-planning -->

### What does success look like, and how can we measure that?

<!-- Define both the success metrics and acceptance criteria. Note that success metrics indicate the desired business outcomes, while acceptance criteria indicate when the solution is working correctly. If there is no way to measure success, link to an issue that will implement a way to measure this. -->

### What is the type of buyer?

<!-- Which leads to: in which enterprise tier should this feature go? See https://about.gitlab.com/handbook/product/pricing/#four-tiers -->

### Links / references