Engineering Discovery: Define performance metrics for Security Products
Problem to solve
The Security Products currently run blind: we have no way to validate the performance of our tools or to catch performance regressions.
Intended users
Members of the devops::secure stage
Proposal
In order to validate the performance of our tools and avoid regression, we first need to define which metrics we want to track. Implementation and integration within QA are out of scope and will be delegated to other issues of the epic.
Conclusion
- Scan completion time on a specific project (assuming we have one project per language/scanner) => I understand this as the execution time of the analyzer command itself, after CI has downloaded the image and launched the Docker container. Once we extract that value, we can add the check to our QA pipeline. We could also try to capture the whole job execution time, but that will probably be impossible to integrate into our QA; it can likely be monitored through a different channel.
- Size of artifacts => This can be integrated into our QA so that an update cannot drastically inflate the size of the report.
- Size of the Docker images we release => I'm adding this one because it represents both a cost (data quotas) and a performance impact (download time).
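The three build-side metrics above could be asserted in a QA job once the raw values are extracted. A minimal sketch of what such a check might look like (the thresholds, function names, and the idea of a single budget-checking helper are illustrative assumptions, not an agreed design):

```python
import subprocess
import time


def timed_run(command):
    """Run the analyzer command and return (exit_code, seconds).

    Timing starts after image download and container start-up, matching
    the "scan completion time" metric defined above.
    """
    start = time.monotonic()
    result = subprocess.run(command)
    return result.returncode, time.monotonic() - start


def check_budgets(scan_seconds, report_bytes, image_bytes,
                  max_seconds=300.0,
                  max_report=10 * 1024 ** 2,   # 10 MiB, placeholder
                  max_image=1 * 1024 ** 3):    # 1 GiB, placeholder
    """Return a list of human-readable budget violations (empty = pass)."""
    failures = []
    if scan_seconds > max_seconds:
        failures.append(f"scan took {scan_seconds:.0f}s > {max_seconds:.0f}s")
    if report_bytes > max_report:
        failures.append(f"report is {report_bytes} bytes > {max_report}")
    if image_bytes > max_image:
        failures.append(f"image is {image_bytes} bytes > {max_image}")
    return failures
```

A QA job would then fail whenever `check_budgets(...)` returns a non-empty list, which keeps all three thresholds in one place.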
Sub-Epic 2: Rails side for group::composition analysis &2335
- Load time of a Dependency List
- Load times of Security Reports in pipeline view
- Load times of Security Reports in MR view
- Also test MR page load time with large security reports
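For the Rails-side load-time metrics, a crude smoke check could sample an endpoint a few times and compare the median response time against a budget. This only measures server response time, not full browser rendering; the URL and sample count are placeholders, a sketch rather than a proposal:

```python
import time
import urllib.request


def median(values):
    """Middle element of the sorted sample (upper median for even
    lengths); robust enough for a smoke check."""
    ordered = sorted(values)
    return ordered[len(ordered) // 2]


def measure_load_time(url, samples=3):
    """Fetch `url` several times and return the median wall-clock
    seconds per request, to smooth over one-off network spikes."""
    timings = []
    for _ in range(samples):
        start = time.monotonic()
        with urllib.request.urlopen(url) as resp:
            resp.read()
        timings.append(time.monotonic() - start)
    return median(timings)
```

A budget assertion such as `assert measure_load_time(url) < 2.0` could then run against a seeded project with a large dependency list or security report; full rendering time would need a browser-based check instead.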
Sub-Epic 3: Rails side for group::dynamic analysis &2336 (closed)
- Load time of Group Security Dashboard
- Load time of Project Security Dashboard
- Also test the impact of filters
Sub-Epic 4: Rails side for group::static analysis &2337 (closed)
- Vulnerability Feedback creation/load time (dismissal, issues, MR)
- First-class vulnerability load time (when we have them)
Links / references
Edited by Nicole Schwartz