Include identifierName arg in group.vulnerabilitySeveritiesCount GraphQL
## What does this MR do and why?
With !175902 (merged), we want to update the `group.vulnerabilitySeveritiesCount` query to support the same filtering behavior via the `identifierName` argument.
### References
- discussion of database queries: #506480 (comment 2248589911)
- Similar to the Include identifierName arg to vulnerabilySeveri... (!173545 - merged) implementation for project level reports.
- Group level drop down API implementation (!175902 - merged)
## MR acceptance checklist
Please evaluate this MR against the MR acceptance checklist. It helps you analyze changes to reduce risks in quality, performance, reliability, security, and maintainability.
## Screenshots or screen recordings
## How to set up and validate locally
- Create a group, or pick an existing group, and import the project https://gitlab.com/gitlab-org/govern/threat-insights-demos/verification-projects/verify-owasp-top-10-grouping into it locally. After the import, go to Build -> Pipelines and run a new pipeline; this will populate the vulnerability data.
- In the Rails console, enable the feature flag:

  ```ruby
  Feature.enable(:vulnerability_filtering_by_identifier_group, group)
  ```

- In GraphiQL, run the equivalent query below for the imported project:
```graphql
query {
  group(fullPath: "bala-test-group") {
    vulnerabilitySeveritiesCount(state: [DETECTED], capped: true, identifierName: "A1:2017 - Injection") {
      critical
      high
      info
      low
      medium
      unknown
    }
  }
}
```
- It should return only the filtered vulnerability counts, with the identifier name filter applied, as shown in the screenshot above.
## SQL [^1] [^2]
An example of a group query:
```sql
SELECT
    COUNT(*) AS "count_all",
    "vulnerability_reads"."severity" AS "vulnerability_reads_severity"
FROM (
    (
        SELECT
            "vulnerability_reads"."id",
            "vulnerability_reads"."severity"
        FROM
            "vulnerability_reads"
        WHERE
            "vulnerability_reads"."traversal_ids" >= '{11690959,13186817}'
            AND "vulnerability_reads"."traversal_ids" < '{11690959,13186818}'
            AND "vulnerability_reads"."archived" = FALSE
            AND "vulnerability_reads"."report_type" IN (0, 1, 2, 3, 4, 5, 6, 7, 8, 99)
            AND "vulnerability_reads"."state" IN (1, 4, 3, 2)
            AND (
                EXISTS (
                    SELECT 1
                    FROM unnest(vulnerability_reads.identifier_names) AS idt_names
                    WHERE idt_names ILIKE '2017'
                )
            )
            AND "vulnerability_reads"."severity" = 1
        LIMIT 1001
    )
    UNION
    (
        SELECT
            "vulnerability_reads"."id",
            "vulnerability_reads"."severity"
        FROM
            "vulnerability_reads"
        WHERE
            "vulnerability_reads"."traversal_ids" >= '{11690959,13186817}'
            AND "vulnerability_reads"."traversal_ids" < '{11690959,13186818}'
            AND "vulnerability_reads"."archived" = FALSE
            AND "vulnerability_reads"."report_type" IN (0, 1, 2, 3, 4, 5, 6, 7, 8, 99)
            AND "vulnerability_reads"."state" IN (1, 4, 3, 2)
            AND (
                EXISTS (
                    SELECT 1
                    FROM unnest(vulnerability_reads.identifier_names) AS idt_names
                    WHERE idt_names ILIKE '2017'
                )
            )
            AND "vulnerability_reads"."severity" = 2
        LIMIT 1001
    )
    UNION
    (
        SELECT
            "vulnerability_reads"."id",
            "vulnerability_reads"."severity"
        FROM
            "vulnerability_reads"
        WHERE
            "vulnerability_reads"."traversal_ids" >= '{11690959,13186817}'
            AND "vulnerability_reads"."traversal_ids" < '{11690959,13186818}'
            AND "vulnerability_reads"."archived" = FALSE
            AND "vulnerability_reads"."report_type" IN (0, 1, 2, 3, 4, 5, 6, 7, 8, 99)
            AND "vulnerability_reads"."state" IN (1, 4, 3, 2)
            AND (
                EXISTS (
                    SELECT 1
                    FROM unnest(vulnerability_reads.identifier_names) AS idt_names
                    WHERE idt_names ILIKE '2017'
                )
            )
            AND "vulnerability_reads"."severity" = 4
        LIMIT 1001
    )
    UNION
    (
        SELECT
            "vulnerability_reads"."id",
            "vulnerability_reads"."severity"
        FROM
            "vulnerability_reads"
        WHERE
            "vulnerability_reads"."traversal_ids" >= '{11690959,13186817}'
            AND "vulnerability_reads"."traversal_ids" < '{11690959,13186818}'
            AND "vulnerability_reads"."archived" = FALSE
            AND "vulnerability_reads"."report_type" IN (0, 1, 2, 3, 4, 5, 6, 7, 8, 99)
            AND "vulnerability_reads"."state" IN (1, 4, 3, 2)
            AND (
                EXISTS (
                    SELECT 1
                    FROM unnest(vulnerability_reads.identifier_names) AS idt_names
                    WHERE idt_names ILIKE '2017'
                )
            )
            AND "vulnerability_reads"."severity" = 5
        LIMIT 1001
    )
    UNION
    (
        SELECT
            "vulnerability_reads"."id",
            "vulnerability_reads"."severity"
        FROM
            "vulnerability_reads"
        WHERE
            "vulnerability_reads"."traversal_ids" >= '{11690959,13186817}'
            AND "vulnerability_reads"."traversal_ids" < '{11690959,13186818}'
            AND "vulnerability_reads"."archived" = FALSE
            AND "vulnerability_reads"."report_type" IN (0, 1, 2, 3, 4, 5, 6, 7, 8, 99)
            AND "vulnerability_reads"."state" IN (1, 4, 3, 2)
            AND (
                EXISTS (
                    SELECT 1
                    FROM unnest(vulnerability_reads.identifier_names) AS idt_names
                    WHERE idt_names ILIKE '2017'
                )
            )
            AND "vulnerability_reads"."severity" = 6
        LIMIT 1001
    )
    UNION
    (
        SELECT
            "vulnerability_reads"."id",
            "vulnerability_reads"."severity"
        FROM
            "vulnerability_reads"
        WHERE
            "vulnerability_reads"."traversal_ids" >= '{11690959,13186817}'
            AND "vulnerability_reads"."traversal_ids" < '{11690959,13186818}'
            AND "vulnerability_reads"."archived" = FALSE
            AND "vulnerability_reads"."report_type" IN (0, 1, 2, 3, 4, 5, 6, 7, 8, 99)
            AND "vulnerability_reads"."state" IN (1, 4, 3, 2)
            AND (
                EXISTS (
                    SELECT 1
                    FROM unnest(vulnerability_reads.identifier_names) AS idt_names
                    WHERE idt_names ILIKE '2017'
                )
            )
            AND "vulnerability_reads"."severity" = 7
        LIMIT 1001
    )
) vulnerability_reads
GROUP BY "vulnerability_reads"."severity"
ORDER BY "vulnerability_reads"."severity" DESC;
```
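The UNION of near-identical per-severity subqueries, each with `LIMIT 1001`, is what implements the `capped: true` behavior: each severity bucket is counted only up to the cap, so a very large group cannot blow up the count. A minimal plain-Ruby sketch of those semantics (illustrative only, not GitLab code; the names are made up):

```ruby
# Sketch of capped per-severity counting: each severity branch is
# truncated to `cap` rows before counting, mirroring the per-branch
# LIMIT 1001 in the SQL above.
CAP = 1001

def capped_severity_counts(rows, severities, cap: CAP)
  severities.each_with_object({}) do |severity, counts|
    # `first(cap)` plays the role of the SQL LIMIT; the outer
    # GROUP BY then counts at most `cap` rows per severity.
    counts[severity] = rows.select { |r| r[:severity] == severity }.first(cap).length
  end
end

rows = Array.new(1500) { { severity: 6 } } + [{ severity: 7 }]
capped_severity_counts(rows, [6, 7]) # => {6=>1001, 7=>1}
```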
The search that returns 0 rows is an example of the worst-case performance in this scenario, since we make use of an existing index for the other filters and perform the text search as the final inner node filter.
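Note that in PostgreSQL, `ILIKE` with no `%` wildcards is simply a case-insensitive equality match, which is why the `'2017'` search above matches no element of arrays like `{"A1:2017 - Injection"}` and returns 0 rows. A plain-Ruby illustration of the `EXISTS (... unnest(identifier_names) ... ILIKE ...)` predicate (illustrative only, not GitLab code):

```ruby
# A row passes the identifier filter when some element of its
# identifier_names array equals the search term, ignoring case --
# the semantics of ILIKE without wildcards.
def identifier_match?(identifier_names, term)
  identifier_names.any? { |name| name.casecmp?(term) }
end

identifier_match?(["A1:2017 - Injection", "CWE-89"], "a1:2017 - injection") # => true
identifier_match?(["A1:2017 - Injection"], "2017")                          # => false
```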
We are limiting the feature to groups with <= 20,000 vulnerabilities, as the query cannot scale beyond that due to PostgreSQL limitations.
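For reference, the `traversal_ids` range predicate in the query above selects every project in the group's subtree: the lower bound is the group's own traversal path, and the upper bound is the same path with its last element incremented. A small sketch of how such bounds are derived (illustrative only, not GitLab code):

```ruby
# Given a group's traversal path, return the half-open
# [lower, upper) range that covers all of its descendants.
def traversal_bounds(traversal_ids)
  upper = traversal_ids.dup
  upper[-1] += 1 # next sibling path, exclusive upper bound
  [traversal_ids, upper]
end

traversal_bounds([11690959, 13186817])
# => [[11690959, 13186817], [11690959, 13186818]]
```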
| API | Identifier not found | Identifier found |
|---|---|---|
| Severities Count API | https://console.postgres.ai/gitlab/gitlab-production-main/sessions/34511/commands/106582 | https://console.postgres.ai/gitlab/gitlab-production-main/sessions/34511/commands/106580 |
Related to #508713 (closed)
[^1]: The explain query is taken from this table of the spike issue. More queries for various vulnerability count distributions can be found in this table on the spike issue.

[^2]: A breakdown of distribution counts can be found in this table on the spike issue.
