
Add bulk severity override mutation

Merged Gal Katz requested to merge gkatz_bulk_vulnerabilities_severity_override into master
All threads resolved!

What does this MR do and why?

Adds a new GraphQL mutation for bulk-overriding vulnerability severity. This MR moves major parts of the existing BulkDismissService logic into a newly introduced BaseBulkUpdateService, which is also used by the new BulkSeverityOverrideService.
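The refactor described above can be sketched in plain Ruby (outside Rails): a BaseBulkUpdateService that owns the shared validation and iteration, and a BulkSeverityOverrideService that supplies the severity-specific step. All class, method, and attribute names below are illustrative stand-ins, not the MR's actual API.

```ruby
# Minimal sketch of the service split described above. Names are hypothetical;
# the real services operate on ActiveRecord relations, not plain structs.
Vulnerability = Struct.new(:id, :severity, :project_id)

class BaseBulkUpdateService
  MAX_BATCH = 100 # hypothetical cap on ids per request

  def initialize(vulnerabilities)
    @vulnerabilities = vulnerabilities
  end

  # Shared entry point: validate the batch, then let the subclass
  # apply its specific update to each record.
  def execute
    raise ArgumentError, "too many ids" if @vulnerabilities.size > MAX_BATCH

    @vulnerabilities.each { |vulnerability| update(vulnerability) }
    @vulnerabilities
  end

  def update(_vulnerability)
    raise NotImplementedError
  end
end

class BulkSeverityOverrideService < BaseBulkUpdateService
  attr_reader :overrides

  def initialize(vulnerabilities, new_severity)
    super(vulnerabilities)
    @new_severity = new_severity
    @overrides = [] # stands in for vulnerability_severity_overrides rows
  end

  # Skip records already at the target severity (mirrors the
  # `severity != 7` filter in the queries below), and record an
  # audit entry for each override applied.
  def update(vulnerability)
    return if vulnerability.severity == @new_severity

    @overrides << { vulnerability_id: vulnerability.id,
                    original_severity: vulnerability.severity,
                    new_severity: @new_severity }
    vulnerability.severity = @new_severity
  end
end
```

The point of the base class is that dismissal and severity override share the same batching and authorization plumbing while differing only in the per-record `update` step.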

Relevant issues:

Add a GraphQL mutation for bulk vulnerabilities... (#512808 - closed) • Gal Katz • 17.9
Bug: Bulk vulnerability state change creates br... (#514098 - closed) • Gal Katz • 17.9

Query plans

'authorized_and_ff_enabled_for_all_projects?' under BaseBulkUpdateService:

Raw SQL
SELECT DISTINCT "vulnerabilities"."project_id" FROM "vulnerabilities" WHERE "vulnerabilities"."id" IN (156926166, 156926162)
SELECT
    "projects"."id",
    "projects"."name",
    "projects"."path",
    "projects"."description",
    "projects"."created_at",
    "projects"."updated_at",
    "projects"."creator_id",
    "projects"."namespace_id",
    "projects"."last_activity_at",
    "projects"."import_url",
    "projects"."visibility_level",
    "projects"."archived",
    "projects"."avatar",
    "projects"."merge_requests_template",
    "projects"."star_count",
    "projects"."merge_requests_rebase_enabled",
    "projects"."import_type",
    "projects"."import_source",
    "projects"."approvals_before_merge",
    "projects"."reset_approvals_on_push",
    "projects"."merge_requests_ff_only_enabled",
    "projects"."issues_template",
    "projects"."mirror",
    "projects"."mirror_last_update_at",
    "projects"."mirror_last_successful_update_at",
    "projects"."mirror_user_id",
    "projects"."shared_runners_enabled",
    "projects"."runners_token",
    "projects"."build_allow_git_fetch",
    "projects"."build_timeout",
    "projects"."mirror_trigger_builds",
    "projects"."pending_delete",
    "projects"."public_builds",
    "projects"."last_repository_check_failed",
    "projects"."last_repository_check_at",
    "projects"."only_allow_merge_if_pipeline_succeeds",
    "projects"."has_external_issue_tracker",
    "projects"."repository_storage",
    "projects"."repository_read_only",
    "projects"."request_access_enabled",
    "projects"."has_external_wiki",
    "projects"."ci_config_path",
    "projects"."lfs_enabled",
    "projects"."description_html",
    "projects"."only_allow_merge_if_all_discussions_are_resolved",
    "projects"."repository_size_limit",
    "projects"."printing_merge_request_link_enabled",
    "projects"."auto_cancel_pending_pipelines",
    "projects"."service_desk_enabled",
    "projects"."cached_markdown_version",
    "projects"."delete_error",
    "projects"."last_repository_updated_at",
    "projects"."disable_overriding_approvers_per_merge_request",
    "projects"."storage_version",
    "projects"."resolve_outdated_diff_discussions",
    "projects"."remote_mirror_available_overridden",
    "projects"."only_mirror_protected_branches",
    "projects"."pull_mirror_available_overridden",
    "projects"."jobs_cache_index",
    "projects"."external_authorization_classification_label",
    "projects"."mirror_overwrites_diverged_branches",
    "projects"."pages_https_only",
    "projects"."external_webhook_token",
    "projects"."packages_enabled",
    "projects"."merge_requests_author_approval",
    "projects"."pool_repository_id",
    "projects"."runners_token_encrypted",
    "projects"."bfg_object_map",
    "projects"."detected_repository_languages",
    "projects"."merge_requests_disable_committers_approval",
    "projects"."require_password_to_approve",
    "projects"."max_pages_size",
    "projects"."max_artifacts_size",
    "projects"."pull_mirror_branch_prefix",
    "projects"."remove_source_branch_after_merge",
    "projects"."marked_for_deletion_at",
    "projects"."marked_for_deletion_by_user_id",
    "projects"."autoclose_referenced_issues",
    "projects"."suggestion_commit_message",
    "projects"."project_namespace_id",
    "projects"."hidden",
    "projects"."organization_id"
FROM
    "projects"
WHERE
    "projects"."id" = 13083
Query plan

link here.

 Unique  (cost=6.83..6.84 rows=2 width=8) (actual time=6.244..6.247 rows=1 loops=1)
   Buffers: shared hit=10 read=6 dirtied=1
   WAL: records=1 fpi=1 bytes=8033
   I/O Timings: read=6.136 write=0.000
   ->  Sort  (cost=6.83..6.83 rows=2 width=8) (actual time=6.234..6.235 rows=2 loops=1)
         Sort Key: vulnerabilities.project_id
         Sort Method: quicksort  Memory: 25kB
         Buffers: shared hit=10 read=6 dirtied=1
         WAL: records=1 fpi=1 bytes=8033
         I/O Timings: read=6.136 write=0.000
         ->  Index Scan using vulnerabilities_pkey on public.vulnerabilities  (cost=0.57..6.82 rows=2 width=8) (actual time=5.715..6.206 rows=2 loops=1)
               Index Cond: (vulnerabilities.id = ANY ('{156926166,156926162}'::bigint[]))
               Buffers: shared hit=7 read=6 dirtied=1
               WAL: records=1 fpi=1 bytes=8033
               I/O Timings: read=6.136 write=0.000
Settings: seq_page_cost = '4', work_mem = '100MB', effective_cache_size = '472585MB', jit = 'off', random_page_cost = '1.5'

another link here.

 Index Scan using idx_projects_on_repository_storage_last_repository_updated_at on public.projects  (cost=0.56..3.58 rows=1 width=849) (actual time=15.659..15.665 rows=1 loops=1)
   Index Cond: (projects.id = 13083)
   Buffers: shared hit=1 read=11 dirtied=6
   WAL: records=11 fpi=6 bytes=41643
   I/O Timings: read=15.396 write=0.000
Settings: work_mem = '100MB', effective_cache_size = '472585MB', jit = 'off', random_page_cost = '1.5', seq_page_cost = '4'

vulnerabilities_to_update in BulkSeverityOverrideService:

Raw SQL
SELECT "vulnerabilities".* FROM "vulnerabilities" WHERE "vulnerabilities"."id" IN (639, 640) AND "vulnerabilities"."severity" != 7
Query plan

link here.

 Index Scan using vulnerabilities_pkey on public.vulnerabilities  (cost=0.57..6.82 rows=2 width=473) (actual time=0.027..0.032 rows=2 loops=1)
   Index Cond: (vulnerabilities.id = ANY ('{156926166,156926162}'::bigint[]))
   Filter: (vulnerabilities.severity <> 7)
   Rows Removed by Filter: 0
   Buffers: shared hit=13
   I/O Timings: read=0.000 write=0.000
Settings: work_mem = '100MB', effective_cache_size = '472585MB', jit = 'off', random_page_cost = '1.5', seq_page_cost = '4'

vulnerabilities_attributes in BulkSeverityOverrideService:

Raw SQL
SELECT "vulnerabilities"."id", "vulnerabilities"."severity", "vulnerabilities"."project_id" FROM "vulnerabilities" WHERE "vulnerabilities"."id" IN (156926166, 156926162) AND "vulnerabilities"."severity" != 7
Query plan

link here.

 Index Scan using vulnerabilities_pkey on public.vulnerabilities  (cost=0.57..6.82 rows=2 width=18) (actual time=0.029..0.032 rows=2 loops=1)
   Index Cond: (vulnerabilities.id = ANY ('{156926166,156926162}'::bigint[]))
   Filter: (vulnerabilities.severity <> 7)
   Rows Removed by Filter: 0
   Buffers: shared hit=13
   I/O Timings: read=0.000 write=0.000
Settings: effective_cache_size = '472585MB', jit = 'off', random_page_cost = '1.5', seq_page_cost = '4', work_mem = '100MB'

update_support_tables in BulkSeverityOverrideService:

Raw SQL

First:
UPDATE
    "vulnerability_occurrences"
SET
    "severity" = 7
WHERE
    "vulnerability_occurrences"."vulnerability_id" IN (
        SELECT
            "vulnerabilities"."id"
        FROM
            "vulnerabilities"
        WHERE
            "vulnerabilities"."id" IN (156926166, 156926162)
            AND "vulnerabilities"."severity" != 7)

Second:

UPDATE
    "security_findings"
SET
    "severity" = 7
WHERE
    "security_findings"."uuid" IN (
        SELECT
            "vulnerability_occurrences"."uuid"
        FROM
            "vulnerability_occurrences"
        WHERE
            "vulnerability_occurrences"."vulnerability_id" IN (
                SELECT
                    "vulnerabilities"."id"
                FROM
                    "vulnerabilities"
                WHERE
                    "vulnerabilities"."id" IN (156926166, 156926162)
                    AND "vulnerabilities"."severity" != 7))

Third:

INSERT INTO "vulnerability_severity_overrides" ("vulnerability_id", "original_severity", "new_severity", "project_id", "author_id", "created_at", "updated_at")
    VALUES (156926166, 5, 7, 23, 1, '2025-01-19 14:54:21.274325', '2025-01-19 14:54:21.274325'),
    (156926162, 5, 7, 23, 1, '2025-01-19 14:54:21.274325', '2025-01-19 14:54:21.274325')
RETURNING
    "id"
Query plan
  1. link here.
 ModifyTable on public.vulnerability_occurrences  (cost=1.14..14.01 rows=0 width=0) (actual time=80.109..80.111 rows=0 loops=1)
   Buffers: shared hit=100 read=54 dirtied=15 written=1
   WAL: records=22 fpi=13 bytes=78481
   I/O Timings: read=78.700 write=0.054
   ->  Nested Loop  (cost=1.14..14.01 rows=2 width=14) (actual time=10.259..10.829 rows=2 loops=1)
         Buffers: shared hit=19 read=4
         I/O Timings: read=10.722 write=0.000
         ->  Index Scan using vulnerabilities_pkey on public.vulnerabilities  (cost=0.57..6.82 rows=2 width=14) (actual time=0.028..0.046 rows=2 loops=1)
               Index Cond: (vulnerabilities.id = ANY ('{156926166,156926162}'::bigint[]))
               Filter: (vulnerabilities.severity <> 7)
               Rows Removed by Filter: 0
               Buffers: shared hit=13
               I/O Timings: read=0.000 write=0.000
         ->  Index Scan using index_vulnerability_occurrences_on_vulnerability_id on public.vulnerability_occurrences  (cost=0.57..3.58 rows=1 width=14) (actual time=5.383..5.386 rows=1 loops=2)
               Index Cond: (vulnerability_occurrences.vulnerability_id = vulnerabilities.id)
               Buffers: shared hit=6 read=4
               I/O Timings: read=10.722 write=0.000
Trigger trigger_insert_or_update_vulnerability_reads_from_occurrences for constraint : time=2.473 calls=2
Settings: random_page_cost = '1.5', seq_page_cost = '4', work_mem = '100MB', effective_cache_size = '472585MB', jit = 'off'
  1. link here.
 ModifyTable on public.security_findings  (cost=14.59..3521.66 rows=0 width=0) (actual time=200.130..200.142 rows=0 loops=1)
   Buffers: shared hit=144 read=154 dirtied=14
   WAL: records=20 fpi=12 bytes=70492
   I/O Timings: read=197.670 write=0.000
   ->  Nested Loop  (cost=14.59..3521.66 rows=17 width=24) (actual time=87.868..157.145 rows=2 loops=1)
         Buffers: shared hit=48 read=120
         I/O Timings: read=156.236 write=0.000
         ->  Unique  (cost=14.02..14.03 rows=2 width=28) (actual time=0.087..0.100 rows=2 loops=1)
               Buffers: shared hit=30
               I/O Timings: read=0.000 write=0.000
               ->  Sort  (cost=14.02..14.03 rows=2 width=28) (actual time=0.085..0.091 rows=2 loops=1)
                     Sort Key: vulnerability_occurrences.uuid
                     Sort Method: quicksort  Memory: 25kB
                     Buffers: shared hit=30
                     I/O Timings: read=0.000 write=0.000
                     ->  Nested Loop  (cost=1.14..14.01 rows=2 width=28) (actual time=0.041..0.066 rows=2 loops=1)
                           Buffers: shared hit=27
                           I/O Timings: read=0.000 write=0.000
                           ->  Index Scan using vulnerabilities_pkey on public.vulnerabilities  (cost=0.57..6.82 rows=2 width=14) (actual time=0.018..0.023 rows=2 loops=1)
                                 Index Cond: (vulnerabilities.id = ANY ('{156926166,156926162}'::bigint[]))
                                 Filter: (vulnerabilities.severity <> 7)
                                 Rows Removed by Filter: 0
                                 Buffers: shared hit=13
                                 I/O Timings: read=0.000 write=0.000
                           ->  Index Scan using index_vulnerability_occurrences_on_vulnerability_id on public.vulnerability_occurrences  (cost=0.57..3.58 rows=1 width=30) (actual time=0.015..0.020 rows=1 loops=2)
                                 Index Cond: (vulnerability_occurrences.vulnerability_id = vulnerabilities.id)
                                 Buffers: shared hit=14
                                 I/O Timings: read=0.000 write=0.000
         ->  Append  (cost=0.57..1737.61 rows=1621 width=26) (actual time=74.838..78.515 rows=1 loops=2)
               Buffers: shared hit=18 read=120
               I/O Timings: read=156.236 write=0.000
               ->  Index Scan using security_findings_143_uuid_scan_id_partition_number_idx on gitlab_partitions_dynamic.security_findings_143 security_findings_1  (cost=0.57..103.46 rows=97 width=26) (actual time=4.892..4.892 rows=0 loops=2)
                     Index Cond: (security_findings_1.uuid = vulnerability_occurrences.uuid)
                     Buffers: shared hit=1 read=7
                     I/O Timings: read=9.738 write=0.000
               ->  Index Scan using security_findings_144_uuid_scan_id_partition_number_idx on gitlab_partitions_dynamic.security_findings_144 security_findings_2  (cost=0.57..123.42 rows=116 width=26) (actual time=4.276..4.276 rows=0 loops=2)
                     Index Cond: (security_findings_2.uuid = vulnerability_occurrences.uuid)
                     Buffers: shared hit=1 read=7
                     I/O Timings: read=8.506 write=0.000
               ->  Index Scan using security_findings_145_uuid_scan_id_partition_number_idx on gitlab_partitions_dynamic.security_findings_145 security_findings_3  (cost=0.57..123.41 rows=116 width=26) (actual time=4.348..4.348 rows=0 loops=2)
                     Index Cond: (security_findings_3.uuid = vulnerability_occurrences.uuid)
                     Buffers: shared hit=1 read=7
                     I/O Timings: read=8.658 write=0.000
               ->  Index Scan using security_findings_146_uuid_scan_id_partition_number_idx on gitlab_partitions_dynamic.security_findings_146 security_findings_4  (cost=0.56..109.76 rows=103 width=26) (actual time=4.505..4.505 rows=0 loops=2)
                     Index Cond: (security_findings_4.uuid = vulnerability_occurrences.uuid)
                     Buffers: shared hit=1 read=7
                     I/O Timings: read=8.973 write=0.000
               ->  Index Scan using security_findings_147_uuid_scan_id_partition_number_idx on gitlab_partitions_dynamic.security_findings_147 security_findings_5  (cost=0.57..103.46 rows=97 width=26) (actual time=3.869..3.869 rows=0 loops=2)
                     Index Cond: (security_findings_5.uuid = vulnerability_occurrences.uuid)
                     Buffers: shared hit=1 read=7
                     I/O Timings: read=7.695 write=0.000
               ->  Index Scan using security_findings_148_uuid_scan_id_partition_number_idx on gitlab_partitions_dynamic.security_findings_148 security_findings_6  (cost=0.56..89.81 rows=84 width=26) (actual time=3.709..3.709 rows=0 loops=2)
                     Index Cond: (security_findings_6.uuid = vulnerability_occurrences.uuid)
                     Buffers: shared hit=1 read=7
                     I/O Timings: read=7.385 write=0.000
               ->  Index Scan using security_findings_149_uuid_scan_id_partition_number_idx on gitlab_partitions_dynamic.security_findings_149 security_findings_7  (cost=0.57..110.81 rows=104 width=26) (actual time=3.878..3.878 rows=0 loops=2)
                     Index Cond: (security_findings_7.uuid = vulnerability_occurrences.uuid)
                     Buffers: shared hit=1 read=7
                     I/O Timings: read=7.714 write=0.000
               ->  Index Scan using security_findings_150_uuid_scan_id_partition_number_idx on gitlab_partitions_dynamic.security_findings_150 security_findings_8  (cost=0.56..99.27 rows=93 width=26) (actual time=4.141..4.141 rows=0 loops=2)
                     Index Cond: (security_findings_8.uuid = vulnerability_occurrences.uuid)
                     Buffers: shared hit=1 read=7
                     I/O Timings: read=8.248 write=0.000
               ->  Index Scan using security_findings_151_uuid_scan_id_partition_number_idx on gitlab_partitions_dynamic.security_findings_151 security_findings_9  (cost=0.57..101.36 rows=95 width=26) (actual time=4.015..4.016 rows=0 loops=2)
                     Index Cond: (security_findings_9.uuid = vulnerability_occurrences.uuid)
                     Buffers: shared hit=1 read=7
                     I/O Timings: read=7.991 write=0.000
               ->  Index Scan using security_findings_152_uuid_scan_id_partition_number_idx on gitlab_partitions_dynamic.security_findings_152 security_findings_10  (cost=0.57..108.71 rows=102 width=26) (actual time=4.269..4.270 rows=0 loops=2)
                     Index Cond: (security_findings_10.uuid = vulnerability_occurrences.uuid)
                     Buffers: shared hit=1 read=7
                     I/O Timings: read=8.485 write=0.000
               ->  Index Scan using security_findings_153_uuid_scan_id_partition_number_idx on gitlab_partitions_dynamic.security_findings_153 security_findings_11  (cost=0.57..108.71 rows=102 width=26) (actual time=4.320..4.320 rows=0 loops=2)
                     Index Cond: (security_findings_11.uuid = vulnerability_occurrences.uuid)
                     Buffers: shared hit=1 read=7
                     I/O Timings: read=8.606 write=0.000
               ->  Index Scan using security_findings_154_uuid_scan_id_partition_number_idx on gitlab_partitions_dynamic.security_findings_154 security_findings_12  (cost=0.56..94.01 rows=88 width=26) (actual time=3.807..3.807 rows=0 loops=2)
                     Index Cond: (security_findings_12.uuid = vulnerability_occurrences.uuid)
                     Buffers: shared hit=1 read=7
                     I/O Timings: read=7.576 write=0.000
               ->  Index Scan using security_findings_155_uuid_scan_id_partition_number_idx on gitlab_partitions_dynamic.security_findings_155 security_findings_13  (cost=0.56..130.76 rows=123 width=26) (actual time=4.425..4.426 rows=0 loops=2)
                     Index Cond: (security_findings_13.uuid = vulnerability_occurrences.uuid)
                     Buffers: shared hit=1 read=7
                     I/O Timings: read=8.818 write=0.000
               ->  Index Scan using security_findings_156_uuid_scan_id_partition_number_idx on gitlab_partitions_dynamic.security_findings_156 security_findings_14  (cost=0.57..120.31 rows=113 width=26) (actual time=10.685..10.685 rows=0 loops=2)
                     Index Cond: (security_findings_14.uuid = vulnerability_occurrences.uuid)
                     Buffers: shared hit=1 read=7
                     I/O Timings: read=21.324 write=0.000
               ->  Index Scan using security_findings_157_uuid_scan_id_partition_number_idx on gitlab_partitions_dynamic.security_findings_157 security_findings_15  (cost=0.57..91.91 rows=86 width=26) (actual time=3.932..3.932 rows=0 loops=2)
                     Index Cond: (security_findings_15.uuid = vulnerability_occurrences.uuid)
                     Buffers: shared hit=1 read=7
                     I/O Timings: read=7.824 write=0.000
               ->  Index Scan using security_findings_158_uuid_scan_id_partition_number_idx on gitlab_partitions_dynamic.security_findings_158 security_findings_16  (cost=0.57..96.14 rows=90 width=26) (actual time=5.735..5.737 rows=1 loops=2)
                     Index Cond: (security_findings_16.uuid = vulnerability_occurrences.uuid)
                     Buffers: shared hit=1 read=9
                     I/O Timings: read=11.405 write=0.000
               ->  Index Scan using security_findings_159_uuid_scan_id_partition_number_idx on gitlab_partitions_dynamic.security_findings_159 security_findings_17  (cost=0.56..14.18 rows=12 width=26) (actual time=3.665..3.665 rows=0 loops=2)
                     Index Cond: (security_findings_17.uuid = vulnerability_occurrences.uuid)
                     Buffers: shared hit=2 read=6
                     I/O Timings: read=7.289 write=0.000
Trigger trigger_468b8554e533 for constraint : time=0.213 calls=2
Settings: work_mem = '100MB', effective_cache_size = '472585MB', jit = 'off', random_page_cost = '1.5', seq_page_cost = '4'
  1. link here.
 ModifyTable on public.vulnerability_severity_overrides  (cost=0.00..0.03 rows=2 width=52) (actual time=6.684..6.712 rows=2 loops=1)
   Buffers: shared hit=96 read=9 dirtied=10 written=5
   WAL: records=15 fpi=0 bytes=1185
   I/O Timings: read=2.980 write=0.107
   ->  Values Scan on "*VALUES*"  (cost=0.00..0.03 rows=2 width=52) (actual time=3.021..3.033 rows=2 loops=1)
         Buffers: shared hit=12 read=5 dirtied=1
         WAL: records=1 fpi=0 bytes=99
         I/O Timings: read=2.869 write=0.000
Trigger RI_ConstraintTrigger_c_638998076 for constraint fk_rails_bbeaab8fb3: time=2.530 calls=2
Settings: work_mem = '100MB', effective_cache_size = '472585MB', jit = 'off', random_page_cost = '1.5', seq_page_cost = '4'
Edited by Gal Katz

Merge request reports


Activity

  • 8 Warnings
    :warning: This merge request is quite big (776 lines changed), please consider splitting it into multiple merge requests.
    :warning: e1933395: Commits that change 30 or more lines across at least 3 files should describe these changes in the commit body. For more information, take a look at our Commit message guidelines.
    :warning: 6b807028: Commits that change 30 or more lines across at least 3 files should describe these changes in the commit body. For more information, take a look at our Commit message guidelines.
    :warning: 3c978625: Commits that change 30 or more lines across at least 3 files should describe these changes in the commit body. For more information, take a look at our Commit message guidelines.
    :warning: 2bab5a71: Commits that change 30 or more lines across at least 3 files should describe these changes in the commit body. For more information, take a look at our Commit message guidelines.
    :warning: b9964094: Commits that change 30 or more lines across at least 3 files should describe these changes in the commit body. For more information, take a look at our Commit message guidelines.
    :warning: 46200114: Commits that change 30 or more lines across at least 3 files should describe these changes in the commit body. For more information, take a look at our Commit message guidelines.
    :warning: This merge request has more than 20 commits which may cause issues in some of the jobs. If you see errors like missing commits, please consider squashing some commits so it is within 20 commits.
    2 Messages
    :book: CHANGELOG missing:

    If this merge request needs a changelog entry, add the Changelog trailer to the commit message you want to add to the changelog.

    If this merge request doesn't need a CHANGELOG entry, feel free to ignore this message.

    :book: This merge request adds or changes documentation files and requires Technical Writing review. The review should happen before merge, but can be post-merge if the merge request is time sensitive.

    Documentation review

    The following files require a review from a technical writer:

The review does not need to block merging this merge request.

    Reviewer roulette

    Category | Reviewer | Maintainer
    backend | @jfypk (UTC+0) | @schin1 (UTC+8)
    database | @carlad-gl (UTC+11) | @Quintasan (UTC+1)

    Please refer to documentation page for guidance on how you can benefit from the Reviewer Roulette, or use the GitLab Review Workload Dashboard to find other available reviewers.

    If needed, you can retry the :repeat: danger-review job that generated this comment.

    Generated by :no_entry_sign: Danger

  • Gal Katz added 1 commit

    • ede0a7da - Add vulnerability_occurrences severity update

  • Gal Katz added 1 commit

    • 46200114 - Add vulnerability_severity_override wip feature flag

  • added feature flag label

  • Gal Katz mentioned in issue #512397

  • Gal Katz added 1932 commits

  • Gal Katz marked this merge request as ready

  • Gal Katz requested review from @rossfuhrman

  • Gal Katz mentioned in merge request !178167 (merged)

  • Gal Katz requested review from @morefice

  • rossfuhrman
  • Gal Katz added 2 commits

    • 85ae6ae0 - Fix CR comments
    • 041bd318 - Add security_findings severity update on vulnerability severity update

  • Gal Katz changed the description

  • Max Orefice
  • mentioned in issue #514098 (closed)

  • Gal Katz changed the description

  • rossfuhrman approved this merge request

  • rossfuhrman requested review from @minac

  • added pipelinetier-2 label and removed pipelinetier-1 label

  • Before you set this MR to auto-merge

    This merge request will progress on pipeline tiers until it reaches the last tier: pipelinetier-3. We will trigger a new pipeline for each transition to a higher tier.

    Before you set this MR to auto-merge, please check the following:

    • You are the last maintainer of this merge request
    • The latest pipeline for this merge request is pipelinetier-3 (You can find which tier it is in the pipeline name)
    • This pipeline is recent enough (created in the last 8 hours)

    If all the criteria above apply, please set auto-merge for this merge request.

    See pipeline tiers and merging a merge request for more details.

  • E2E Test Result Summary

    allure-report-publisher generated test report!

    e2e-test-on-gdk: :white_check_mark: test report for e1933395

    +-------------------------------------------------------------+
    |                       suites summary                        |
    +--------+--------+--------+---------+-------+-------+--------+
    |        | passed | failed | skipped | flaky | total | result |
    +--------+--------+--------+---------+-------+-------+--------+
    | Verify | 1      | 0      | 0       | 0     | 1     | ✅     |
    | Create | 7      | 0      | 0       | 0     | 7     | ✅     |
    | Govern | 8      | 0      | 0       | 0     | 8     | ✅     |
    | Plan   | 11     | 0      | 0       | 0     | 11    | ✅     |
    +--------+--------+--------+---------+-------+-------+--------+
    | Total  | 27     | 0      | 0       | 0     | 27    | ✅     |
    +--------+--------+--------+---------+-------+-------+--------+

    e2e-test-on-cng: :white_check_mark: test report for e1933395

    +------------------------------------------------------------------+
    |                          suites summary                          |
    +-------------+--------+--------+---------+-------+-------+--------+
    |             | passed | failed | skipped | flaky | total | result |
    +-------------+--------+--------+---------+-------+-------+--------+
    | Data Stores | 33     | 0      | 10      | 0     | 43    | ✅     |
    | Plan        | 86     | 0      | 8       | 0     | 94    | ✅     |
    | Create      | 140    | 0      | 22      | 0     | 162   | ✅     |
    | Verify      | 52     | 0      | 20      | 0     | 72    | ✅     |
    | Govern      | 83     | 0      | 11      | 0     | 94    | ✅     |
    | Package     | 29     | 0      | 15      | 0     | 44    | ✅     |
    | Secure      | 2      | 0      | 5       | 0     | 7     | ✅     |
    | Monitor     | 8      | 0      | 12      | 0     | 20    | ✅     |
    | Manage      | 1      | 0      | 9       | 0     | 10    | ✅     |
    | Release     | 5      | 0      | 1       | 0     | 6     | ✅     |
    | Ai-powered  | 0      | 0      | 2       | 0     | 2     | ➖     |
    | Fulfillment | 2      | 0      | 7       | 0     | 9     | ✅     |
    | ModelOps    | 0      | 0      | 1       | 0     | 1     | ➖     |
    | Analytics   | 2      | 0      | 0       | 0     | 2     | ✅     |
    | Configure   | 0      | 0      | 3       | 0     | 3     | ➖     |
    | Growth      | 0      | 0      | 2       | 0     | 2     | ➖     |
    +-------------+--------+--------+---------+-------+-------+--------+
    | Total       | 443    | 0      | 128     | 0     | 571   | ✅     |
    +-------------+--------+--------+---------+-------+-------+--------+
  • Gal Katz added 1 commit

    • 6a8b9c5a - Enhance tests for better coverage

  • Mehmet Emin INAC
  • rossfuhrman mentioned in merge request !178862 (merged)

  • rossfuhrman mentioned in merge request !178804

  • Gal Katz added 1 commit

  • Max Orefice approved this merge request

  • Max Orefice requested review from @bwill

  • added databasereviewed label and removed databasereview pending label

    • Resolved by Mehmet Emin INAC

      I see a few problems with updating the Security::Finding records:

      1. We have to scan all the partitions, which makes the query expensive. Our partitioning strategy makes it so that it is only efficient to access security findings for a single pipeline.
      2. Security::Findings are unique by (uuid, scan_id, partition_number), so there might be new records inserted for a UUID after we've updated the severity. These records would have the original severity and not the override.

      When resolving pipeline security report findings, we already get a ton of data from both Vulnerabilities and Vulnerabilities::Findings. When showing the pipeline security report, I wonder if we should use the severity from one of these tables instead, which could solve both problems.

      /cc @minac

  • Gal Katz added 2 commits

    • 5a0c1feb - Replace mutation prepare with validates
    • 7d2469c0 - Remove security finding severity update

  • mentioned in issue #515811 (closed)

  • Gal Katz added 2830 commits

  • Brian Williams mentioned in merge request !178686 (merged)

  • Gal Katz added 489 commits

  • Gal Katz changed milestone to %17.9

  • Brian Williams requested review from @minac

  • added databaseapproved label and removed databasereviewed label

  • Brian Williams approved this merge request

  • Miranda Fluharty mentioned in merge request !180075

  • Gal Katz requested review from @minac

  • Mehmet Emin INAC
  • Gal Katz added 3 commits

    • 6ec0c8e9 - Add cleaning previous state transitions data when dismissing
    • 9782ca9f - Separate mutation definition in bulk severity override rspec
    • 46e592db - Remove redundant test

  • Gal Katz reset approvals from @bwill by pushing to the branch

  • Gal Katz added 2 commits

    • 6b807028 - Refactor to remove template method pattern
    • 12fc63b2 - Replace prepare with validates

  • Gal Katz added 1 commit

  • Brian Williams
  • Brian Williams requested changes

  • Gal Katz added 1 commit

  • Gal Katz requested review from @bwill

  • Brian Williams approved this merge request

  • Brian Williams resolved all threads

  • Brian Williams enabled automatic add to merge train when checks pass

  • merged

  • Brian Williams mentioned in commit 74170c54

  • Hello @gkatz1 :wave:

    The database team is looking for ways to improve the database review process and we would love your help!

    If you'd be open to someone on the database team reaching out to you for a chat, or if you'd like to leave some feedback asynchronously, just post a reply to this comment mentioning:

    @gitlab-org/database-team

    And someone will be by shortly!

    Thanks for your help! :heart:

    This message was generated automatically. Improve it or delete it.

  • added workflowstaging label and removed workflowcanary label

  • Gal Katz mentioned in merge request !178577 (merged)
