[Production] Stop MigrateSharedVulnerabilityScanners background migration

Production Change

Change Summary

We want to stop the MigrateSharedVulnerabilityScanners batched background migration because it is blocking other background migrations and, at its current pace, would take roughly two years to finish.

From https://gitlab.slack.com/archives/CU9V380HW/p1663404893276689:

gitlab-org/gitlab!89127 (merged) migration is going to take way more than we expected 🤔 It is expected to migrate ~170K records, and we even added a specialized index for that; however, it seems to be scanning the 200M-row vulnerability_occurrences table in batches of 1K with an interval of 5 minutes, which by my rough estimation is going to take: 200_000_000 / 1_000 * 5 / 60 / 24 = ~694 days 😬
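The back-of-the-envelope estimate from the Slack thread can be reproduced with a few lines of Ruby (all figures are taken from the message above):

```ruby
# Rough ETA for a batched migration that scans the whole table,
# using the figures from the Slack estimate above.
table_rows   = 200_000_000 # rows in vulnerability_occurrences
batch_size   = 1_000       # rows scanned per batch
interval_min = 5           # minutes between batches

batches    = table_rows / batch_size          # => 200_000 batches
total_days = batches * interval_min / 60 / 24 # minutes -> days
puts "estimated runtime: ~#{total_days} days" # => ~694 days
```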

This migration should be visible in the admin panel (https://docs.gitlab.com/ee/update/index.html#check-the-status-of-batched-background-migrations), and if it blocks other migrations we should follow the documented procedure for stuck background migrations (https://docs.gitlab.com/ee/update/index.html#what-do-you-do-if-your-background-migrations-are-stuck).

Change Details

  1. Services Impacted - ~"Service::Postgres" ~"Service::Sidekiq"
  2. Change Technician - @ahanselka @bshah11
  3. Change Reviewer - @thiagocsf
  4. Time tracking - 30 minutes
  5. Downtime Component - No

Detailed steps for the change

Change Steps - steps to take to execute the change

Estimated Time to Complete (mins) - 30 minutes

  • Set label ~"change::in-progress": /label ~change::in-progress
  • Pause the batched_background_migration with id 180 and job_class_name MigrateSharedVulnerabilityScanners.
  • Set label ~"change::complete": /label ~change::complete
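For reference, the pause step can be sketched from a GitLab Rails console (`sudo gitlab-rails console`). This is a hypothetical sketch: the model and its `pause!` state-machine event come from GitLab's internal BatchedMigration code and should be verified against the deployed GitLab version before running anything in production.

```ruby
# Hypothetical sketch -- run in `sudo gitlab-rails console`; verify the
# model and its state-machine events against the deployed GitLab version.
migration = Gitlab::Database::BackgroundMigration::BatchedMigration.find(180)

# Guard against pausing the wrong migration.
raise "unexpected job class: #{migration.job_class_name}" unless
  migration.job_class_name == "MigrateSharedVulnerabilityScanners"

migration.pause! # transitions the migration to the paused status
```

Pausing only stops further batches from being scheduled; no data is lost and the migration can be resumed later.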

Rollback

Rollback steps - steps to be taken in the event of a need to rollback this change

Estimated Time to Complete (mins) - 30 minutes

  • Unpause the batched_background_migration with id 180 and job_class_name MigrateSharedVulnerabilityScanners.
  • Set label ~"change::aborted": /label ~change::aborted
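Assuming the same state machine as the pause step, unpausing would be the inverse event. The `execute!` event name (paused back to active) is my assumption and must be confirmed against the GitLab source for the running version before use:

```ruby
# Hypothetical sketch -- run in `sudo gitlab-rails console`; `execute!`
# (resume to active) is assumed from the BatchedMigration state machine.
migration = Gitlab::Database::BackgroundMigration::BatchedMigration.find(180)
migration.execute! if migration.paused? # move back to the active status
```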

Monitoring

Key metrics to observe

These changes will not affect the system immediately. We need to verify that other background migrations are no longer stuck: https://docs.gitlab.com/ee/update/index.html#check-the-status-of-batched-background-migrations
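Besides the admin panel, the overall state could be spot-checked from a Rails console. This is a sketch: `status_name` is assumed to be provided by the model's state machine, so confirm it against the running version first.

```ruby
# Hypothetical console sketch: list batched background migrations and
# their states so we can see whether anything else is stuck or stale.
Gitlab::Database::BackgroundMigration::BatchedMigration.order(:id).each do |m|
  puts format("%4d %-55s %s", m.id, m.job_class_name, m.status_name)
end
```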

Change Reviewer checklist

C4 C3 C2 C1:

  • Check if the following applies:
    • The scheduled day and time of execution of the change is appropriate.
    • The change plan is technically accurate.
    • The change plan includes estimated timing values based on previous testing.
    • The change plan includes a viable rollback plan.
    • The specified metrics/monitoring dashboards provide sufficient visibility for the change.

C2 C1:

  • Check if the following applies:
    • The complexity of the plan is appropriate for the corresponding risk of the change. (i.e. the plan contains clear details).
    • The change plan includes success measures for all steps/milestones during the execution.
    • The change adequately minimizes risk within the environment/service.
    • The performance implications of executing the change are well-understood and documented.
    • The specified metrics/monitoring dashboards provide sufficient visibility for the change.
      • If not, is it possible (or necessary) to make changes to observability platforms for added visibility?
    • The change has a primary and secondary SRE with knowledge of the details available during the change window.
    • The labels ~"blocks deployments" and/or ~"blocks feature-flags" are applied as necessary

Change Technician checklist

  • Check if all items below are complete:
    • The change plan is technically accurate.
    • This Change Issue is linked to the appropriate Issue and/or Epic
    • Change has been tested in staging and results noted in a comment on this issue.
    • A dry-run has been conducted and results noted in a comment on this issue.
    • The change execution window respects the Production Change Lock periods.
    • For C1 and C2 change issues, the change event is added to the GitLab Production calendar.
    • For C1 and C2 change issues, the SRE on-call has been informed prior to change being rolled out. (In #production channel, mention @sre-oncall and this issue and await their acknowledgement.)
    • For C1 and C2 change issues, the SRE on-call provided approval with the eoc_approved label on the issue.
    • For C1 and C2 change issues, the Infrastructure Manager provided approval with the manager_approved label on the issue.
    • Release managers have been informed (If needed! Cases include DB change) prior to change being rolled out. (In #production channel, mention @release-managers and this issue and await their acknowledgment.)
    • There are currently no active incidents that are ~"severity::1" or ~"severity::2"
    • If the change involves doing maintenance on a database host, an appropriate silence targeting the host(s) should be added for the duration of the change.
Edited by Alan (Maciej) Paruszewski