Adjust the size of `max_batch_size` for `BackfillMemberNamespaceForGroupMembers` migration.
Production Change
Change Summary
Looking at the execution timings of `BackfillMemberNamespaceForGroupMembers` (`{"timings": {"update_all": [0.259079348994419, 0.2915952540060971, 0.281289629987441, 0.06752952399256174]}}`), we would like to raise the max batch size to 5,000 to make it quicker. Otherwise, it will run for several more days (it has been running for a week and is only at ~30%).
```
gitlabhq_dblab=# select id, created_at, max_value from batched_background_migration_jobs where batched_background_migration_id = 117 order by id desc limit 10;

  id   |          created_at           | max_value
-------+-------------------------------+-----------
 47855 | 2022-03-08 08:11:01.896936+00 |  15742543
 47854 | 2022-03-08 08:09:01.947898+00 |  15739546
```
We are only at id 15742543 (as of 08:11) and we need to reach 56647266.
Another change is to fix the `max_batch_size` for the `NullifyOrphanRunnerIdOnCiBuilds` migration. We originally set this to 25,000, but the correct value would be at least 100_000 to 150_000.
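As a rough sanity check on the runtime estimate, a minimal sketch in Ruby. The assumptions are mine, not measured: one batch every ~2 minutes (the gap between jobs 47854 and 47855 above), each batch advancing `max_value` by the ~3,000-id step visible between those two jobs, and throughput scaling linearly if `max_batch_size` is raised 5x:

```ruby
# Back-of-the-envelope estimate of remaining runtime for migration 117,
# based on the two job rows shown above.
current_id    = 15_742_543                 # max_value of the latest job (47855)
target_id     = 56_647_266                 # final id the migration must reach
ids_per_batch = 15_742_543 - 15_739_546    # ~3,000 ids per batch at the current setting
minutes_per_batch = 2.0                    # one batch roughly every 2 minutes

[1, 5].each do |speedup|
  batches = (target_id - current_id) / (ids_per_batch * speedup).to_f
  days = batches * minutes_per_batch / (60 * 24)
  puts format("%dx batch size -> ~%.0f days remaining", speedup, days)
end
```

Under these assumptions, the current pace would need weeks to finish, while the 5x batch size brings the remainder down to a few days, which is the motivation for the change.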
Change Details
- Services Impacted - GitLab Rails app
- Change Technician - @ahegyi
- Change Reviewer - @ahegyi, @pbair
- Time tracking - Time, in minutes, needed to execute all change steps, including rollback
- Downtime Component - None
Detailed steps for the change
Change Steps - steps to take to execute the change
Estimated Time to Complete (1 min) - Estimated Time to Complete in Minutes
Run these commands in the rails console.
- `Gitlab::Database::BackgroundMigration::BatchedMigration.find(117).update(max_batch_size: 5_000)`
- `Gitlab::Database::BackgroundMigration::BatchedMigration.find(118).update(max_batch_size: 150_000)`
Rollback
No rollback steps are needed.
Monitoring
Key metrics to observe
These changes will not affect the system immediately. The workspaces team and @ahegyi will monitor the background migration execution.
Summary of infrastructure changes
- Does this change introduce new compute instances? NO
- Does this change re-size any existing compute instances? NO
- Does this change introduce any additional usage of tooling like Elastic Search, CDNs, Cloudflare, etc? NO
Summary of the above
Change Reviewer checklist
- The scheduled day and time of execution of the change is appropriate.
- The change plan is technically accurate.
- The change plan includes estimated timing values based on previous testing.
- The change plan includes a viable rollback plan.
- The specified metrics/monitoring dashboards provide sufficient visibility for the change. If not, is it possible (or necessary) to make changes to observability platforms for added visibility?
- The complexity of the plan is appropriate for the corresponding risk of the change. (i.e. the plan contains clear details).
- The change plan includes success measures for all steps/milestones during the execution.
- The change adequately minimizes risk within the environment/service.
- The performance implications of executing the change are well-understood and documented.
- The change has a primary and secondary SRE with knowledge of the details available during the change window.
Change Technician checklist
- This issue has a criticality label (e.g. C1, C2, C3, C4) and a change-type label (e.g. changeunscheduled, changescheduled) based on the Change Management Criticalities.
- This issue has the change technician as the assignee.
- Pre-Change, Change, Post-Change, and Rollback steps have been filled out and reviewed.
- This Change Issue is linked to the appropriate Issue and/or Epic.
- Necessary approvals have been completed based on the Change Management Workflow.
- Change has been tested in staging and results noted in a comment on this issue.
- A dry-run has been conducted and results noted in a comment on this issue.
- SRE on-call has been informed prior to change being rolled out. (In #production channel, mention @sre-oncall and this issue and await their acknowledgement.)
- Release managers have been informed (if needed! Cases include DB change) prior to change being rolled out. (In #production channel, mention @release-managers and this issue and await their acknowledgment.)
- There are currently no active incidents.