Increase concurrency for container registry data repair worker from 4 to 8
Production Change
## Change Summary
Earlier this month (June 2023), we added a new worker, `ContainerRegistry::RecordDataRepairDetailWorker`, which iterates over all projects and, for each project, checks with the container registry whether there are missing container repositories with tags, recording the counts in the `ContainerRegistry::DataRepairDetail` table.
The previous increase from 2 to 4 was done in #15935 (closed).
We have seen good progress so far: the worker has scanned 3.5% of all projects. Given the current velocity, and given that the load on the container registry API is fairly low (~1 req/s), we would like to bump the concurrency from 4 to 8.
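For context, the 3.5% figure above can be reproduced from a Rails console. This is a hedged sketch, not part of the change steps, and it assumes the worker records one `ContainerRegistry::DataRepairDetail` row per processed project:

```ruby
# Estimate repair progress as processed projects / total projects.
# Note: counts on GitLab.com-scale tables can be slow; run sparingly.
processed = ContainerRegistry::DataRepairDetail.count
total     = Project.count

puts format("%.1f%% of projects scanned (%d of %d)", 100.0 * processed / total, processed, total)
```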
## Change Details
- Services Impacted - GitLab Rails, Container Registry
- Change Technician - @reprazent
- Change Reviewer - @reprazent
- Time tracking - 5 minutes
- Downtime Component - N/A
## Detailed steps for the change
Change Steps - steps to take to execute the change
Estimated Time to Complete (3 mins)
- Set label ~"change::in-progress": `/label ~change::in-progress`
- Open a production Rails console session with write access.
- Double-check that the `container_registry_data_repair_detail_worker_max_concurrency` application setting is set to `4`; if not, report back before proceeding (a guarded one-shot version of this and the next two console steps is sketched after this list):

  ```ruby
  Gitlab::CurrentSettings.current_application_settings.container_registry_data_repair_detail_worker_max_concurrency
  => 4
  ```

- Update the setting to `8` with the following command:

  ```ruby
  Gitlab::CurrentSettings.current_application_settings.update!(container_registry_data_repair_detail_worker_max_concurrency: 8)
  ```

- Verify that the update was successful:

  ```ruby
  Gitlab::CurrentSettings.current_application_settings.container_registry_data_repair_detail_worker_max_concurrency
  => 8
  ```

- Set label ~"change::complete": `/label ~change::complete`
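The double-check, update, and verify console steps above can also be run as a single guarded update. This is a hedged convenience sketch built only from the commands in the list above; it is not a required part of the change:

```ruby
# Only bump the setting if it still has the expected starting value,
# mirroring the double-check step above.
settings = Gitlab::CurrentSettings.current_application_settings
current  = settings.container_registry_data_repair_detail_worker_max_concurrency

if current == 4
  settings.update!(container_registry_data_repair_detail_worker_max_concurrency: 8)
  puts "Concurrency is now #{settings.reload.container_registry_data_repair_detail_worker_max_concurrency}"
else
  # Unexpected starting value: stop and report back instead of updating.
  puts "Expected 4, found #{current}; not updating."
end
```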
## Rollback
Rollback steps - steps to be taken in the event of a need to roll back this change
Estimated Time to Complete (2 mins)
- Update the setting back to `4` with the following command:

  ```ruby
  Gitlab::CurrentSettings.current_application_settings.update!(container_registry_data_repair_detail_worker_max_concurrency: 4)
  ```

- Set label ~"change::aborted": `/label ~change::aborted`
## Monitoring
Key metrics to observe
- Metric: Maximum number of running jobs for the worker
  - Location: Thanos
  - What changes to this metric should prompt a rollback: Once the change is executed, the maximum reported number of running jobs should increase to 8. If it decreases or drops to 0, something went wrong and the change should be rolled back. There are still a lot of projects to be processed, so we expect the worker to scale up to the new maximum as soon as the setting allows it (a console spot-check is sketched below).
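As a complement to the Thanos dashboard, the running-job count can be spot-checked from a Rails console via the Sidekiq API. This is a hedged sketch: the worker class name comes from the summary above, and the handling of `payload` is defensive because its format differs between Sidekiq versions.

```ruby
require "sidekiq/api"
require "json"

# Count busy Sidekiq threads currently executing the repair worker.
running = Sidekiq::Workers.new.count do |_process_id, _thread_id, work|
  payload = work["payload"]
  payload = JSON.parse(payload) if payload.is_a?(String) # older Sidekiq returns a JSON string
  payload["class"] == "ContainerRegistry::RecordDataRepairDetailWorker"
end

# After the change, this should trend toward (but never exceed) 8.
puts "Currently running repair jobs: #{running}"
```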
- Metric: Load on the container registry API
  - Location: Thanos
  - What changes to this metric should prompt a rollback: We expect this metric to increase once the change is executed. If it drops to 0, our worker has stopped querying the container registry and something has gone wrong. A steep increase (more than double or triple the current ~1 req/s) can also signal a problem. In either case the change should be rolled back.
## Change Reviewer checklist
- [ ] Check if the following applies:
  - The scheduled day and time of execution of the change is appropriate.
  - The change plan is technically accurate.
  - The change plan includes estimated timing values based on previous testing.
  - The change plan includes a viable rollback plan.
  - The specified metrics/monitoring dashboards provide sufficient visibility for the change.
- [ ] Check if the following applies:
  - The complexity of the plan is appropriate for the corresponding risk of the change (i.e. the plan contains clear details).
  - The change plan includes success measures for all steps/milestones during the execution.
  - The change adequately minimizes risk within the environment/service.
  - The performance implications of executing the change are well-understood and documented.
  - The specified metrics/monitoring dashboards provide sufficient visibility for the change.
    - If not, is it possible (or necessary) to make changes to observability platforms for added visibility?
  - The change has a primary and secondary SRE with knowledge of the details available during the change window.
  - The change window has been agreed with Release Managers in advance of the change. If the change is planned for APAC hours, this issue has an agreed pre-change approval.
  - The labels ~"blocks deployments" and/or ~"blocks feature-flags" are applied as necessary.
## Change Technician checklist
- [ ] Check if all items below are complete:
  - The change plan is technically accurate.
  - This Change Issue is linked to the appropriate Issue and/or Epic.
  - Change has been tested in staging and results noted in a comment on this issue.
  - A dry-run has been conducted and results noted in a comment on this issue.
  - The change execution window respects the Production Change Lock periods.
  - For C1 and C2 change issues, the change event is added to the GitLab Production calendar.
  - For C1 and C2 change issues, the SRE on-call has been informed prior to the change being rolled out. (In the #production channel, mention @sre-oncall and this issue and await their acknowledgement.)
  - For C1 and C2 change issues, the SRE on-call provided approval with the ~eoc_approved label on the issue.
  - For C1 and C2 change issues, the Infrastructure Manager provided approval with the ~manager_approved label on the issue.
  - Release managers have been informed prior to any C1, C2, or ~"blocks deployments" change being rolled out. (In the #production channel, mention @release-managers and this issue and await their acknowledgment.)
  - There are currently no active incidents that are ~"severity::1" or ~"severity::2".
  - If the change involves doing maintenance on a database host, an appropriate silence targeting the host(s) should be added for the duration of the change.