[GPRD] Sidekiq catchall shard migration to redis-sidekiq-catchall-a
Production Change
Change Summary
The change configures GitLab.com to be ready for a sharded Sidekiq setup. See scalability#3173 (closed) for an overview of the plan.
This CR focuses on enabling the feature flag gradually to cut over all workloads for the `default` and `mailers` queues. This effectively reduces the load on the `catchall` Sidekiq k8s deployment.
Blocked on #17867 (closed)
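For intuition on how the gradual rollout splits traffic, here is a minimal, hypothetical sketch of percentage-based (`--random`) routing. The function name and logic are illustrative assumptions for this CR, not the actual router implementation in the Rails codebase:

```python
import random

# Hypothetical sketch of percentage-based ("--random") feature flag routing;
# names and logic are illustrative, not the actual GitLab router code.
def route_shard(queue: str, rollout_percentage: float) -> str:
    """Return the shard an enqueue for `queue` would be routed to."""
    # Jobs for the migrated queues go to the new shard with probability
    # rollout_percentage/100; everything else stays on the default shard.
    if queue in ("default", "mailers") and random.random() * 100 < rollout_percentage:
        return "catchall_a"
    return "default"

# At the 20% step, roughly 1 in 5 default/mailers enqueues should land on
# catchall_a, which is why lpush traffic splits proportionally between shards.
sample = [route_shard("default", 20) for _ in range(10_000)]
print(sample.count("catchall_a") / len(sample))  # ~0.2
```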
Change Details
- Services Impacted - ~"Service::Sidekiq" ~"Service::RedisSidekiq"
- Change Technician - @schin1
- Change Reviewer - @igorwwwwwwwwwwwwwwwwwwww
- Time tracking - 2h
- Downtime Component - na
Set Maintenance Mode in GitLab
If your change involves scheduled maintenance, add a step to set and unset maintenance mode per our runbooks. This will make sure SLA calculations adjust for the maintenance period.
Detailed steps for the change
Change Steps - steps to take to execute the change
Estimated Time to Complete (120 mins)
- Set label ~change::in-progress: `/label ~change::in-progress`

Undo worker migration

- Disable the `sidekiq_route_to_queues_shard_catchall_a` feature flag to undo the migration.
- Wait ~10 minutes and observe that jobs are no longer being routed to `redis-sidekiq-catchall-a`.
  - This lets us verify the ability to roll back safely, i.e. no missing jobs.
- Merge k8s change: gitlab-com/gl-infra/k8s-workloads/gitlab-com!3519 (merged)
- Merge chef change: https://gitlab.com/gitlab-com/gl-infra/chef-repo/-/merge_requests/4629
Start migrating the catchall shard
- Enable catchall feature flag to 1% and wait ~15 minutes before proceeding: `/chatops run feature set sidekiq_route_to_queues_shard_catchall_a 1 --random --ignore-random-deprecation-check`
- Observe changes during the 15 minute wait:
  - Observe changes in `redis activity` (see "Key metrics to observe" below), where `lpush` to `shard: catchall_a` increases and `lpush` to `shard: default` decreases slightly. This should change proportionately as the percentage enabled increases (a query sketch for this check appears after this list).
  - Observe an increase in the `scheduled` set size in `shard: catchall_a`.
  - Observe an increase in `job enqueues` and `job execution` metrics for `shard: catchall_a`.
- Enable catchall feature flag to 10% and wait ~15 minutes before proceeding: `/chatops run feature set sidekiq_route_to_queues_shard_catchall_a 10 --random --ignore-random-deprecation-check`
- Enable catchall feature flag to 20% and wait ~15 minutes before proceeding: `/chatops run feature set sidekiq_route_to_queues_shard_catchall_a 20 --random --ignore-random-deprecation-check`
- Wait ~15 minutes and observe changes similar to those above.
- Enable catchall feature flag to 50% and wait ~15 minutes before proceeding: `/chatops run feature set sidekiq_route_to_queues_shard_catchall_a 50 --random --ignore-random-deprecation-check`
- Wait ~15 minutes and observe changes similar to those above.
- Enable catchall feature flag to 75% and wait ~15 minutes before proceeding: `/chatops run feature set sidekiq_route_to_queues_shard_catchall_a 75 --random --ignore-random-deprecation-check`
- Wait ~15 minutes and observe changes similar to those above.
- Enable catchall feature flag to 100%: `/chatops run feature set sidekiq_route_to_queues_shard_catchall_a true`
- Wait ~15 minutes and observe changes similar to those above.
- Set label ~change::complete: `/label ~change::complete`
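To verify the `lpush` split at each rollout step, the relevant rates can be pulled from Thanos. This is a rough sketch only: the endpoint URL is a placeholder, and the metric name (`redis_commands_total` as exported by redis_exporter) and the `type` label are assumptions to be checked against the actual dashboards referenced under "Key metrics to observe".

```python
"""Sketch: compare lpush rates on the two Sidekiq Redis shards via the Thanos
HTTP API. URL, metric name and labels are assumptions, not the gprd config."""
import requests

THANOS_URL = "https://thanos.example.internal"  # placeholder, not the real endpoint

QUERY = """
sum by (type) (
  rate(redis_commands_total{cmd="lpush", type=~"redis-sidekiq|redis-sidekiq-catchall-a"}[5m])
)
"""

def lpush_rates() -> dict:
    resp = requests.get(f"{THANOS_URL}/api/v1/query", params={"query": QUERY}, timeout=30)
    resp.raise_for_status()
    results = resp.json()["data"]["result"]
    return {r["metric"]["type"]: float(r["value"][1]) for r in results}

if __name__ == "__main__":
    rates = lpush_rates()
    total = sum(rates.values()) or 1.0
    for shard, rate in rates.items():
        # The catchall_a share should roughly track the rollout percentage.
        print(f"{shard}: {rate:.1f} lpush/s ({rate / total:.1%})")
```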
Rollback
Rollback steps - steps to be taken in the event of a need to rollback this change
Estimated Time to Complete (10 mins)
- Disable the `enable_sidekiq_shard_router` feature flag to stop routing jobs to the `catchall_a` shard: `/chatops run feature set enable_sidekiq_shard_router false --staging`
  - The remaining jobs in the `catchall_a` shard will be removed by the `catchall_migrator` Sidekiq deployment, since those Sidekiq processes will have their `Sidekiq.redis` configured to use `redis-sidekiq-catchall-a`. More details in scalability#3173 (comment 1862742026).
  - Scheduled jobs in each Redis will be polled by the respective deployments.
  - Monitor the `scheduled job sizes` until they go to zero (a sketch for checking the remaining set sizes directly appears after this list).
- Set label ~change::aborted: `/label ~change::aborted`
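If direct confirmation is needed that the `catchall_a` shard has drained during rollback, the scheduled and retry set sizes can be checked on each Redis. A minimal sketch with redis-py, assuming placeholder hostnames and default Sidekiq key names (`schedule` and `retry` sorted sets; a namespace prefix may apply depending on configuration):

```python
"""Sketch: check remaining Sidekiq scheduled/retry set sizes on each shard.
Hostnames and connection details are placeholders, not the gprd endpoints."""
import redis

SHARDS = {
    "default": "redis-sidekiq.example.internal",
    "catchall_a": "redis-sidekiq-catchall-a.example.internal",
}

def remaining_jobs() -> None:
    for shard, host in SHARDS.items():
        client = redis.Redis(host=host, port=6379)
        scheduled = client.zcard("schedule")  # Sidekiq scheduled set
        retries = client.zcard("retry")       # Sidekiq retry set
        print(f"{shard}: scheduled={scheduled} retry={retries}")

if __name__ == "__main__":
    # During rollback, catchall_a counts should drain to zero as the
    # catchall_migrator deployment works through the remaining jobs.
    remaining_jobs()
```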
Monitoring
Key metrics to observe
- Metric: queue size
  - Location: Thanos
  - What changes to this metric should prompt a rollback: the `mailers` and `default` queues in the `default` shard are still growing after the feature flag is fully enabled.
- Metric: redis activity
  - Location: Thanos
  - What changes to this metric should prompt a rollback: the proportion of `lpush` and `brpop` does not match the feature flag toggle, e.g. if workload has moved to the new shard, we should see an increase in `lpush` on the `catchall_a` shard.
- Metric: scheduled job sizes
  - Location: Thanos
  - What changes to this metric should prompt a rollback: the `catchall_a` shard metrics do not increase after the feature flag is enabled. Note: mileage may vary for gstg since we may not have many scheduled jobs; I'd advise waiting or even ramping up the % of rollout.
- Metric: retry job sizes
  - Location: Thanos
  - What changes to this metric should prompt a rollback: the new shard metric should increase and stabilise while the default shard metric should decrease and stabilise.
- Metric: job enqueues
  - Location: Thanos
  - What changes to this metric should prompt a rollback: any disproportionate-looking increases or dips.
- Metric: job execution
  - Location: Thanos
  - What changes to this metric should prompt a rollback: any disproportionate-looking increases or dips.
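As a rough aid for the checks above, instant values of the shard-level Sidekiq metrics can be pulled from the Thanos HTTP API. The metric names (`sidekiq_enqueued_jobs_total`, `sidekiq_jobs_completion_seconds_count`, `sidekiq_queue_size`) and the `shard` label are assumptions about the gprd exporters; treat the existing dashboards as the source of truth.

```python
"""Sketch: instant-value checks for shard-level Sidekiq metrics via Thanos.
Endpoint, metric names and labels are assumptions, not the gprd config."""
import requests

THANOS_URL = "https://thanos.example.internal"  # placeholder

# One hedged example query per relevant "Key metrics to observe" entry.
QUERIES = {
    "job enqueues": 'sum by (shard) (rate(sidekiq_enqueued_jobs_total[5m]))',
    "job execution": 'sum by (shard) (rate(sidekiq_jobs_completion_seconds_count[5m]))',
    "queue size": 'sum by (shard, queue) (sidekiq_queue_size{queue=~"default|mailers"})',
}

def instant(query: str) -> list:
    resp = requests.get(f"{THANOS_URL}/api/v1/query", params={"query": query}, timeout=30)
    resp.raise_for_status()
    return resp.json()["data"]["result"]

if __name__ == "__main__":
    for name, query in QUERIES.items():
        print(name)
        for series in instant(query):
            print(f"  {series['metric']} -> {series['value'][1]}")
```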
Change Reviewer checklist
- Check if the following applies:
  - The scheduled day and time of execution of the change is appropriate.
  - The change plan is technically accurate.
  - The change plan includes estimated timing values based on previous testing.
  - The change plan includes a viable rollback plan.
  - The specified metrics/monitoring dashboards provide sufficient visibility for the change.
- Check if the following applies:
  - The complexity of the plan is appropriate for the corresponding risk of the change (i.e. the plan contains clear details).
  - The change plan includes success measures for all steps/milestones during the execution.
  - The change adequately minimizes risk within the environment/service.
  - The performance implications of executing the change are well-understood and documented.
  - The specified metrics/monitoring dashboards provide sufficient visibility for the change.
    - If not, is it possible (or necessary) to make changes to observability platforms for added visibility?
  - The change has a primary and secondary SRE with knowledge of the details available during the change window.
  - The change window has been agreed with Release Managers in advance of the change. If the change is planned for APAC hours, this issue has an agreed pre-change approval.
  - The labels ~"blocks deployments" and/or ~"blocks feature-flags" are applied as necessary.
Change Technician checklist
- Check if all items below are complete:
  - The change plan is technically accurate.
  - This Change Issue is linked to the appropriate Issue and/or Epic.
  - Change has been tested in staging and results noted in a comment on this issue.
  - A dry-run has been conducted and results noted in a comment on this issue.
  - The change execution window respects the Production Change Lock periods.
  - For C1 and C2 change issues, the change event is added to the GitLab Production calendar.
  - For C1 and C2 change issues, the SRE on-call has been informed prior to change being rolled out. (In the #production channel, mention `@sre-oncall` and this issue and await their acknowledgement.)
  - For C1 and C2 change issues, the SRE on-call provided approval with the eoc_approved label on the issue.
  - For C1 and C2 change issues, the Infrastructure Manager provided approval with the manager_approved label on the issue.
  - Release managers have been informed prior to any C1, C2, or ~"blocks deployments" change being rolled out. (In the #production channel, mention `@release-managers` and this issue and await their acknowledgment.)
  - There are currently no active incidents that are ~severity::1 or ~severity::2.
  - If the change involves doing maintenance on a database host, an appropriate silence targeting the host(s) should be added for the duration of the change.