# [GPRD] Deprecate Sidekiq namespaces

Production Change
## Change Summary
This change deprecates the `resque:gitlab` namespace in `redis-sidekiq` for the `gprd` environment. See &944 (closed). A similar CR was performed for `gstg` (#16172 (closed)).

The tentative date for this CR is 5th Sep, 0100h UTC.
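For context, the Sidekiq "namespace" here is a key prefix applied by the redis-namespace gem: namespaced clients read and write `resque:gitlab:*` keys, while non-namespaced clients use the bare key names. A minimal sketch of the difference, assuming a local Redis and the `redis`/`redis-namespace` gems (the queue name and payload are illustrative):

```ruby
require 'redis'
require 'redis-namespace'

redis = Redis.new(url: ENV.fetch('REDIS_URL', 'redis://localhost:6379'))

# Namespaced client: every key is transparently prefixed with "resque:gitlab:".
namespaced = Redis::Namespace.new('resque:gitlab', redis: redis)
namespaced.lpush('queue:default', '{"class":"ExampleWorker"}')
redis.exists?('resque:gitlab:queue:default') # => true

# Non-namespaced client: the same logical queue lives at the bare key,
# which is where clients will enqueue once this CR is complete.
redis.lpush('queue:default', '{"class":"ExampleWorker"}')
redis.exists?('queue:default') # => true
```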
## Change Details
- **Services Impacted** - ~"Service::Sidekiq" ~"Service::RedisSidekiq"
- **Change Technician** - @schin1
- **Change Reviewer** - @igorwwwwwwwwwwwwwwwwwwww
- **Time tracking** - 3 hours
- **Downtime Component** - n/a
## Detailed steps for the change

### Change Steps - steps to take to execute the change

Estimated Time to Complete (180 mins)
- [ ] Set label ~"change::in-progress" `/label ~change::in-progress`
#### Part 1: Get Sidekiq clients to enqueue on non-namespaced queues + crons to use non-namespaced keys
- [ ] Enable the environment variable `SIDEKIQ_ENQUEUE_NON_NAMESPACED` gradually via a k8s-workload pipeline (see the sketch after this list for what the variable toggles). Note that for shard deployments, crons need to be disabled on all but the last deployment to ensure that there is only one group of cron pollers at any point in time.
  - gstg-cny: gitlab-com/gl-infra/k8s-workloads/gitlab-com!2993 (merged)
  - us-east1-b: gitlab-com/gl-infra/k8s-workloads/gitlab-com!2994 (merged)
  - us-east1-c + us-east1-d: gitlab-com/gl-infra/k8s-workloads/gitlab-com!2995 (merged)
  - `low-urgency-cpu-bound` shard with cron poller disabled (a small Sidekiq shard that functions as a "canary" for Sidekiq): gitlab-com/gl-infra/k8s-workloads/gitlab-com!2996 (merged)
  - `catchall` shard with cron poller disabled: gitlab-com/gl-infra/k8s-workloads/gitlab-com!2997 (merged)
  - `urgent-other` shard with cron poller disabled: gitlab-com/gl-infra/k8s-workloads/gitlab-com!2998 (merged)
  - the rest: gitlab-com/gl-infra/k8s-workloads/gitlab-com!2999 (merged)
- [ ] Observe the change in `namespaced` labels for various Sidekiq metrics using the Thanos link - a drop in namespaced processes and a rise in non-namespaced processes should be seen.
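For illustration only, the enqueue-side toggle amounts to choosing between a namespaced and a bare Redis client based on the environment variable. This sketch is not GitLab's actual implementation (the helper name and URL variable are assumptions); `SIDEKIQ_POLL_NON_NAMESPACED` gates the consumer side in the same fashion:

```ruby
require 'redis'
require 'redis-namespace'

# Hypothetical helper, not GitLab's real wiring: pick the client that
# SIDEKIQ_ENQUEUE_NON_NAMESPACED asks for.
def sidekiq_redis
  raw = Redis.new(url: ENV.fetch('REDIS_SIDEKIQ_URL', 'redis://localhost:6379'))
  return raw if ENV['SIDEKIQ_ENQUEUE_NON_NAMESPACED'] == 'true'

  # Default: keep reading and writing under the resque:gitlab namespace.
  Redis::Namespace.new('resque:gitlab', redis: raw)
end
```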
#### Part 2: Update chef-controlled nodes
- [ ] Enable the environment variable `SIDEKIQ_ENQUEUE_NON_NAMESPACED` for the console and deployer VMs via a chef-repo MR: https://gitlab.com/gitlab-com/gl-infra/chef-repo/-/merge_requests/3937
- [ ] Verify that the namespaced queues are drained and that no namespaced processes remain (a spot-check sketch follows this list)
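One way to spot-check the drain from a console, a sketch assuming the key pattern from the summary above and a hypothetical `REDIS_SIDEKIQ_URL` variable:

```ruby
require 'redis'

redis = Redis.new(url: ENV.fetch('REDIS_SIDEKIQ_URL', 'redis://localhost:6379'))

# List any namespaced queues that still hold jobs.
leftovers = redis.scan_each(match: 'resque:gitlab:queue:*').to_a
if leftovers.empty?
  puts 'all namespaced queues drained'
else
  leftovers.each { |key| puts "#{key}: #{redis.llen(key)} jobs" }
end
```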
#### Part 3: Migrate the remainder of scheduled and retry jobs
Optionally, we could wait and let these sets drain naturally until they plateau (long-term jobs). However, running the snippet gives us a better idea of processing time, which helps us craft a more operationally friendly post-deployment migration.
- [ ] [Needs SRE assistance] Run sorted set migrations via the console (https://gitlab.com/gitlab-com/gl-infra/scalability/-/snippets/2585313) if needed - the migration of the scheduled/dead/retry sets can be tracked via the Thanos link (a sketch of the sorted-set move follows this list)
- [ ] Set label ~"change::complete" `/label ~change::complete`
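For reference, the Part 3 migration boils down to moving the members of Sidekiq's `schedule`/`retry`/`dead` sorted sets out of the namespace while preserving their scores (the score encodes when a job should run or retry). The linked snippet is the canonical version; this is only a minimal sketch, with batch size, error handling, and concurrent-enqueue races simplified away:

```ruby
require 'redis'

NAMESPACE = 'resque:gitlab'
SETS = %w[schedule retry dead].freeze

redis = Redis.new(url: ENV.fetch('REDIS_SIDEKIQ_URL', 'redis://localhost:6379'))

SETS.each do |set|
  src = "#{NAMESPACE}:#{set}"
  loop do
    # Move members in small batches so the Redis primary is never blocked long.
    batch = redis.zrange(src, 0, 99, with_scores: true)
    break if batch.empty?

    redis.multi do |tx|
      batch.each do |member, score|
        tx.zadd(set, score, member) # keep the original scheduled-at score
        tx.zrem(src, member)
      end
    end
  end
end
```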
## Rollback

### Rollback steps - steps to be taken in the event of a need to roll back this change

Estimated Time to Complete (120 mins)
- [ ] Undo the chef-repo MR
- [ ] Undo the k8s-workload MRs for the `SIDEKIQ_ENQUEUE_NON_NAMESPACED` environment variable
- [ ] Wait until the non-namespaced queues are drained before undoing the k8s-workload MR for the `SIDEKIQ_POLL_NON_NAMESPACED` environment variable, or use https://gitlab.com/gitlab-com/gl-infra/scalability/-/snippets/2585313 to move jobs across namespaces
- [ ] Set label ~"change::aborted" `/label ~change::aborted`
## Monitoring

### Key metrics to observe
- Metric: `redis_primary_cpu` saturation ratio for the `redis-sidekiq` service
  - Location: https://dashboards.gitlab.net/d/redis-sidekiq-main/redis-sidekiq-overview?orgId=1
  - What changes to this metric should prompt a rollback: if the apdex ratio crosses the 1h outage SLO or the 6h degradation SLO
- Metric: Sidekiq execution & queueing apdex + RPS
  - Location: https://dashboards.gitlab.net/d/sidekiq-main/sidekiq-overview?orgId=1&var-PROMETHEUS_DS=Global&var-environment=gprd&var-stage=main&var-shard=All
  - What changes to this metric should prompt a rollback: if the error ratio spikes, or if the apdex ratio crosses the 1h outage SLO or the 6h degradation SLO
### Non-dashboard links
## Change Reviewer checklist
- [ ] Check if the following applies:
  - The scheduled day and time of execution of the change is appropriate.
  - The change plan is technically accurate.
  - The change plan includes estimated timing values based on previous testing.
  - The change plan includes a viable rollback plan.
  - The specified metrics/monitoring dashboards provide sufficient visibility for the change.
- [ ] Check if the following applies:
  - The complexity of the plan is appropriate for the corresponding risk of the change (i.e. the plan contains clear details).
  - The change plan includes success measures for all steps/milestones during the execution.
  - The change adequately minimizes risk within the environment/service.
  - The performance implications of executing the change are well-understood and documented.
  - The specified metrics/monitoring dashboards provide sufficient visibility for the change.
    - If not, is it possible (or necessary) to make changes to observability platforms for added visibility?
  - The change has a primary and secondary SRE with knowledge of the details available during the change window.
  - The change window has been agreed with Release Managers in advance of the change. If the change is planned for APAC hours, this issue has an agreed pre-change approval.
  - The labels ~"blocks deployments" and/or ~"blocks feature-flags" are applied as necessary.
## Change Technician checklist
- [ ] Check if all items below are complete:
  - The change plan is technically accurate.
  - This Change Issue is linked to the appropriate Issue and/or Epic.
  - The change has been tested in staging and the results noted in a comment on this issue.
  - A dry-run has been conducted and the results noted in a comment on this issue.
  - The change execution window respects the Production Change Lock periods.
  - For C1 and C2 change issues, the change event is added to the GitLab Production calendar.
  - For C1 and C2 change issues, the SRE on-call has been informed prior to the change being rolled out. (In the #production channel, mention @sre-oncall and this issue and await their acknowledgement.)
  - For C1 and C2 change issues, the SRE on-call provided approval with the ~eoc_approved label on the issue.
  - For C1 and C2 change issues, the Infrastructure Manager provided approval with the ~manager_approved label on the issue.
  - Release managers have been informed prior to any C1, C2, or ~"blocks deployments" change being rolled out. (In the #production channel, mention @release-managers and this issue and await their acknowledgment.)
  - There are currently no active incidents that are ~"severity::1" or ~"severity::2".
  - If the change involves doing maintenance on a database host, an appropriate silence targeting the host(s) should be added for the duration of the change.