[GSTG] Deprecate sidekiq namespaces
Production Change
Change Summary
This change deprecates the `resque:gitlab` namespace in `redis-sidekiq` for the gstg environment. See &944 (closed).

This change is scheduled for 28 Aug, 13:00 UTC.
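For context, a minimal sketch of what the deprecation changes at the key level. With the `resque:gitlab` namespace (via the redis-namespace behaviour), every Sidekiq key in `redis-sidekiq` carries the prefix; after the change, the bare key is used. The helper names below are illustrative only, not part of any real codebase:

```python
# Illustrative sketch only: how Sidekiq keys in redis-sidekiq are named
# before and after the resque:gitlab namespace is deprecated.

NAMESPACE = "resque:gitlab"  # the namespace being deprecated

def namespaced_key(key: str) -> str:
    """Key layout before the change: redis-namespace prefixes every key."""
    return f"{NAMESPACE}:{key}"

def non_namespaced_key(key: str) -> str:
    """Key layout after the change: the bare key."""
    return key

print(namespaced_key("queue:default"))      # resque:gitlab:queue:default
print(non_namespaced_key("queue:default"))  # queue:default
```

The two parts below exist because both layouts coexist during the transition: workers must poll both, and clients are switched over to enqueue on the bare keys.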
Change Details
- Services Impacted - Service::Sidekiq
- Change Technician - @schin1
- Change Reviewer - @reprazent @alejandro
- Time tracking - 2h
- Downtime Component - N/A
Detailed steps for the change
Change Steps - steps to take to execute the change
Estimated Time to Complete (120 mins)
- [ ] Set label ~change::in-progress: `/label ~change::in-progress`
Part 1: Get Sidekiq workers to poll namespaced and non-namespaced queues
- [ ] Enable the environment variable `SIDEKIQ_POLL_NON_NAMESPACED` via a k8s-workload pipeline (gitlab-com/gl-infra/k8s-workloads/gitlab-com!2964 (merged)). This should take ~4 minutes, based on a past gstg rollout: https://ops.gitlab.net/gitlab-com/gl-infra/k8s-workloads/gitlab-com/-/jobs/10909963
- [ ] Observe `redis-sidekiq` and general Sidekiq apdex, RPS, and error rates.
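The dual-polling behaviour that `SIDEKIQ_POLL_NON_NAMESPACED` enables can be sketched as follows. This is a plain-Python illustration under stated assumptions, not the actual Sidekiq fetch code: a dict of deques stands in for the Redis keyspace, and workers check the namespaced queue key as well as the bare key, so jobs enqueued under either layout keep being processed during the transition:

```python
from collections import deque

NAMESPACE = "resque:gitlab"

def fetch_job(redis: dict, queue: str, poll_non_namespaced: bool):
    """Pop one job, checking the namespaced key first and then,
    if the flag is enabled, the bare (non-namespaced) key.
    `redis` is a plain dict standing in for the Redis keyspace."""
    keys = [f"{NAMESPACE}:queue:{queue}"]
    if poll_non_namespaced:
        keys.append(f"queue:{queue}")
    for key in keys:
        jobs = redis.get(key)
        if jobs:
            return jobs.popleft()
    return None

# Jobs exist under both layouts during the transition; both get processed.
redis = {
    f"{NAMESPACE}:queue:default": deque(["old-job"]),
    "queue:default": deque(["new-job"]),
}
print(fetch_job(redis, "default", poll_non_namespaced=True))  # old-job
print(fetch_job(redis, "default", poll_non_namespaced=True))  # new-job
```

Without the flag, only the namespaced key is polled, which is why this part must land before clients start enqueueing on bare keys in Part 2.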
Part 2: Get Sidekiq clients to enqueue on non-namespaced queues + crons to use non-namespaced keys
- [ ] Enable the environment variable `SIDEKIQ_ENQUEUE_NON_NAMESPACED` gradually via k8s-workload pipelines. Note that for shard deployments, crons need to be disabled in all but the last deployment, to ensure that there is only one group of cron pollers at any point in time.
  - gstg-cny: gitlab-com/gl-infra/k8s-workloads/gitlab-com!2981 (merged)
  - us-east1-b + us-east1-c + us-east1-d: gitlab-com/gl-infra/k8s-workloads/gitlab-com!2982 (merged)
  - catchall with cron poller disabled: gitlab-com/gl-infra/k8s-workloads/gitlab-com!2983 (merged)
  - the rest: gitlab-com/gl-infra/k8s-workloads/gitlab-com!2984 (merged)
- [ ] Observe the change in `namespaced` labels for the various Sidekiq metrics using the Thanos link: a drop in namespaced processes and a rise in non-namespaced processes should be seen.
- [ ] [Needs SRE assistance] Run sorted set migrations via the console (https://gitlab.com/gitlab-com/gl-infra/scalability/-/snippets/2585313) if needed. The migration of the scheduled/dead/retry sets can be tracked via the Thanos link.
- [ ] Enable the environment variable `SIDEKIQ_ENQUEUE_NON_NAMESPACED` for the console and deployer VMs via a chef-repo MR (https://gitlab.com/gitlab-com/gl-infra/chef-repo/-/merge_requests/3886)
- [ ] Verify that the namespaced queues are drained and that no namespaced processes still exist, using the Thanos link.
- [ ] Set label ~change::complete: `/label ~change::complete`
Rollback
Rollback steps - steps to be taken in the event of a need to rollback this change
Estimated Time to Complete (60 mins)
- [ ] Undo the chef-repo MR
- [ ] Undo the k8s-workload MR for the `SIDEKIQ_ENQUEUE_NON_NAMESPACED` environment variable
- [ ] Wait until the non-namespaced queues are drained before undoing the k8s-workload MR for the `SIDEKIQ_POLL_NON_NAMESPACED` environment variable, or use https://gitlab.com/gitlab-com/gl-infra/scalability/-/snippets/2585313 to move jobs across namespaces
- [ ] Set label ~change::aborted: `/label ~change::aborted`
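The rollback ordering matters: reverting `SIDEKIQ_POLL_NON_NAMESPACED` while bare-key queues still hold jobs would strand them. A minimal sketch of the precondition check, assuming a hypothetical `queue_lengths` mapping that stands in for `LLEN` over the queue keys:

```python
# Illustrative precondition for reverting SIDEKIQ_POLL_NON_NAMESPACED:
# every non-namespaced (bare) queue key must be empty, otherwise jobs
# would be stranded once workers stop polling those keys.

def safe_to_revert_poll_flag(queue_lengths: dict) -> bool:
    """True only when every bare `queue:*` key is drained.
    `queue_lengths` maps Redis key -> LLEN, gathered elsewhere."""
    return all(
        length == 0
        for key, length in queue_lengths.items()
        if key.startswith("queue:")  # bare (non-namespaced) queue keys only
    )

print(safe_to_revert_poll_flag({"queue:default": 3}))   # False
print(safe_to_revert_poll_flag({"queue:default": 0,
                                "resque:gitlab:queue:x": 5}))  # True
```

Namespaced keys are deliberately ignored here: after the poll-flag revert, workers poll only the namespaced keys again, so those jobs continue to be processed.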
Monitoring
Key metrics to observe
- Metric: `redis_primary_cpu` saturation ratio for the `redis-sidekiq` service
  - Location: https://dashboards.gitlab.net/d/redis-sidekiq-main/redis-sidekiq-overview?orgId=1
  - What changes to this metric should prompt a rollback: the apdex ratio crosses the 1h outage SLO or the 6h degradation SLO
- Metric: Sidekiq execution & queueing apdex + RPS
  - Location: https://dashboards.gitlab.net/d/sidekiq-main/sidekiq-overview?orgId=1&var-PROMETHEUS_DS=Global&var-environment=gprd&var-stage=main&var-shard=All
  - What changes to this metric should prompt a rollback: error ratio spikes, or the apdex ratio crosses the 1h outage SLO or the 6h degradation SLO
Non-dashboard links
Change Reviewer checklist
- [ ] Check if the following applies:
  - The scheduled day and time of execution of the change is appropriate.
  - The change plan is technically accurate.
  - The change plan includes estimated timing values based on previous testing.
  - The change plan includes a viable rollback plan.
  - The specified metrics/monitoring dashboards provide sufficient visibility for the change.
- [ ] Check if the following applies:
  - The complexity of the plan is appropriate for the corresponding risk of the change (i.e. the plan contains clear details).
  - The change plan includes success measures for all steps/milestones during the execution.
  - The change adequately minimizes risk within the environment/service.
  - The performance implications of executing the change are well-understood and documented.
  - The specified metrics/monitoring dashboards provide sufficient visibility for the change.
    - If not, is it possible (or necessary) to make changes to observability platforms for added visibility?
  - The change has a primary and secondary SRE with knowledge of the details available during the change window.
  - The change window has been agreed with Release Managers in advance of the change. If the change is planned for APAC hours, this issue has an agreed pre-change approval.
  - The labels blocks deployments and/or blocks feature-flags are applied as necessary.
Change Technician checklist
- [ ] Check if all items below are complete:
  - The change plan is technically accurate.
  - This Change Issue is linked to the appropriate Issue and/or Epic.
  - The change has been tested in staging and results noted in a comment on this issue.
  - A dry-run has been conducted and results noted in a comment on this issue.
  - The change execution window respects the Production Change Lock periods.
  - For C1 and C2 change issues, the change event is added to the GitLab Production calendar.
  - For C1 and C2 change issues, the SRE on-call has been informed prior to the change being rolled out. (In the #production channel, mention @sre-oncall and this issue and await their acknowledgement.)
  - For C1 and C2 change issues, the SRE on-call provided approval with the eoc_approved label on the issue.
  - For C1 and C2 change issues, the Infrastructure Manager provided approval with the manager_approved label on the issue.
  - Release managers have been informed prior to any C1, C2, or blocks deployments change being rolled out. (In the #production channel, mention @release-managers and this issue and await their acknowledgment.)
  - There are currently no active incidents that are severity::1 or severity::2.
  - If the change involves doing maintenance on a database host, an appropriate silence targeting the host(s) should be added for the duration of the change.