[GSTG] Clean up Sidekiq catchall shard migration
Production Change
Change Summary
This change issue continues from #17841 (closed). It updates the catchall
k8s deployment to poll from the new redis-sidekiq-catchall-a Redis instance
and drops the deployment that was used for the migration.
Change Details
- Services Impacted - ~Service::Sidekiq
- Change Technician - @schin1
- Change Reviewer - @fshabir
- Time tracking - 45 minutes
- Downtime Component - NA
Set Maintenance Mode in GitLab
If your change involves scheduled maintenance, add a step to set and unset maintenance mode per our runbooks. This will make sure SLA calculations adjust for the maintenance period.
Detailed steps for the change
Change Steps - steps to take to execute the change
Estimated Time to Complete (40 mins)
- Set label ~change::in-progress (/label ~change::in-progress)
- Merge gitlab-com/gl-infra/k8s-workloads/gitlab-com!3537 (merged)
- Monitor the job completion metric to confirm jobs are being completed on shard: catchall (see the query sketch after this list and the Monitoring section below)
- Wait 5-10 minutes to make sure Sidekiq apdex is healthy
- Merge gitlab-com/gl-infra/k8s-workloads/gitlab-com!3538 (merged)
- Wait 5-10 minutes to make sure Sidekiq apdex is healthy
- Set label ~change::complete (/label ~change::complete)
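
A possible Thanos query for the job-completion check above. This is a hedged sketch: the metric name (sidekiq_jobs_completion_seconds_count) and labels (env, shard) are assumptions based on common GitLab Sidekiq metric conventions and are not confirmed by this issue; verify against the dashboards linked in Monitoring.

```promql
# Hedged sketch: job completion rate on the catchall shard.
# Metric and label names are assumptions; adjust to match the
# dashboards linked in the Monitoring section.
sum by (shard) (
  rate(sidekiq_jobs_completion_seconds_count{env="gstg", shard="catchall"}[5m])
)
```

A rising, non-zero rate after merging !3537 would indicate the reconfigured catchall deployment is picking up jobs.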
Rollback
Rollback steps - steps to be taken in the event of a need to roll back this change
Estimated Time to Complete (30 mins)
- Roll back gitlab-com/gl-infra/k8s-workloads/gitlab-com!3538 (merged) to restore the migrator shard
- Roll back gitlab-com/gl-infra/k8s-workloads/gitlab-com!3537 (merged) to reconfigure the catchall shard
- Set label ~change::aborted (/label ~change::aborted)
Monitoring
Key metrics to observe (hedged query sketches follow this list):
- Metric: queue length for default and mailers
  - Location: Thanos
  - What changes to this metric should prompt a rollback: if the queues on shard=default increase, investigate the cause and possibly roll back, since that indicates a routing issue in the application.
- Metric: Sidekiq apdex
  - Location: https://dashboards.gitlab.net/d/sidekiq-main/sidekiq3a-overview?orgId=1&from=now-1h&to=now&var-PROMETHEUS_DS=PA258B30F88C30650&var-environment=gstg&var-stage=main&var-shard=catchall&var-shard=urgent-other
  - What changes to this metric should prompt a rollback: if apdex dips below the outage threshold or stays below the degradation threshold for a sustained period.
- Metric: Redis activity
  - Location: Thanos
  - What changes to this metric should prompt a rollback: there should be no change in routing patterns, so the load on each Redis should remain the same; a significant shift warrants investigation and possibly a rollback.
- Metric: job execution
  - Location: Thanos
  - What changes to this metric should prompt a rollback: after the first MR merges, we should see catchall starting to complete jobs. If the rate remains flat, investigate and possibly roll back the MR.
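
Hedged Thanos query sketches for the metrics above. All metric and recording-rule names (sidekiq_queue_size, gitlab_component_apdex:ratio_5m, redis_commands_processed_total, sidekiq_jobs_completion_seconds_count) and label sets are assumptions based on common GitLab and redis_exporter conventions, not confirmed by this issue; verify against the linked dashboards before relying on them.

```promql
# Queue length for default and mailers (metric/label names assumed).
sum by (name) (sidekiq_queue_size{env="gstg", name=~"default|mailers"})

# Sidekiq apdex on the catchall shard (recording-rule name assumed).
gitlab_component_apdex:ratio_5m{env="gstg", type="sidekiq", shard="catchall"}

# Redis activity: command throughput per instance (redis_exporter metric).
sum by (instance) (rate(redis_commands_processed_total{env="gstg"}[5m]))

# Job execution: completion rate per shard (metric name assumed).
sum by (shard) (rate(sidekiq_jobs_completion_seconds_count{env="gstg"}[5m]))
```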
Change Reviewer checklist
- Check if the following applies:
  - The scheduled day and time of execution of the change is appropriate.
  - The change plan is technically accurate.
  - The change plan includes estimated timing values based on previous testing.
  - The change plan includes a viable rollback plan.
  - The specified metrics/monitoring dashboards provide sufficient visibility for the change.
- Check if the following applies:
  - The complexity of the plan is appropriate for the corresponding risk of the change (i.e. the plan contains clear details).
  - The change plan includes success measures for all steps/milestones during the execution.
  - The change adequately minimizes risk within the environment/service.
  - The performance implications of executing the change are well-understood and documented.
  - The specified metrics/monitoring dashboards provide sufficient visibility for the change.
    - If not, is it possible (or necessary) to make changes to observability platforms for added visibility?
  - The change has a primary and secondary SRE with knowledge of the details available during the change window.
  - The change window has been agreed with Release Managers in advance of the change. If the change is planned for APAC hours, this issue has an agreed pre-change approval.
  - The labels ~"blocks deployments" and/or ~"blocks feature-flags" are applied as necessary.
Change Technician checklist
- Check if all items below are complete:
  - The change plan is technically accurate.
  - This Change Issue is linked to the appropriate Issue and/or Epic.
  - Change has been tested in staging and results noted in a comment on this issue.
  - A dry-run has been conducted and results noted in a comment on this issue.
  - The change execution window respects the Production Change Lock periods.
  - For C1 and C2 change issues, the change event is added to the GitLab Production calendar.
  - For C1 and C2 change issues, the SRE on-call has been informed prior to the change being rolled out. (In the #production channel, mention @sre-oncall and this issue and await their acknowledgement.)
  - For C1 and C2 change issues, the SRE on-call provided approval with the ~eoc_approved label on the issue.
  - For C1 and C2 change issues, the Infrastructure Manager provided approval with the ~manager_approved label on the issue.
  - Release managers have been informed prior to any C1, C2, or ~"blocks deployments" change being rolled out. (In the #production channel, mention @release-managers and this issue and await their acknowledgment.)
  - There are currently no active incidents that are ~severity::1 or ~severity::2.
  - If the change involves doing maintenance on a database host, an appropriate silence targeting the host(s) should be added for the duration of the change.