[GPRD] Migrate exclusive lease from Redis persistent to redis-cluster-shared-state
Production Change
Change Summary
This change migrates the ExclusiveLease workload from ServiceRedis to ServiceRedisClusterSharedState in gprd. This is part of epic &1094 (closed). This CR is created because it requires SRE assistance in performing a rolling restart on certain GKE deployments.
The CR should be performed during morning APAC hours (least traffic) to minimize risks during rollout.
The feature-flag issue is in gitlab-org/gitlab#421156 (closed).
Previous attempt on gstg #16321 (closed)
Scheduled for Wednesday, 2023-10-04 01:00 UTC
Change Details
- Services Impacted - ServiceRedis, ServiceSidekiq, ServiceRedisClusterSharedState
- Change Technician - @marcogreg
- Change Reviewer - @igorwwwwwwwwwwwwwwwwwwww
- Time tracking - 1h
- Downtime Component - NA
Set Maintenance Mode in GitLab
If your change involves scheduled maintenance, add a step to set and unset maintenance mode per our runbooks. This will make sure SLA calculations adjust for the maintenance period.
Detailed steps for the change
Change Steps - steps to take to execute the change
Estimated Time to Complete (mins) - 60 minutes
Pre-checks
- Ensure that our monitoring stack (Grafana/Thanos) is in good condition, i.e. no missing data or slowness (a spot-check sketch follows).
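As a quick spot check, the Thanos HTTP API can be queried directly to confirm that recent samples exist. A minimal sketch, assuming a hypothetical `THANOS_URL` endpoint and an `env="gprd"` label on the `up` series (substitute the real query frontend and labels), with `jq` on the path:

```shell
# Hypothetical Thanos/Prometheus query endpoint -- adjust to the real one.
THANOS_URL="https://thanos.example.gitlab.net"

# Age, in seconds, of the freshest "up" sample for gprd; a large value
# suggests missing data in the monitoring stack.
curl -sG "${THANOS_URL}/api/v1/query" \
  --data-urlencode 'query=time() - max(timestamp(up{env="gprd"}))' \
  | jq -r '.data.result[0].value[1]'
```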
Enable dual lock/writes
- Set label ~change::in-progress: `/label ~change::in-progress`
- Enable the feature flag to start double locking (writes) via a percentage-of-actors rollout. We go from 10% directly to 100%; the initial 10% lets us safely check whether anything goes wrong. We use percentage of actors because percentage of time has been deprecated in ChatOps. The actor is the current request actor, which has a random ID for each request/Sidekiq execution; support for it was added in gitlab-org/gitlab!132843 (merged).
- `/chatops run feature set enable_exclusive_lease_double_lock_rw 10 --actors`
- Wait for a few minutes to monitor ServiceRedisClusterSharedState and ServiceSidekiq.
- `/chatops run feature set enable_exclusive_lease_double_lock_rw 100 --actors`
- Wait 15-30 minutes to ensure exclusive-lease is stable when operating with dual locks.
- [Needs SRE assistance] Trigger a rolling restart on the sidekiq deployments, delayed by a few minutes between deployments (commands below). Why only sidekiq deployments? This ensures that any long-running process that acquired the lease before the feature flag was enabled gets interrupted, so its lease on a single store is yielded and re-acquired (by any process) on both stores. See more details in gitlab-org/gitlab#421156 (comment 1549082351).
- Webservice deployments are not expected to hold locks for more than a few seconds, so the 15-30 minute wait plus the additional time for the sidekiq deployment restarts is enough to ensure that all processes using ExclusiveLease on webservice pods are acquiring locks on both stores.
```shell
# Ensure kubectl currently points at the gprd-gitlab-gke cluster.
# Delay the restarts by a few minutes each to prevent too many sidekiq pods
# being down at the same time (one way to script this is sketched below).
kubectl -n gitlab rollout restart deployments/gitlab-sidekiq-catchall-v2
kubectl -n gitlab rollout restart deployments/gitlab-sidekiq-database-throttled-v2
kubectl -n gitlab rollout restart deployments/gitlab-sidekiq-elasticsearch-v2
kubectl -n gitlab rollout restart deployments/gitlab-sidekiq-gitaly-throttled-v2
kubectl -n gitlab rollout restart deployments/gitlab-sidekiq-low-urgency-cpu-bound-v2
kubectl -n gitlab rollout restart deployments/gitlab-sidekiq-memory-bound-v2
kubectl -n gitlab rollout restart deployments/gitlab-sidekiq-quarantine-v2
kubectl -n gitlab rollout restart deployments/gitlab-sidekiq-urgent-authorized-projects-v2
kubectl -n gitlab rollout restart deployments/gitlab-sidekiq-urgent-cpu-bound-v2
kubectl -n gitlab rollout restart deployments/gitlab-sidekiq-urgent-other-v2
```
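The per-deployment delay and health check can also be scripted. A minimal sketch; the three-minute pause and ten-minute rollout timeout are assumptions, tune them as needed:

```shell
for deploy in catchall-v2 database-throttled-v2 elasticsearch-v2 \
              gitaly-throttled-v2 low-urgency-cpu-bound-v2 memory-bound-v2 \
              quarantine-v2 urgent-authorized-projects-v2 \
              urgent-cpu-bound-v2 urgent-other-v2; do
  kubectl -n gitlab rollout restart "deployments/gitlab-sidekiq-${deploy}"
  # Wait until the restarted deployment is healthy again before moving on.
  kubectl -n gitlab rollout status "deployments/gitlab-sidekiq-${deploy}" --timeout=10m
  sleep 180  # keep a few minutes between deployments
done
```

Before starting the restarts, `/chatops run feature get enable_exclusive_lease_double_lock_rw` (assuming the `feature get` subcommand is available) can confirm the flag is fully rolled out.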
Cutover reads
- Enable the feature flag to cut reads over to the new cluster in a percentage-of-actors rollout.
- `/chatops run feature set use_cluster_shared_state_for_exclusive_lease 10 --actors`
- Wait for a few minutes to monitor ServiceRedisClusterSharedState and ServiceSidekiq.
- `/chatops run feature set use_cluster_shared_state_for_exclusive_lease 100 --actors`
- Observe and monitor ServiceSidekiq execution SLIs, and confirm that ServiceRedis and ServiceRedisClusterSharedState apdex remain stable (a spot-check sketch follows this list).
- Set label ~change::complete: `/label ~change::complete`
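To spot-check that the new cluster is actually serving lease traffic after cutover, the per-command counters on a shared-state node can be sampled. A minimal sketch, assuming a hypothetical `REDIS_HOST` (substitute a real redis-cluster-shared-state node; `INFO commandstats` is per-node, so repeat across nodes as needed):

```shell
# Hypothetical shared-state node -- adjust to a real one.
REDIS_HOST="redis-cluster-shared-state-01.int.gprd.example"

# ExclusiveLease acquisition is based on SET with NX (plus GET/EXISTS for
# reads), so these counters should climb steadily once reads are cut over.
redis-cli -h "${REDIS_HOST}" INFO commandstats | grep -E 'cmdstat_(set|get|exists):'
sleep 60
redis-cli -h "${REDIS_HOST}" INFO commandstats | grep -E 'cmdstat_(set|get|exists):'
```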
Rollback
Rollback steps - steps to be taken in the event of a need to rollback this change
Estimated Time to Complete (mins) - 20 minutes
- Disable the feature flag: `/chatops run feature set use_cluster_shared_state_for_exclusive_lease false`
- Trigger a rolling restart on the sidekiq deployments. This ensures that all processes are once again obtaining dual locks, so that disabling enable_exclusive_lease_double_lock_rw is safe.
- Disable the feature flag: `/chatops run feature set enable_exclusive_lease_double_lock_rw false` (verification commands follow this list)
- Set label ~change::aborted: `/label ~change::aborted`
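To confirm the rollback took effect, both flags can be checked from ChatOps (run in Slack; assumes the `feature get` subcommand is available, and both should report the flag as disabled):

```
/chatops run feature get use_cluster_shared_state_for_exclusive_lease
/chatops run feature get enable_exclusive_lease_double_lock_rw
```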
Monitoring
Key metrics to observe
- Metric: ServiceRedis Apdex
  - Location: https://dashboards.gitlab.net/d/redis-main/redis-overview?orgId=1&var-PROMETHEUS_DS=Global
  - What changes to this metric should prompt a rollback: apdex dipping below the 1h outage threshold or sustaining below the 6h degradation threshold
- Metric: ServiceRedisClusterSharedState Apdex
  - Location: https://dashboards.gitlab.net/d/redis-cluster-shared-state-main/redis-cluster-shared-state-overview?orgId=1&refresh=5m&from=now-30m&to=now&var-PROMETHEUS_DS=Global&var-shard=All
  - What changes to this metric should prompt a rollback: apdex dipping below the 1h outage threshold or sustaining below the 6h degradation threshold
- Metric: ServiceSidekiq request rate and error rates
  - Location: https://dashboards.gitlab.net/d/sidekiq-main/sidekiq-overview?orgId=1&var-PROMETHEUS_DS=Global&var-stage=main&var-shard=All
  - What changes to this metric should prompt a rollback: error rates spiking, or apdex dipping below the 1h outage threshold or sustaining below the 6h degradation threshold
- Logs: Monitor the rate of lock acquisition failures (FailedToObtainLockError). We can roughly estimate this by checking Sidekiq logs for the `lock` keyword in error messages (a sketch follows this list).
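For a rough command-line estimate of the failure rate, a sketch assuming the pods carry an `app=sidekiq` label (adjust the selector to the real pod labels):

```shell
# Count recent FailedToObtainLockError occurrences across sidekiq pods.
kubectl -n gitlab logs -l app=sidekiq --since=10m --prefix=true \
  --max-log-requests=25 | grep -c 'FailedToObtainLockError'
```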
Change Reviewer checklist
- Check if the following applies:
  - The scheduled day and time of execution of the change is appropriate.
  - The change plan is technically accurate.
  - The change plan includes estimated timing values based on previous testing.
  - The change plan includes a viable rollback plan.
  - The specified metrics/monitoring dashboards provide sufficient visibility for the change.
- Check if the following applies:
  - The complexity of the plan is appropriate for the corresponding risk of the change (i.e. the plan contains clear details).
  - The change plan includes success measures for all steps/milestones during the execution.
  - The change adequately minimizes risk within the environment/service.
  - The performance implications of executing the change are well-understood and documented.
  - The specified metrics/monitoring dashboards provide sufficient visibility for the change.
    - If not, is it possible (or necessary) to make changes to observability platforms for added visibility?
  - The change has a primary and secondary SRE with knowledge of the details available during the change window.
  - The change window has been agreed with Release Managers in advance of the change. If the change is planned for APAC hours, this issue has an agreed pre-change approval.
  - The labels ~"blocks deployments" and/or ~"blocks feature-flags" are applied as necessary.
Change Technician checklist
- Check if all items below are complete:
  - The change plan is technically accurate.
  - This Change Issue is linked to the appropriate Issue and/or Epic.
  - Change has been tested in staging and results noted in a comment on this issue.
  - A dry-run has been conducted and results noted in a comment on this issue.
  - The change execution window respects the Production Change Lock periods.
  - For C1 and C2 change issues, the change event is added to the GitLab Production calendar.
  - For C1 and C2 change issues, the SRE on-call has been informed prior to change being rolled out. (In the #production channel, mention @sre-oncall and this issue and await their acknowledgement.)
  - For C1 and C2 change issues, the SRE on-call provided approval with the eoc_approved label on the issue.
  - For C1 and C2 change issues, the Infrastructure Manager provided approval with the manager_approved label on the issue.
  - Release managers have been informed prior to any C1, C2, or ~"blocks deployments" change being rolled out. (In the #production channel, mention @release-managers and this issue and await their acknowledgment.)
  - There are currently no active incidents that are severity::1 or severity::2.
  - If the change involves doing maintenance on a database host, an appropriate silence targeting the host(s) should be added for the duration of the change.