2023-03-21: Divert redis-ratelimiting traffic to redis-cluster-ratelimiting
Production Change
Change Summary
Using feature flags, we will cut over Redis traffic for rate-limiting related commands from the redis-ratelimiting service to the redis-cluster-ratelimiting service. The cutover takes approximately 1 minute to take effect, since that is the TTL of the Rails app's in-memory cache. This change should be done during a quieter period (the 2300-0500 UTC window) to minimise any possible user impact.
The redis-cluster-ratelimiting gprd setup is summarised in scalability#2256.
Related scalability issue: scalability#2072
Feature-flag issue: gitlab-org/gitlab#385681 (closed)
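Optionally, confirm the flag's current state before starting. A minimal pre-check sketch, assuming the chatops `feature` command supports a `get` subcommand alongside `set`:

```shell
# Sketch only: confirm the flag is not already enabled before the cutover starts.
/chatops run feature get use_primary_store_as_default_for_rate_limiting
```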
Change Details
- Services Impacted - ServiceRedisClusterRateLimiting, ServiceRedisRateLimiting
- Change Technician - @msmiley
- Change Reviewer - @igorwwwwwwwwwwwwwwwwwwww
- Time tracking - 30m
- Downtime Component - NA
Detailed steps for the change
Change Steps - steps to take to execute the change
Estimated Time to Complete (mins) - 30m
- Set label change::in-progress: `/label ~change::in-progress`
- Create a silence in AlertManager for the cause-based alert for the redis-ratelimiting db getting no client traffic: `alert_class=traffic_cessation alert_type=cause type=redis-ratelimiting alertname=~RedisRatelimitingServiceRailsRedisClientTraffic(Cessation|Absent)` (an example amtool invocation is sketched after these steps).
- Run the cutover chatops command for just 1% of traffic: `/chatops run feature set use_primary_store_as_default_for_rate_limiting 1 --random`
- Verify the plumbing works: confirm the redis-cluster-ratelimiting dashboard starts showing non-zero rates for the normal workload's expected Redis command types (`DEL`, `GET`, `INCR`, etc.).
- Fully enable using Redis Cluster for all rate-limiting traffic: `/chatops run feature set use_primary_store_as_default_for_rate_limiting true`
- Set label change::complete: `/label ~change::complete`
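The AlertManager silence in the steps above could also be created from the command line with amtool rather than the UI. A sketch only: the matchers come from the silence step, while the Alertmanager URL, author, comment, and duration are placeholder assumptions.

```shell
# Sketch: silence the redis-ratelimiting traffic-cessation alerts while traffic
# is diverted. Replace --alertmanager.url with the real production endpoint and
# pick a duration that covers the change window plus some buffer.
amtool silence add \
  --alertmanager.url="https://alertmanager.example.gitlab.net" \
  --author="msmiley" \
  --comment="Expected traffic cessation: rate-limiting cutover to redis-cluster-ratelimiting" \
  --duration="2h" \
  alert_class=traffic_cessation \
  alert_type=cause \
  type=redis-ratelimiting \
  'alertname=~"RedisRatelimitingServiceRailsRedisClientTraffic(Cessation|Absent)"'
```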
Rollback
Rollback steps - steps to be taken in the event of a need to roll back this change
Estimated Time to Complete (mins) - 30m
- Run the chatops command to disable the feature flag: `/chatops run feature set use_primary_store_as_default_for_rate_limiting false` (a verification sketch follows the rollback steps).
- Set label change::aborted: `/label ~change::aborted`
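After the flag is set back to false, the rollback should propagate within roughly the 1-minute feature-flag cache TTL. A verification sketch, assuming the chatops `feature get` subcommand and amtool access as above; the authoritative signal is traffic returning on the redis-ratelimiting overview dashboard.

```shell
# Sketch only: confirm the flag now reads false.
/chatops run feature get use_primary_store_as_default_for_rate_limiting

# If we roll back, the silence created during the change steps can be expired
# so the redis-ratelimiting traffic-cessation alerts are armed again.
amtool silence query --alertmanager.url="https://alertmanager.example.gitlab.net" type=redis-ratelimiting
amtool silence expire --alertmanager.url="https://alertmanager.example.gitlab.net" <silence-id>
```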
Monitoring
Key metrics to observe
- Metric: All Redis metrics for redis-ratelimiting
  - Location: https://dashboards.gitlab.net/d/redis-ratelimiting-main/redis-ratelimiting-overview?orgId=1
  - What changes to this metric should prompt a rollback: N/A; traffic on this dashboard should fall as the cutover proceeds.
- Metric: All Redis metrics for redis-cluster-ratelimiting
  - Location: https://dashboards.gitlab.net/d/redis-cluster-ratelimiting-main/redis-cluster-ratelimiting-overview?orgId=1
  - What changes to this metric should prompt a rollback: a high "Service Error Ratio" (hitting the 1h SLO outage rate), either overall or on the `cluster_redirections` and `rails_redis_client` error SLIs. A CLI spot-check is sketched after this list.
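In addition to the dashboards above, the Redis client error SLI can be spot-checked from the CLI. A sketch only: the query endpoint is a placeholder for the production Thanos/Prometheus API, and the `gitlab_redis_client_*` metric and `storage` label names are assumptions based on GitLab's standard Rails Redis client instrumentation; the dashboards remain authoritative.

```shell
# Sketch: per-storage Redis client exception rate over the last 5 minutes.
# A sustained non-trivial rate for the rate-limiting storage after cutover
# would be a signal to consider rollback.
curl -sG "https://thanos.example.gitlab.net/api/v1/query" \
  --data-urlencode 'query=sum by (storage) (rate(gitlab_redis_client_exceptions_total[5m]))' \
  | jq '.data.result'
```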
Change Reviewer checklist
- Check if the following applies:
  - The scheduled day and time of execution of the change is appropriate.
  - The change plan is technically accurate.
  - The change plan includes estimated timing values based on previous testing.
  - The change plan includes a viable rollback plan.
  - The specified metrics/monitoring dashboards provide sufficient visibility for the change.
- Check if the following applies:
  - The complexity of the plan is appropriate for the corresponding risk of the change (i.e. the plan contains clear details).
  - The change plan includes success measures for all steps/milestones during the execution.
  - The change adequately minimizes risk within the environment/service.
  - The performance implications of executing the change are well-understood and documented.
  - The specified metrics/monitoring dashboards provide sufficient visibility for the change.
    - If not, is it possible (or necessary) to make changes to observability platforms for added visibility?
  - The change has a primary and secondary SRE with knowledge of the details available during the change window.
  - The labels blocks deployments and/or blocks feature-flags are applied as necessary.
Change Technician checklist
- Check if all items below are complete:
  - The change plan is technically accurate.
  - This Change Issue is linked to the appropriate Issue and/or Epic.
  - Change has been tested in staging and results noted in a comment on this issue.
  - A dry-run has been conducted and results noted in a comment on this issue.
  - The change execution window respects the Production Change Lock periods.
  - For C1 and C2 change issues, the change event is added to the GitLab Production calendar.
  - For C1 and C2 change issues, the SRE on-call has been informed prior to the change being rolled out. (In the #production channel, mention @sre-oncall and this issue and await their acknowledgement.)
  - For C1 and C2 change issues, the SRE on-call provided approval with the eoc_approved label on the issue.
  - For C1 and C2 change issues, the Infrastructure Manager provided approval with the manager_approved label on the issue.
  - Release managers have been informed (if needed; cases include DB changes) prior to the change being rolled out. (In the #production channel, mention @release-managers and this issue and await their acknowledgment.)
  - There are currently no active incidents that are severity::1 or severity::2.
  - If the change involves doing maintenance on a database host, an appropriate silence targeting the host(s) should be added for the duration of the change.