[gstg] Add new nodes to redis-cluster-ratelimiting
Staging Change
Change Summary
This change request adds a new shard (a master with its slaves) to an existing gstg cluster; in this case, we are targeting redis-cluster-ratelimiting. Migration of keyslots will happen through a follow-up CR once the node-addition steps are verified and documented in the runbook.
Change Details
- Services Impacted - Service::RedisClusterRateLimiting
- Change Technician - @fshabir
- Change Reviewer - @schin1
- Time tracking - 1 hour
- Downtime Component - No downtime expected
Detailed steps for the change
Change Steps - steps to take to execute the change
Estimated Time to Complete (mins) - 1 hour
- Set label change::in-progress using /label ~change::in-progress
- Merge the MR to create chef roles for the new shard: https://gitlab.com/gitlab-com/gl-infra/chef-repo/-/merge_requests/4639
- Merge the MR to provision servers for the new shard: https://ops.gitlab.net/gitlab-com/gl-infra/config-mgmt/-/merge_requests/8224
- Wait until the new servers are bootstrapped; look out for successful completion of startup-script.sh in the serial port logs of the new servers:
```shell
gcloud compute --project=gitlab-staging-1 instances tail-serial-port-output redis-cluster-ratelimiting-shard-04-01-db-gstg --zone us-east1-c | grep 'startup-script.sh'
gcloud compute --project=gitlab-staging-1 instances tail-serial-port-output redis-cluster-ratelimiting-shard-04-02-db-gstg --zone us-east1-d | grep 'startup-script.sh'
gcloud compute --project=gitlab-staging-1 instances tail-serial-port-output redis-cluster-ratelimiting-shard-04-03-db-gstg --zone us-east1-b | grep 'startup-script.sh'
```
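Optionally, before adding the new nodes to the cluster, a quick sanity check that Redis is up and reachable on each new host can help (a minimal sketch, assuming SSH access to the new hosts and that the gitlab-redis-cli wrapper is installed on them as it is on the existing nodes):

```shell
# Sketch: confirm Redis answers PING on each newly provisioned shard-04 node.
# Assumes SSH access and that gitlab-redis-cli is present on the new hosts.
for i in 01 02 03; do
  host="redis-cluster-ratelimiting-shard-04-${i}-db-gstg.c.gitlab-staging-1.internal"
  echo "--- ${host}"
  ssh "${host}" 'sudo gitlab-redis-cli ping'   # expected output: PONG
done
```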
- Add the new master (redis-cluster-ratelimiting-shard-04-01-db-gstg.c.gitlab-staging-1.internal) and its replicas to the Redis cluster by executing the following via gitlab-redis-cli on the first node in the cluster:
```shell
ssh redis-cluster-ratelimiting-shard-01-01-db-gstg.c.gitlab-staging-1.internal

export ENV=gstg
export PROJECT=gitlab-staging-1
export DEPLOYMENT=redis-cluster-ratelimiting

# This adds the master node
sudo gitlab-redis-cli --cluster add-node \
  $DEPLOYMENT-shard-04-01-db-$ENV.c.$PROJECT.internal:6379 \
  $DEPLOYMENT-shard-01-01-db-$ENV.c.$PROJECT.internal:6379

# This adds the slave nodes, attaching them to the new master's node ID
for j in {02,03}; do
  node_id="$(sudo gitlab-redis-cli cluster nodes | grep $DEPLOYMENT-shard-04-01-db-$ENV.c.$PROJECT.internal | awk '{ print $1 }')";
  sudo gitlab-redis-cli --cluster add-node \
    $DEPLOYMENT-shard-04-$j-db-$ENV.c.$PROJECT.internal:6379 \
    $DEPLOYMENT-shard-01-01-db-$ENV.c.$PROJECT.internal:6379 \
    --cluster-slave --cluster-master-id $node_id
done
```
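Before marking the change complete, the new shard's membership and overall cluster health can be verified from the same node (a minimal sketch using standard Redis CLUSTER commands; the expected values assume the addition succeeded and no slots have been moved yet):

```shell
# Sketch: verify the new shard joined and the cluster is still healthy.
# Run on the same node used for the add-node commands above.
sudo gitlab-redis-cli cluster nodes | grep shard-04   # expect one master and two slaves listed
sudo gitlab-redis-cli cluster info | grep -E 'cluster_state|cluster_known_nodes|cluster_slots_fail'
# expect cluster_state:ok and cluster_slots_fail:0; the new master should hold no slots yet
```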
- Set label change::complete using /label ~change::complete
Rollback
Rollback steps - steps to be taken in the event of a need to roll back this change
Estimated Time to Complete (mins) - None
- At this stage, the newly added nodes will have no keyslots assigned to them, so no rollback will be required (the sketch after this list shows how the empty nodes could be detached if that ever became necessary).
- Set label change::aborted using /label ~change::aborted
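Should the new (still empty) nodes nevertheless need to be detached from the cluster, redis-cli's del-node subcommand can remove them. This is only an illustrative sketch, not part of the planned steps; it assumes the ENV/PROJECT/DEPLOYMENT exports from the change steps above and that the shard-04 nodes hold no slots:

```shell
# Sketch: detach the empty shard-04 nodes, replicas first and the master last.
# Assumes the ENV/PROJECT/DEPLOYMENT exports from the change steps above.
for i in 03 02 01; do
  node_id="$(sudo gitlab-redis-cli cluster nodes | grep $DEPLOYMENT-shard-04-$i-db-$ENV.c.$PROJECT.internal | awk '{ print $1 }')"
  sudo gitlab-redis-cli --cluster del-node \
    $DEPLOYMENT-shard-01-01-db-$ENV.c.$PROJECT.internal:6379 "$node_id"
done
```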
Monitoring
Key metrics to observe
- Metric: redis-cluster-ratelimiting Apdex and RPS
- Location: https://dashboards.gitlab.net/d/redis-cluster-ratelimiting-main/redis-cluster-ratelimiting3a-overview?orgId=1&var-PROMETHEUS_DS=PA258B30F88C30650&var-environment=gstg&var-shard=All
- What changes to this metric should prompt a rollback: Consistent degradation in Apdex
- Metric: Cluster Data graphs showing cluster state summary
- Location: https://dashboards.gitlab.net/d/redis-cluster-ratelimiting-main/redis-cluster-ratelimiting3a-overview?orgId=1&var-PROMETHEUS_DS=PA258B30F88C30650&var-environment=gstg&var-shard=All
- What changes to this metric should prompt a rollback: a non-zero Redis Cluster Slots Failed count
Change Reviewer checklist
- Check if the following applies:
  - The scheduled day and time of execution of the change is appropriate.
  - The change plan is technically accurate.
  - The change plan includes estimated timing values based on previous testing.
  - The change plan includes a viable rollback plan.
  - The specified metrics/monitoring dashboards provide sufficient visibility for the change.
- Check if the following applies:
  - The complexity of the plan is appropriate for the corresponding risk of the change (i.e. the plan contains clear details).
  - The change plan includes success measures for all steps/milestones during the execution.
  - The change adequately minimizes risk within the environment/service.
  - The performance implications of executing the change are well-understood and documented.
  - The specified metrics/monitoring dashboards provide sufficient visibility for the change.
    - If not, is it possible (or necessary) to make changes to observability platforms for added visibility?
  - The change has a primary and secondary SRE with knowledge of the details available during the change window.
  - The change window has been agreed with Release Managers in advance of the change. If the change is planned for APAC hours, this issue has an agreed pre-change approval.
  - The labels blocks deployments and/or blocks feature-flags are applied as necessary.
Change Technician checklist
- Check if all items below are complete:
  - The change plan is technically accurate.
  - This Change Issue is linked to the appropriate Issue and/or Epic.
  - Change has been tested in staging and results noted in a comment on this issue.
  - A dry-run has been conducted and results noted in a comment on this issue.
  - The change execution window respects the Production Change Lock periods.
  - For C1 and C2 change issues, the change event is added to the GitLab Production calendar.
  - For C1 and C2 change issues, the SRE on-call has been informed prior to the change being rolled out. (In the #production channel, mention @sre-oncall and this issue and await their acknowledgement.)
  - For C1 and C2 change issues, the SRE on-call provided approval with the eoc_approved label on the issue.
  - For C1 and C2 change issues, the Infrastructure Manager provided approval with the manager_approved label on the issue.
  - Release managers have been informed prior to any C1, C2, or blocks deployments change being rolled out. (In the #production channel, mention @release-managers and this issue and await their acknowledgment.)
  - There are currently no active incidents that are severity::1 or severity::2.
  - If the change involves doing maintenance on a database host, an appropriate silence targeting the host(s) should be added for the duration of the change.