[GPRD][2nd Attempt] Shared state workload migration from redis to redis-cluster-shared-state
Production Change
Change Summary
This change issue migrates the shared-state workload in GitLab Rails out of ServiceRedis into the new ServiceRedisClusterSharedState. This is similar to our previous iteration for ServiceRedisClusterCache.
This is the 2nd attempt to migrate the shared-state workload. We are blocked on 2 changes before this CR can proceed:
- gitlab-org/gitlab!137952 (merged): this MR will reduce the client connections on ServiceRedisClusterSharedState
- gitlab-org/gitlab!137734 (merged): this MR fixes a bug in CI and other potential areas with non-idempotent blocks in Redis transactions and pipelines.
Tentative date scheduled: 6 Dec 2023 0100h UTC
Change Details
- Services Impacted - ServiceRedis, ServiceRedisClusterSharedState
- Change Technician - @schin1
- Change Reviewer - @fshabir
- Time tracking - 2 days
- Downtime Component - NA
Set Maintenance Mode in GitLab
If your change involves scheduled maintenance, add a step to set and unset maintenance mode per our runbooks. This will make sure SLA calculations adjust for the maintenance period.
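If maintenance mode is needed, a minimal sketch of toggling it via the application settings API is below; the admin token variable is a placeholder and the runbook's own procedure takes precedence:
```shell
# Hypothetical admin token; maintenance mode can also be toggled from the Admin Area.
curl --request PUT --header "PRIVATE-TOKEN: $GITLAB_ADMIN_TOKEN" \
  "https://gitlab.com/api/v4/application/settings?maintenance_mode=true&maintenance_mode_message=Scheduled%20maintenance"

# Unset it once the maintenance window closes.
curl --request PUT --header "PRIVATE-TOKEN: $GITLAB_ADMIN_TOKEN" \
  "https://gitlab.com/api/v4/application/settings?maintenance_mode=false"
```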
Detailed steps for the change
Change Steps - steps to take to execute the change
Estimated Time to Complete (2 days)
- Set label change::in-progress: `/label ~change::in-progress`
- Run `/chatops run feature set use_primary_and_secondary_stores_for_shared_state true` in chatops to start the dual write.
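As a hedged sanity check that dual writes are landing on the new cluster, we can watch the key count grow on one of the destination nodes (the hostname and the `rails` user come from the config setup below; the password variable is a placeholder for the Vault secret, and this assumes `redis-cli` is available on the console node):
```shell
# DBSIZE on a single cluster node only counts that node's slots, but growth here
# is enough to confirm dual-write traffic is arriving.
redis-cli -h redis-cluster-shared-state-shard-01-01-db-gprd.c.gitlab-production.internal -p 6379 \
  --user rails -a "$REDIS_CLUSTER_SHARED_STATE_PASSWORD" --no-auth-warning dbsize
```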
External migration
We can start the external script-based migration immediately while the dual-write is ongoing.
- Set up the environment in a console node using #15875 (comment 1440303927) as a reference.
  - To set up the folders:
```shell
# on local machine in the runbooks project
tar cvf migrate-script.tar renovate.json scripts/redis_diff.rb Gemfile scripts/redis_key_compare.rb
scp migrate-script.tar console-01-sv-gprd.c.gitlab-production.internal:/home/<username>

# in console node
tar xvf migrate-script.tar
bundle install # gem install if the node does not have bundle
```
- To set up the go-based migration script (https://gitlab.com/gitlab-com/gl-infra/redis-migrator): the go-based migration script is responsible for migrating and validating the data, but we keep the ruby script handy for single-key migrations.
```shell
# on local machine, in a checkout of the redis-migrator project
go mod download
GOOS="linux" GOARCH="amd64" go build ./redis_migrator.go
scp redis_migrator console-01-sv-gprd.c.gitlab-production.internal:/home/<username>
```
Config file setup
Both `redis-go.yml` and `redis-cluster-shared-state-go.yml` should be at the same level as the `scripts` folder.
`redis-go.yml` should be symlinked as `source.yml`. We can use replica nodes since we are only writing to the destination; the source is read-only.
```yaml
# in redis-go.yml
url: redis-01-db-gprd.c.gitlab-production.internal:6379
password: $REDIS_REDACTED
```
`redis-cluster-shared-state-go.yml` should be symlinked as `destination.yml`:
```yaml
# in redis-cluster-shared-state-go.yml
nodes:
- redis-cluster-shared-state-shard-01-01-db-gprd.c.gitlab-production.internal:6379
- redis-cluster-shared-state-shard-01-02-db-gprd.c.gitlab-production.internal:6379
- redis-cluster-shared-state-shard-01-03-db-gprd.c.gitlab-production.internal:6379
- redis-cluster-shared-state-shard-02-01-db-gprd.c.gitlab-production.internal:6379
- redis-cluster-shared-state-shard-02-02-db-gprd.c.gitlab-production.internal:6379
- redis-cluster-shared-state-shard-02-03-db-gprd.c.gitlab-production.internal:6379
- redis-cluster-shared-state-shard-03-01-db-gprd.c.gitlab-production.internal:6379
- redis-cluster-shared-state-shard-03-02-db-gprd.c.gitlab-production.internal:6379
- redis-cluster-shared-state-shard-03-03-db-gprd.c.gitlab-production.internal:6379
password: REDIS_REDACTED
username: rails
```
The passwords can be found in:
- ServiceRedis: `gitlab_rails['redis_instance'] = "redis://:$REDIS_REDACTED@gprd-redis"`
- ServiceRedisClusterSharedState: created in Vault as part of https://gitlab.com/gitlab-com/runbooks/-/blob/master/docs/redis/provisioning-redis-cluster.md#2-configure-gitlab-rails. Check for `cluster_shared_state`.
Symlink the files as follows:
```shell
ln -s redis-go.yml source.yml
ln -s redis-cluster-shared-state-go.yml destination.yml
```
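Before kicking off the migration, a quick hedged connectivity check from the console node confirms the hosts and credentials in both YAML files are usable (the password variables are placeholders for the redacted secrets; this assumes `redis-cli` is installed on the console node):
```shell
# Source replica (read-only) -- password from gitlab_rails['redis_instance'].
redis-cli -h redis-01-db-gprd.c.gitlab-production.internal -p 6379 \
  -a "$REDIS_PASSWORD" --no-auth-warning ping

# One destination cluster node, authenticating as the rails ACL user from Vault.
redis-cli -h redis-cluster-shared-state-shard-01-01-db-gprd.c.gitlab-production.internal -p 6379 \
  --user rails -a "$REDIS_CLUSTER_SHARED_STATE_PASSWORD" --no-auth-warning ping
```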
- Use `screen` before running the validation script to allow other SREs access.
- Run the external diff script inside a console node. This syncs ServiceRedisClusterSharedState with ServiceRedis (keys are copied from ServiceRedis into ServiceRedisClusterSharedState). Example:
```shell
./redis_migrator -migrate -rate 1000 -batch 300 -pool_size 300 | tee migrate-$(date +"%FT%T").out
```
- There are ~43M keys on gprd, which should take ~43,000s (~12 hours) for a full migration if a rate of 1000 key ops per second is maintained.
- Run `htop`/`top` on the console node to monitor the pressure that the script is placing on it.
- We can use a `max_allowed_rate.txt` file to dynamically update the rate, increasing or decreasing it as needed. This lets us migrate at a faster rate initially and reduce it as EMEA working hours start; an example is sketched below.
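A minimal sketch of adjusting the rate mid-run, assuming the migrator periodically re-reads `max_allowed_rate.txt` from its working directory as the effective rate cap (this polling behaviour is an assumption about the script):
```shell
# Run faster overnight while traffic is low...
echo 2000 > max_allowed_rate.txt
# ...then lower the cap once EMEA working hours start.
echo 500 > max_allowed_rate.txt
```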
External validation
- After the migration is completed, and before switching reads over, run the external diff script without the migrate flag inside a console node to compare the two stores. Example:
```shell
./redis_migrator -rate 2000 -batch 300 -pool_size 30 -verbose | tee validate-$(date +"%FT%T").out
```
- Run the migration again if required, since the migration process is eventually consistent (`./redis_migrator -migrate -rate 1000 -batch 300 -pool_size 300`). Discrepancies can happen due to unfortunate races.
  Alternatively, we can run the migration script again in verbose mode: `./redis_migrator -rate 2000 -batch 300 -pool_size 30 -verbose -migrate | tee validate-$(date +"%FT%T").out`. This flags keys which differ and migrates them. The output file can be grepped for `differs` and the key types examined, as sketched below.
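A hedged sketch of pulling the differing keys out of the validation output for closer inspection; the exact format of the `differs` lines is an assumption about the migrator's verbose output, and `<key>` is a placeholder:
```shell
# Collect the lines flagged as differing from the latest validation run.
grep differs validate-*.out | tee differing-keys.txt

# Examine the type and TTL of a suspect key on the source replica before re-migrating it.
redis-cli -h redis-01-db-gprd.c.gitlab-production.internal -a "$REDIS_PASSWORD" --no-auth-warning type "<key>"
redis-cli -h redis-01-db-gprd.c.gitlab-production.internal -a "$REDIS_PASSWORD" --no-auth-warning ttl "<key>"
```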
Handling persistently differing keys
Plan A: run the migration sequentially. The migration script performs key reads and writes concurrently, which could lead to timeouts in the event of massive keys. Most such keys are experiment-related and have already been cleaned up.
By running them sequentially using the `redis_diff.rb` script, we reduce network bottlenecks and the chance of timeouts. Use the following command:
```shell
grep Error <filename> | awk -F': ' '{printf "%s ", $2}' | tr -d '\n' | xargs bundle exec ruby scripts/redis_diff.rb --migrate --source=args {} | tee sequential-migrate-$(date +"%FT%T").out
```
Read cutover and conclusion
- Prepare an alert silence for `type='redis'` and `alert_type=traffic_cessation`; a sketch of one way to create it is shown after this list.
- Once the diff between the 2 stores is acceptable, switch the default store using the chatops command: `/chatops run feature set use_primary_store_as_default_for_shared_state true`
- If the metrics are in order after ~30 minutes, we can stop the dual-write using the chatops command: `/chatops run feature set use_primary_and_secondary_stores_for_shared_state false`
- Set label change::complete: `/label ~change::complete`
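A minimal sketch of pre-creating the alert silence referenced in the first step above, assuming Alertmanager's `amtool` is used; the Alertmanager URL and duration are placeholders, and the silence can equally be created through the Alertmanager UI:
```shell
# Silence traffic-cessation alerts for the redis service during the cutover window.
amtool --alertmanager.url="$ALERTMANAGER_URL" silence add \
  type=redis alert_type=traffic_cessation \
  --duration=4h \
  --author="schin1" \
  --comment="Shared state migration to redis-cluster-shared-state (this CR)"
```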
Rollback
Rollback steps - steps to be taken in the event of a need to rollback this change
Estimated Time to Complete (mins) - Estimated Time to Complete in Minutes
Note that if `use_primary_store_as_default_for_shared_state` was not set to `true`, only step 3 is needed for the rollback.
- 1. Switch the default store to ServiceRedis using `/chatops run feature set use_primary_store_as_default_for_shared_state false`
- 2. Run the migration to sync ServiceRedis with ServiceRedisClusterSharedState (keys are copied from ServiceRedisClusterSharedState back into ServiceRedis). The migration direction must be switched by:
  - Switching the symlinks so that `destination.yml -> redis-go.yml` and `source.yml -> redis-cluster-shared-state-go.yml`:
```shell
rm source.yml destination.yml
ln -s redis-go.yml destination.yml
ln -s redis-cluster-shared-state-go.yml source.yml
```
  - Running `./redis_migrator --migrate --rate=1000 --batch=300 --pool_size=30 | tee $(date +"%FT%T").out`
- 3. Stop the dual-write using the command: `/chatops run feature set use_primary_and_secondary_stores_for_shared_state false`
- Set label change::aborted: `/label ~change::aborted`
Monitoring
Key metrics to observe
- Metric: ServiceRedis apdex
  - Location: https://dashboards.gitlab.net/d/redis-main/redis-overview?orgId=1&var-PROMETHEUS_DS=Global&var-environment=gprd
  - What changes to this metric should prompt a rollback: if the apdex falls below the 1h outage threshold or sustains below the 6h degradation threshold
- Metric: ServiceRedisClusterSharedState apdex
  - Location: https://dashboards.gitlab.net/d/redis-cluster-shared-state-main/redis-cluster-shared-state-overview?orgId=1&var-PROMETHEUS_DS=Global&var-environment=gprd&var-shard=All
  - What changes to this metric should prompt a rollback: if the apdex falls below the 1h outage threshold or sustains below the 6h degradation threshold
Other non-dashboard metrics
- key count in ServiceRedisClusterSharedState: thanos link
- Monitor for pipeline diffs (https://log.gprd.gitlab.net/app/r/s/pKszC). However, the pipeline diff is expected to be large initially since the migration target is empty/not-in-sync; it will reduce over time.
Change Reviewer checklist
- Check if the following applies:
  - The scheduled day and time of execution of the change is appropriate.
  - The change plan is technically accurate.
  - The change plan includes estimated timing values based on previous testing.
  - The change plan includes a viable rollback plan.
  - The specified metrics/monitoring dashboards provide sufficient visibility for the change.
- Check if the following applies:
  - The complexity of the plan is appropriate for the corresponding risk of the change (i.e. the plan contains clear details).
  - The change plan includes success measures for all steps/milestones during the execution.
  - The change adequately minimizes risk within the environment/service.
  - The performance implications of executing the change are well-understood and documented.
  - The specified metrics/monitoring dashboards provide sufficient visibility for the change.
    - If not, is it possible (or necessary) to make changes to observability platforms for added visibility?
  - The change has a primary and secondary SRE with knowledge of the details available during the change window.
  - The change window has been agreed with Release Managers in advance of the change. If the change is planned for APAC hours, this issue has an agreed pre-change approval.
  - The labels blocks deployments and/or blocks feature-flags are applied as necessary.
Change Technician checklist
- Check if all items below are complete:
  - The change plan is technically accurate.
  - This Change Issue is linked to the appropriate Issue and/or Epic.
  - Change has been tested in staging and results noted in a comment on this issue.
  - A dry-run has been conducted and results noted in a comment on this issue.
  - The change execution window respects the Production Change Lock periods.
  - For C1 and C2 change issues, the change event is added to the GitLab Production calendar.
  - For C1 and C2 change issues, the SRE on-call has been informed prior to change being rolled out. (In the #production channel, mention @sre-oncall and this issue and await their acknowledgement.)
  - For C1 and C2 change issues, the SRE on-call provided approval with the eoc_approved label on the issue.
  - For C1 and C2 change issues, the Infrastructure Manager provided approval with the manager_approved label on the issue.
  - Release managers have been informed prior to any C1, C2, or blocks deployments change being rolled out. (In the #production channel, mention @release-managers and this issue and await their acknowledgment.)
  - There are currently no active incidents that are severity1 or severity2.
  - If the change involves doing maintenance on a database host, an appropriate silence targeting the host(s) should be added for the duration of the change.