[GSTG] Shared state workload migration from redis to redis-cluster-shared-state

Production Change

Change Summary

This change issue migrates the shared-state workload in GitLab Rails out of ServiceRedis and into the new ServiceRedisClusterSharedState. This is similar to our previous iteration for ServiceRedisClusterCache.

Scheduled date: 8 Nov 0000h UTC (8am SG, 4pm Seattle). Pending gitlab-org/gitlab!134483 (merged) and gitlab-org/gitlab!133790 (merged).

Change Details

  1. Services Impacted - ServiceRedis, ServiceRedisClusterSharedState
  2. Change Technician - @schin1 @marcogreg
  3. Change Reviewer - @stejacks-gitlab
  4. Time tracking - 1.5h
  5. Downtime Component - N.A.

Set Maintenance Mode in GitLab

If your change involves scheduled maintenance, add a step to set and unset maintenance mode per our runbooks. This will make sure SLA calculations adjust for the maintenance period.

Detailed steps for the change

Change Steps - steps to take to execute the change

Estimated Time to Complete (120 mins)

  • Set label change::in-progress using /label ~change::in-progress
  • Run /chatops run feature set use_primary_and_secondary_stores_for_shared_state true --staging in chatops to start dual write (a conceptual sketch of the dual-write pattern follows this list)
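
For context, enabling use_primary_and_secondary_stores_for_shared_state makes the Rails application write to both stores while still reading from the default one. The toy Ruby sketch below only illustrates that dual-write pattern; it is not GitLab's actual Gitlab::Redis::MultiStore implementation, and the client objects are hypothetical.

# Toy illustration of the dual-write pattern; primary = new cluster store, secondary = existing store.
require 'redis'

class DualWriteStore
  def initialize(primary, secondary)
    @primary   = primary   # e.g. redis-cluster-shared-state
    @secondary = secondary # e.g. the existing redis shared-state instance
  end

  # Every write goes to both stores so they converge while dual write is enabled.
  def set(key, value)
    @secondary.set(key, value)
    @primary.set(key, value)
  end

  # Reads come from whichever store is currently the default;
  # use_primary_store_as_default_for_shared_state flips this to the primary.
  def get(key, default_is_primary: false)
    (default_is_primary ? @primary : @secondary).get(key)
  end
end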

External migration

# on local machine in the runbooks project

tar cvf migrate-script.tar renovate.json scripts/redis_diff.rb Gemfile scripts/redis_key_compare.rb
scp migrate-script.tar gstg-console:/home/<username>

# in console node
tar xvf migrate-script.tar
bundle install # or gem install the required gems if the node does not have bundler

Config file setup

redis.yml, which should be symlinked as source.yml. We can use replica nodes since we only write to the destination; the source is read-only:

url: redis://:$REDIS_REDACTED@redis-01-db-gstg.c.gitlab-staging-1.internal:6379

redis-cluster-shared-state.yml, which should be symlinked as destination.yml:

nodes:
  - host: redis-cluster-shared-state-shard-01-01-db-gstg.c.gitlab-staging-1.internal
    port: 6379
  - host: redis-cluster-shared-state-shard-01-02-db-gstg.c.gitlab-staging-1.internal
    port: 6379
  - host: redis-cluster-shared-state-shard-01-03-db-gstg.c.gitlab-staging-1.internal
    port: 6379
  - host: redis-cluster-shared-state-shard-02-01-db-gstg.c.gitlab-staging-1.internal
    port: 6379
  - host: redis-cluster-shared-state-shard-02-02-db-gstg.c.gitlab-staging-1.internal
    port: 6379
  - host: redis-cluster-shared-state-shard-02-03-db-gstg.c.gitlab-staging-1.internal
    port: 6379
  - host: redis-cluster-shared-state-shard-03-01-db-gstg.c.gitlab-staging-1.internal
    port: 6379
  - host: redis-cluster-shared-state-shard-03-02-db-gstg.c.gitlab-staging-1.internal
    port: 6379
  - host: redis-cluster-shared-state-shard-03-03-db-gstg.c.gitlab-staging-1.internal
    port: 6379
password: REDIS_REDACTED
username: rails

The passwords can be found in

Symlink the files as follows:

ln -s redis.yml source.yml 
ln -s redis-cluster-shared-state.yml destination.yml

Define these environment variables:

export REDIS_CLIENT_SLOW_COMMAND_TIMEOUT=10
export REDIS_CLIENT_MAX_STARTUP_SAMPLE=1
  • Use screen before running the migration and validation scripts so that other SREs can attach to the session
  • Run the external diff script inside a console node. This copies keys from ServiceRedis into ServiceRedisClusterSharedState (see the copy-loop sketch after this list)
    • Example: bundle exec ruby redis_diff.rb --migrate --rate=1000 --batch=300 --pool_size=30 | tee migrate-$(date +"%FT%T").out
    • There are ~3M keys on gstg, so a full migration should take ~3000s (roughly an hour) if the rate of 1000 key ops per second is maintained.
    • Run htop/top on the console node to monitor the pressure that the script is placing on it.
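
For reference, the copy performed by redis_diff.rb --migrate can be thought of as a SCAN over the source with a DUMP/RESTORE of each key into the destination; the sketch below illustrates that idea only, and the real script may differ. The SOURCE_REDIS_URL/DESTINATION_REDIS_URL variables are hypothetical stand-ins for the source.yml/destination.yml configs, and the destination would use a cluster-aware client in practice.

# Hypothetical sketch of a per-key copy loop; not the actual redis_diff.rb implementation.
require 'redis'

source      = Redis.new(url: ENV.fetch('SOURCE_REDIS_URL'))       # stand-in for source.yml
destination = Redis.new(url: ENV.fetch('DESTINATION_REDIS_URL'))  # stand-in for destination.yml

copied = 0
source.scan_each(count: 300) do |key|     # --batch roughly corresponds to the SCAN count
  payload = source.dump(key)
  next if payload.nil?                    # key expired between SCAN and DUMP

  ttl_ms = source.pttl(key)
  ttl_ms = 0 if ttl_ms.negative?          # 0 means "no expiry" for RESTORE
  destination.restore(key, ttl_ms, payload, replace: true)

  copied += 1
  sleep(1.0 / 1000)                       # crude approximation of --rate=1000
end
puts "copied #{copied} keys"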

External validation

  • After the migration is completed and before switching reads over, run the external diff script without the --migrate flag inside a console node to compare the two stores (a sketch of the comparison pass follows this list)
    • Example: bundle exec ruby redis_diff.rb --rate=1000 --batch=300 --pool_size=30 | tee validate-$(date +"%FT%T").out
  • Run the migration again if required, since the migration process is eventually consistent (bundle exec ruby redis_diff.rb --migrate --rate=1000 --batch=300 --pool_size=30). Discrepancies can happen due to unfortunate races. In this CR for gstg, we aim to detect such edge-case behaviour before attempting gprd.
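
A read-only comparison pass can be sketched along the same lines. Again, this is a simplified illustration with hypothetical client setup, not the actual redis_diff.rb logic.

# Simplified comparison pass: flags keys whose values differ between the two stores.
require 'redis'

source      = Redis.new(url: ENV.fetch('SOURCE_REDIS_URL'))
destination = Redis.new(url: ENV.fetch('DESTINATION_REDIS_URL'))

mismatches = 0
source.scan_each(count: 300) do |key|
  case source.type(key)
  when 'none'
    next # key expired after SCAN returned it
  when 'string'
    mismatches += 1 if source.get(key) != destination.get(key)
  else
    # Non-string types need a type-aware comparison (HGETALL, SMEMBERS, LRANGE, ...);
    # here we only check that the key exists on the destination.
    mismatches += 1 unless destination.exists?(key)
  end
end
puts "#{mismatches} mismatching keys"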

Read cutover and conclusion

  • Once the diff between the two stores is acceptable, switch the default store using the chatops command (an optional smoke check is sketched after this list):
    • /chatops run feature set use_primary_store_as_default_for_shared_state true --staging
  • Set label change::complete using /label ~change::complete
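
Optionally, a quick smoke check can be run from a Rails console after the cutover. This assumes Gitlab::Redis::SharedState exposes the usual .with block and that reads now hit the cluster, so treat it as a sketch rather than a required step.

# Optional Rails console smoke check after flipping the default store (assumption: the
# Gitlab::Redis::SharedState wrapper now reads from redis-cluster-shared-state).
Gitlab::Redis::SharedState.with do |redis|
  redis.set('cr:shared-state-cutover-check', Time.now.to_i, ex: 300) # expires after 5 minutes
  puts redis.get('cr:shared-state-cutover-check')
end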

Rollback

Rollback steps - steps to be taken in the event of a need to rollback this change

Estimated Time to Complete (60 mins)

Note that if use_primary_store_as_default_for_shared_state was not set to true, only step 3 is needed for the rollback.

  • 1. Switch the default store to ServiceRedis using /chatops run feature set use_primary_store_as_default_for_shared_state false --staging
  • 2. Run the migration to copy keys from ServiceRedisClusterSharedState back into ServiceRedis. The source/destination ordering must be switched first:
    • Switch the symlinks so that destination.yml -> redis.yml and source.yml -> redis-cluster-shared-state.yml
rm source.yml destination.yml
ln -s redis.yml destination.yml 
ln -s redis-cluster-shared-state.yml source.yml
    • bundle exec ruby redis_diff.rb --migrate --rate=1000 --batch=300 --pool_size=30 | tee $(date +"%FT%T").out
  • 3. Stop dual-write using the command: /chatops run feature set use_primary_and_secondary_stores_for_shared_state false --staging
  • Set label change::aborted using /label ~change::aborted

Monitoring

Key metrics to observe

Other non-dashboard metrics

Change Reviewer checklist

C4 C3 C2 C1:

  • Check if the following applies:
    • The scheduled day and time of execution of the change is appropriate.
    • The change plan is technically accurate.
    • The change plan includes estimated timing values based on previous testing.
    • The change plan includes a viable rollback plan.
    • The specified metrics/monitoring dashboards provide sufficient visibility for the change.

C2 C1:

  • Check if the following applies:
    • The complexity of the plan is appropriate for the corresponding risk of the change. (i.e. the plan contains clear details).
    • The change plan includes success measures for all steps/milestones during the execution.
    • The change adequately minimizes risk within the environment/service.
    • The performance implications of executing the change are well-understood and documented.
    • The specified metrics/monitoring dashboards provide sufficient visibility for the change.
      • If not, is it possible (or necessary) to make changes to observability platforms for added visibility?
    • The change has a primary and secondary SRE with knowledge of the details available during the change window.
    • The change window has been agreed with Release Managers in advance of the change. If the change is planned for APAC hours, this issue has an agreed pre-change approval.
    • The labels blocks deployments and/or blocks feature-flags are applied as necessary.

Change Technician checklist

  • Check if all items below are complete:
    • The change plan is technically accurate.
    • This Change Issue is linked to the appropriate Issue and/or Epic
    • Change has been tested in staging and results noted in a comment on this issue.
    • A dry-run has been conducted and results noted in a comment on this issue.
    • The change execution window respects the Production Change Lock periods.
    • For C1 and C2 change issues, the change event is added to the GitLab Production calendar.
    • For C1 and C2 change issues, the SRE on-call has been informed prior to change being rolled out. (In #production channel, mention @sre-oncall and this issue and await their acknowledgement.)
    • For C1 and C2 change issues, the SRE on-call provided approval with the eoc_approved label on the issue.
    • For C1 and C2 change issues, the Infrastructure Manager provided approval with the manager_approved label on the issue.
    • Release managers have been informed prior to any C1, C2, or blocks deployments change being rolled out. (In #production channel, mention @release-managers and this issue and await their acknowledgment.)
    • There are currently no active incidents that are severity::1 or severity::2
    • If the change involves doing maintenance on a database host, an appropriate silence targeting the host(s) should be added for the duration of the change.