2025-02-12 [GPRD] Configure application for using redis-cluster-sessions in gprd

Production Change

Change Summary

Configure Vault and the Rails application in VMs and k8s to use the newly provisioned redis-cluster-sessions instance in gprd.

This is part of migrating ServiceRedisSessions to ServiceRedisClusterSessions (from Redis Sentinel to Redis Cluster).

gstg migration has been done in #19171 (closed)

Reference issue: gitlab-com/gl-infra/data-access/durability/team#35 (closed)

Change Details

  1. Services Impacted - ServiceRedisSessions / ServiceRedisClusterSessions
  2. Change Technician - @fshabir @marcogreg
  3. Change Reviewer - @fshabir
  4. Scheduled Date and Time (UTC in format YYYY-MM-DD HH:MM) - (Phase 1) 2025-02-10 03:30 - (Phase 2) 2025-02-19 06:00
  5. Time tracking - 120 minutes
  6. Downtime Component - none

Set Maintenance Mode in GitLab

If your change involves scheduled maintenance, add a step to set and unset maintenance mode per our runbooks. This will make sure SLA calculations adjust for the maintenance period.

Detailed steps for the change

Change Steps - steps to take to execute the change

Estimated Time to Complete (mins) - 120 minutes

Phase 1

  • Set label change::in-progress: /label ~change::in-progress
  • Ensure the prerequisite CR #19144 (closed) to use Redis Cache Store as Session Store has been completed.
  • Add the Redis password to HashiCorp Vault for Omnibus GitLab Chef-based instances deployed to VMs:
glsh vault proxy

export VAULT_PROXY_ADDR="socks5://localhost:18200"
glsh vault login
vault kv get -format=json chef/env/gprd/shared/gitlab-omnibus-secrets | jq '.data.data' > data.json
cat data.json | jq --arg PASSWORD '<RAILS_REDACTED>' '."omnibus-gitlab".gitlab_rb."gitlab-rails".redis_yml_override.cluster_sessions.password = $PASSWORD' > data.json.tmp
diff -u data.json data.json.tmp
mv data.json.tmp data.json
vault kv patch chef/env/gprd/shared/gitlab-omnibus-secrets @data.json
rm data.json
Verify connectivity from a production Rails console:
[ gprd ] production> Gitlab::Redis::ClusterSessions.with{|c| c.ping}
=> "PONG"
[ gprd ] production>
  • (Done in advance while setting up gstg, to avoid bumping the external secret version) Update HashiCorp Vault with the Redis Cluster password for rails, used by the k8s deployments:
vault kv put k8s/env/gprd/ns/gitlab/redis-cluster-sessions-rails password=<RAILS_REDACTED>
  • Test login on gitlab.com

Migrate keys with the redis_diff.rb script. Since there are ~15 million keys in gprd, we can run the migration script concurrently while letting the dual-write continue for 1 week (the session TTL).
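A migration pass of this kind is typically a SCAN over the source with DUMP/RESTORE into the destination, preserving each key's remaining TTL. The sketch below is an illustrative simplification under those assumptions, not the actual redis_diff.rb implementation (rate limiting, connection pooling, and error handling omitted):

```ruby
# Illustrative sketch only -- not the actual redis_diff.rb implementation.
# Copies keys from source to destination in batches, preserving each key's
# remaining TTL via DUMP/RESTORE. Clients are duck-typed; in practice they
# would be redis-rb clients built from source.yml and destination.yml.
def migrate_keys(source, destination, batch: 300)
  cursor = "0"
  loop do
    cursor, keys = source.scan(cursor, count: batch)
    keys.each do |key|
      ttl_ms = source.pttl(key)   # remaining TTL in milliseconds (-1 = no TTL)
      next if ttl_ms == -2        # key expired between SCAN and PTTL
      payload = source.dump(key)
      # RESTORE treats a TTL of 0 as "no expiry", so clamp -1 up to 0.
      # REPLACE is safe here: dual-write keeps both stores equally fresh.
      destination.restore(key, [ttl_ms, 0].max, payload, replace: true)
    end
    break if cursor == "0"      # SCAN returns "0" once the keyspace is exhausted
  end
end
```

Because the dual-write is already populating the destination, overwriting on RESTORE does not lose data; the 1-week wait then covers any keys written before the dual-write started.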

  • To set up the folders:
# on local machine in the runbooks project

tar cvf migrate-script.tar renovate.json scripts/redis_diff.rb Gemfile scripts/redis_key_compare.rb

scp migrate-script.tar console-01-sv-gprd.c.gitlab-production.internal:/home/<username>

# in console node
tar xvf migrate-script.tar
bundle install # gem install if the node does not have bundle

Both redis-sessions.yml and redis-cluster-sessions.yml should be at the same level as the scripts folder.

redis-sessions.yml, which should be symlinked as source.yml. We can use replica nodes because the script only writes to the destination; the source is read-only.

# in redis-sessions.yml
url: redis://:$REDIS_REDACTED@redis-sessions-01-db-gprd.c.gitlab-production.internal:26379

redis-cluster-sessions.yml, which should be symlinked as destination.yml:

# in redis-cluster-sessions.yml
nodes:
  - host: redis-cluster-sessions-shard-01-01-db-gprd.c.gitlab-production.internal
    port: 6379
  - host: redis-cluster-sessions-shard-01-02-db-gprd.c.gitlab-production.internal
    port: 6379
  - host: redis-cluster-sessions-shard-01-03-db-gprd.c.gitlab-production.internal
    port: 6379
  - host: redis-cluster-sessions-shard-02-01-db-gprd.c.gitlab-production.internal
    port: 6379
  - host: redis-cluster-sessions-shard-02-02-db-gprd.c.gitlab-production.internal
    port: 6379
  - host: redis-cluster-sessions-shard-02-03-db-gprd.c.gitlab-production.internal
    port: 6379
  - host: redis-cluster-sessions-shard-03-01-db-gprd.c.gitlab-production.internal
    port: 6379
  - host: redis-cluster-sessions-shard-03-02-db-gprd.c.gitlab-production.internal
    port: 6379
  - host: redis-cluster-sessions-shard-03-03-db-gprd.c.gitlab-production.internal
    port: 6379
password: REDIS_REDACTED
username: rails

The passwords can be found in Vault, under the secret paths used in the steps above.

Symlink the files as follows:

ln -s redis-sessions.yml source.yml 
ln -s redis-cluster-sessions.yml destination.yml
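Presumably the script builds its two clients from these files; with the redis-rb gem (4.x API), that could look like the following sketch. The helper name and option handling here are assumptions for illustration, not the actual redis_diff.rb code:

```ruby
require "yaml"

# Sketch (an assumption, not the actual redis_diff.rb code): turn either
# config file into redis-rb client options. A `url:` file maps to a single
# connection; a `nodes:` file maps to a cluster connection.
def client_options(path)
  cfg = YAML.safe_load(File.read(path))
  if cfg["url"]
    { url: cfg["url"] }
  else
    nodes = cfg["nodes"].map { |n| "redis://#{n['host']}:#{n['port']}" }
    { cluster: nodes, username: cfg["username"], password: cfg["password"] }
  end
end

# With the redis gem installed, the scripts could then do:
#   source      = Redis.new(client_options("source.yml"))
#   destination = Redis.new(client_options("destination.yml"))
```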
  • Run the script to start the migration:
bundle exec ruby scripts/redis_diff.rb --migrate --rate=1000 --batch=300 --pool_size=30 | tee migrate-$(date +"%FT%T").out
  • Wait for 1 week (the TTL of all sessions in Redis) to allow dual-writes to migrate all remaining data. Resume only after 2025-02-19 06:00.

Phase 2

  • After 1 week, rerun the redis_diff.rb script to check for any remaining key diffs.
  • Enable feature flag use_primary_store_as_default_for_sessions to switch reads to the Redis Cluster
    • /chatops run feature set use_primary_store_as_default_for_sessions true
  • Test login on gitlab.com
  • Wait at least 1 day and monitor for any exceptions or reported errors.
  • Disable feature flag use_primary_and_secondary_stores_for_sessions to stop the dual-write and end the migration
    • /chatops run feature set use_primary_and_secondary_stores_for_sessions false
  • Set label change::complete: /label ~change::complete
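For context, the two feature flags drive a dual-store pattern: one flag controls whether writes go to both stores, the other which store serves reads. A simplified, hypothetical sketch of that logic (not GitLab's actual Gitlab::Redis::MultiStore code):

```ruby
# Simplified, hypothetical sketch of the dual-store pattern behind the two
# feature flags -- illustrative only, not GitLab's actual MultiStore code.
class DualStore
  def initialize(primary, secondary, dual_write:, read_primary:)
    @primary = primary            # new store (Redis Cluster)
    @secondary = secondary        # old store (Redis Sentinel)
    @dual_write = dual_write      # ~ use_primary_and_secondary_stores_for_sessions
    @read_primary = read_primary  # ~ use_primary_store_as_default_for_sessions
  end

  # Writes go to both stores while dual-write is on, otherwise only to the
  # default store; reads always come from the default store.
  def set(key, value)
    write_stores.each { |store| store[key] = value }
  end

  def get(key)
    default_store[key]
  end

  private

  def default_store
    @read_primary ? @primary : @secondary
  end

  def write_stores
    @dual_write ? [@primary, @secondary] : [default_store]
  end
end
```

This is why the flags are flipped in this order: reads move to the cluster first (Phase 2, step 2) while dual-write still keeps the old store current, so a rollback is just flipping the read flag back; only once the cluster is proven is dual-write disabled.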

Rollback

Rollback steps - steps to be taken in the event of a need to rollback this change

Estimated Time to Complete (mins) - 30 mins

Monitoring

Key metrics to observe

Change Reviewer checklist

C4 C3 C2 C1:

  • Check if the following applies:
    • The scheduled day and time of execution of the change is appropriate.
    • The change plan is technically accurate.
    • The change plan includes estimated timing values based on previous testing.
    • The change plan includes a viable rollback plan.
    • The specified metrics/monitoring dashboards provide sufficient visibility for the change.

C2 C1:

  • Check if the following applies:
    • The complexity of the plan is appropriate for the corresponding risk of the change. (i.e. the plan contains clear details).
    • The change plan includes success measures for all steps/milestones during the execution.
    • The change adequately minimizes risk within the environment/service.
    • The performance implications of executing the change are well-understood and documented.
    • The specified metrics/monitoring dashboards provide sufficient visibility for the change.
      • If not, is it possible (or necessary) to make changes to observability platforms for added visibility?
    • The change has a primary and secondary SRE with knowledge of the details available during the change window.
    • The change window has been agreed with Release Managers in advance of the change. If the change is planned for APAC hours, this issue has an agreed pre-change approval.
    • The labels blocks deployments and/or blocks feature-flags are applied as necessary.

Change Technician checklist

  • Check if all items below are complete:
    • The change plan is technically accurate.
    • This Change Issue is linked to the appropriate Issue and/or Epic
    • Change has been tested in staging and results noted in a comment on this issue.
    • A dry-run has been conducted and results noted in a comment on this issue.
    • The change execution window respects the Production Change Lock periods.
    • For C1 and C2 change issues, the change event is added to the GitLab Production calendar.
    • For C1 and C2 change issues, the SRE on-call has been informed prior to change being rolled out. (In #production channel, mention @sre-oncall and this issue and await their acknowledgement.)
    • For C1 and C2 change issues, the SRE on-call provided approval with the eoc_approved label on the issue.
    • For C1 and C2 change issues, the Infrastructure Manager provided approval with the manager_approved label on the issue. Mention @gitlab-org/saas-platforms/inframanagers in this issue to request approval and provide visibility to all infrastructure managers.
    • Release managers have been informed prior to any C1, C2, or blocks deployments change being rolled out. (In #production channel, mention @release-managers and this issue and await their acknowledgment.)
    • There are currently no active incidents that are severity1 or severity2
    • If the change involves doing maintenance on a database host, an appropriate silence targeting the host(s) should be added for the duration of the change.
Edited by Furhan Shabir