[GSTG] Configure application for using redis-cluster-sessions in gstg
Production Change
Change Summary
Configure vault secrets and the Rails application (on VMs and in k8s) to use the newly provisioned redis-cluster-sessions instance in gstg.
This is part of migrating ServiceRedisSessions to ServiceRedisClusterSessions (from Redis Sentinel to Redis Cluster).
Reference issue: gitlab-com/gl-infra/data-access/durability/team#35 (closed)
Change Details
- Services Impacted - ServiceRedisSessions / ServiceRedisClusterSessions
- Change Technician - @fshabir
- Change Reviewer - @marcogreg
- Scheduled Date and Time (UTC in format YYYY-MM-DD HH:MM) - 2025-01-28 03:30 (1st round), 2025-02-06 06:00 (2nd round)
- Time tracking - 120 minutes
- Downtime Component - none
Set Maintenance Mode in GitLab
If your change involves scheduled maintenance, add a step to set and unset maintenance mode per our runbooks. This will make sure SLA calculations adjust for the maintenance period.
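The Downtime Component for this change is none, so maintenance mode should not be needed here. For reference only, a minimal sketch of toggling it from a Rails console, assuming the `maintenance_mode` application setting is the mechanism used (the runbooks remain the canonical procedure):

```ruby
# Sketch only: toggle GitLab maintenance mode via application settings from a
# Rails console. Follow the runbooks for the canonical set/unset procedure.
::Gitlab::CurrentSettings.update!(
  maintenance_mode: true,
  maintenance_mode_message: 'Scheduled maintenance in progress'
)

# ... perform the maintenance window work ...

::Gitlab::CurrentSettings.update!(maintenance_mode: false)
```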
Detailed steps for the change
Change Steps - steps to take to execute the change
Estimated Time to Complete (mins) - 120 minutes
- Set label ~change::in-progress: /label ~change::in-progress
- Add the redis password to HashiCorp Vault for the omnibus-gitlab Chef-based instances deployed on VMs:

glsh vault proxy
export VAULT_PROXY_ADDR="socks5://localhost:18200"
glsh vault login
vault kv get -format=json chef/env/gstg/shared/gitlab-omnibus-secrets | jq '.data.data' > data.json
cat data.json | jq --arg PASSWORD <RAILS_REDACTED> '."omnibus-gitlab".gitlab_rb."gitlab-rails".redis_yml_override.cluster_sessions.password = $PASSWORD' > data.json.tmp
diff -u data.json data.json.tmp
mv data.json.tmp data.json
vault kv patch chef/env/gstg/shared/gitlab-omnibus-secrets @data.json
rm data.json
- Update the Chef role to use the cluster by merging MR: https://gitlab.com/gitlab-com/gl-infra/chef-repo/-/merge_requests/5481
- Verify that the new cluster is accessible from a Rails console:

ssh console-01-sv-gstg.c.gitlab-gstg.internal
[ gstg ] production> Gitlab::Redis::ClusterSessions.with { |c| c.ping }
=> "PONG"
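Optionally, go one step beyond a bare PING with a write/read/expire round-trip. This is a hedged sketch for the same gstg Rails console; the migration-check key name is arbitrary and used only for this test:

```ruby
# Sketch only: round-trip a throwaway key through the new cluster to confirm
# writes, reads and expiry work, then clean it up.
Gitlab::Redis::ClusterSessions.with do |redis|
  redis.set('migration-check', 'ok', ex: 60) # 60s TTL so it expires on its own
  value = redis.get('migration-check')       # => "ok"
  ttl   = redis.ttl('migration-check')       # => roughly 60
  redis.del('migration-check')
  { value: value, ttl: ttl }
end
```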
- Update HashiCorp Vault with the redis cluster password for Rails, to be used by the k8s deployments:

vault kv put k8s/env/gstg/ns/gitlab/redis-cluster-sessions-rails password=<RAILS_REDACTED>
- Update the k8s workload to use the redis cluster nodes by merging MR: gitlab-com/gl-infra/k8s-workloads/gitlab-com!4125 (merged)
- Enable feature flag use_primary_and_secondary_stores_for_sessions to start dual-write:

/chatops run feature set use_primary_and_secondary_stores_for_sessions true --staging
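Before starting the week-long wait, a quick sanity check that the flag took effect. A hedged sketch for the gstg Rails console:

```ruby
# Sketch only: confirm the dual-write flag reads back as enabled and the new
# cluster still answers from the application side.
Feature.enabled?(:use_primary_and_secondary_stores_for_sessions) # => true expected
Gitlab::Redis::ClusterSessions.with { |c| c.ping }               # => "PONG"
```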
- Wait for 1 week (the TTL of all sessions in Redis) to allow dual-writes to migrate all data. Resume only after 2025-02-04 06:02.
Migrate keys with the redis_diff.rb script
- Set up the environment on a console node using #15875 (comment 1440303927) as reference.
- To set up the folders:

# on local machine in the runbooks project
tar cvf migrate-script.tar renovate.json scripts/redis_diff.rb Gemfile scripts/redis_key_compare.rb
scp migrate-script.tar console-01-sv-gstg.c.gitlab-gstg.internal:/home/<username>
# on the console node
tar xvf migrate-script.tar
bundle install # gem install if the node does not have bundle

Both redis-sessions.yml and redis-cluster-sessions.yml should be at the same level as the scripts folder.
redis-sessions.yml should be symlinked as source.yml. We can use replica nodes since we are only writing to the destination; the source is read-only.
# in redis-sessions.yml
url: redis://:$REDIS_REDACTED@redis-sessions-01-db-gstg.c.gitlab-staging-1.internal:6379

redis-cluster-sessions.yml should be symlinked as destination.yml:
# in redis-cluster-sessions.yml
nodes:
  - host: redis-cluster-sessions-shard-01-01-db-gstg.c.gitlab-staging-1.internal
    port: 6379
  - host: redis-cluster-sessions-shard-01-02-db-gstg.c.gitlab-staging-1.internal
    port: 6379
  - host: redis-cluster-sessions-shard-01-03-db-gstg.c.gitlab-staging-1.internal
    port: 6379
  - host: redis-cluster-sessions-shard-02-01-db-gstg.c.gitlab-staging-1.internal
    port: 6379
  - host: redis-cluster-sessions-shard-02-02-db-gstg.c.gitlab-staging-1.internal
    port: 6379
  - host: redis-cluster-sessions-shard-02-03-db-gstg.c.gitlab-staging-1.internal
    port: 6379
  - host: redis-cluster-sessions-shard-03-01-db-gstg.c.gitlab-staging-1.internal
    port: 6379
  - host: redis-cluster-sessions-shard-03-02-db-gstg.c.gitlab-staging-1.internal
    port: 6379
  - host: redis-cluster-sessions-shard-03-03-db-gstg.c.gitlab-staging-1.internal
    port: 6379
password: REDIS_REDACTED
username: rails

The passwords can be found in:
- ServiceRedisSessions: gitlab_rails['redis_sessions_instance']
- ServiceRedisClusterSessions: created in vault as part of https://gitlab.com/gitlab-com/runbooks/-/blob/master/docs/redis/provisioning-redis-cluster.md#2-configure-gitlab-rails (check for cluster_sessions)
Symlink the files as follows:

ln -s redis-sessions.yml source.yml
ln -s redis-cluster-sessions.yml destination.yml
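Before running redis_diff.rb, a quick connectivity check of both configs can save a wasted run. This is a hedged sketch that assumes the redis gem (4.x, with cluster support) is installed via the Gemfile above and that the YAML layouts match the examples; the redacted credentials must be filled in first:

```ruby
# Sketch only: load source.yml / destination.yml and confirm both sides answer.
require 'yaml'
require 'redis'

source      = YAML.safe_load(File.read('source.yml'))
destination = YAML.safe_load(File.read('destination.yml'))

# Source is a single (replica) node reachable by URL.
Redis.new(url: source['url']).ping # => "PONG"

# Destination is a Redis Cluster; redis gem 4.x accepts a list of node URLs.
cluster_nodes = destination['nodes'].map { |n| "redis://#{n['host']}:#{n['port']}" }
Redis.new(
  cluster:  cluster_nodes,
  username: destination['username'],
  password: destination['password']
).ping # => "PONG"
```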
- Enable feature flag use_primary_store_as_default_for_sessions to switch reads to the Redis Cluster:

/chatops run feature set use_primary_store_as_default_for_sessions true --staging
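A hedged sanity check that the read switch took effect, from the gstg Rails console. It only confirms the flag state and that a key written through the application-facing Sessions store lands in the cluster; the login test below remains the real end-to-end check. The read-switch-check key name is arbitrary:

```ruby
# Sketch only: flag state plus a write-through check. setex/get/del are used
# because the sessions store relies on them, so the MultiStore wrapper should
# forward them; adjust if the console reports an unsupported command.
Feature.enabled?(:use_primary_store_as_default_for_sessions) # => true expected

Gitlab::Redis::Sessions.with { |c| c.setex('read-switch-check', 60, 'ok') }
Gitlab::Redis::ClusterSessions.with { |c| c.get('read-switch-check') } # => "ok" expected
Gitlab::Redis::Sessions.with { |c| c.del('read-switch-check') }
```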
- Test login on staging.gitlab.com.
- Disable feature flag use_primary_and_secondary_stores_for_sessions to stop dual-write and end the migration:

/chatops run feature set use_primary_and_secondary_stores_for_sessions false --staging
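Before marking the change complete, a final hedged spot-check from the gstg Rails console that the flags are in their end state and the cluster answers on its own:

```ruby
# Sketch only: expected end state after the steps above.
Feature.enabled?(:use_primary_and_secondary_stores_for_sessions) # => false expected
Feature.enabled?(:use_primary_store_as_default_for_sessions)     # => true expected
Gitlab::Redis::ClusterSessions.with { |c| c.ping }               # => "PONG"
```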
- Set label ~change::complete: /label ~change::complete
Rollback
Rollback steps - steps to be taken in the event of a need to rollback this change
Estimated Time to Complete (mins) - 30 mins
- Revert MRs:
  - https://gitlab.com/gitlab-com/gl-infra/chef-repo/-/merge_requests/5481
- Set label ~change::aborted: /label ~change::aborted
Monitoring
Key metrics to observe
- Metric: Apdex and Error SLIs
- Location: https://dashboards.gitlab.net/d/web-main/web3a-overview?from=now-6h%2Fm&orgId=1&timezone=utc&to=now%2Fm&var-PROMETHEUS_DS=mimir-gitlab-gstg&var-environment=gstg&var-stage=main
- What changes to this metric should prompt a rollback: Drop in Apdex or increase in error SLI
 
- Metric: Auth related metrics
- Location: https://dashboards.gitlab.net/d/JyaDfEWWz/user-authentication-events?orgId=1&from=now-6h&to=now&timezone=utc&var-env=gstg&var-environment=gstg&var-type=api&var-type=git&var-type=web&var-type=websockets&refresh=5m
- What changes to this metric should prompt a rollback: Any anomalies in any of these metrics
 
Change Reviewer checklist
- Check if the following applies:
  - The scheduled day and time of execution of the change is appropriate.
  - The change plan is technically accurate.
  - The change plan includes estimated timing values based on previous testing.
  - The change plan includes a viable rollback plan.
  - The specified metrics/monitoring dashboards provide sufficient visibility for the change.
 
- Check if the following applies:
  - The complexity of the plan is appropriate for the corresponding risk of the change (i.e. the plan contains clear details).
  - The change plan includes success measures for all steps/milestones during the execution.
  - The change adequately minimizes risk within the environment/service.
  - The performance implications of executing the change are well-understood and documented.
  - The specified metrics/monitoring dashboards provide sufficient visibility for the change.
    - If not, is it possible (or necessary) to make changes to observability platforms for added visibility?
 
- The change has a primary and secondary SRE with knowledge of the details available during the change window.
- The change window has been agreed with Release Managers in advance of the change. If the change is planned for APAC hours, this issue has an agreed pre-change approval.
- The labels blocks deployments and/or blocks feature-flags are applied as necessary.
 
Change Technician checklist
- Check if all items below are complete:
  - The change plan is technically accurate.
  - This Change Issue is linked to the appropriate Issue and/or Epic.
  - Change has been tested in staging and results noted in a comment on this issue.
  - A dry-run has been conducted and results noted in a comment on this issue.
  - The change execution window respects the Production Change Lock periods.
  - For C1 and C2 change issues, the change event is added to the GitLab Production calendar.
  - For C1 and C2 change issues, the SRE on-call has been informed prior to change being rolled out. (In the #production channel, mention @sre-oncall and this issue and await their acknowledgement.)
  - For C1 and C2 change issues, the SRE on-call provided approval with the eoc_approved label on the issue.
  - For C1 and C2 change issues, the Infrastructure Manager provided approval with the manager_approved label on the issue. Mention @gitlab-org/saas-platforms/inframanagers in this issue to request approval and provide visibility to all infrastructure managers.
  - Release managers have been informed prior to any C1, C2, or blocks deployments change being rolled out. (In the #production channel, mention @release-managers and this issue and await their acknowledgment.)
  - There are currently no active incidents that are severity1 or severity2.
  - If the change involves doing maintenance on a database host, an appropriate silence targeting the host(s) should be added for the duration of the change.