[PRE] Configure application for using redis-cluster-sessions in pre
Pre Change
Change Summary
Configure Vault secrets and the Rails application, in both VMs and Kubernetes, to use the newly provisioned redis-cluster-sessions instance in pre.
This is part of migrating Gitlab::Redis::Sessions from Redis Sentinel to Redis Cluster.
Reference issue: gitlab-com/gl-infra/data-access/durability/team#35 (closed)
Change Details
- Services Impacted - ServiceRedisSessions / ServiceRedisClusterSessions
- Change Technician - @fshabir
- Change Reviewer - @marcogreg
- Scheduled Date and Time (UTC in format YYYY-MM-DD HH:MM) - 2025-01-27 04:00
- Time tracking - 90 minutes
- Downtime Component - none
Set Maintenance Mode in GitLab
If your change involves scheduled maintenance, add a step to set and unset maintenance mode per our runbooks. This will make sure SLA calculations adjust for the maintenance period.
Detailed steps for the change
Change Steps - steps to take to execute the change
Estimated Time to Complete (mins) - 90 minutes
- Set label change::in-progress - /label ~change::in-progress
- Add the Redis password to HashiCorp Vault for the Omnibus GitLab (Chef-based) instances deployed to VMs:
glsh vault proxy
export VAULT_PROXY_ADDR="socks5://localhost:18200"
glsh vault login
vault kv get -format=json chef/env/pre/shared/gitlab-omnibus-secrets | jq '.data.data' > data.json
jq --arg PASSWORD '<RAILS_REDACTED>' '."omnibus-gitlab".gitlab_rb."gitlab-rails".redis_yml_override.cluster_sessions.password = $PASSWORD' data.json > data.json.tmp
diff -u data.json data.json.tmp
mv data.json.tmp data.json
vault kv patch chef/env/pre/shared/gitlab-omnibus-secrets @data.json
rm data.json
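The jq edit above can be rehearsed locally against a minimal stand-in for the secret before touching Vault. This is a sketch: the sample structure and the example password are illustrative, not the real secret.

```shell
# Rehearse the jq password injection on a throwaway sample (illustrative only).
cat > /tmp/data.json <<'EOF'
{"omnibus-gitlab":{"gitlab_rb":{"gitlab-rails":{"redis_yml_override":{"cluster_sessions":{}}}}}}
EOF
jq --arg PASSWORD 'example-password' \
  '."omnibus-gitlab".gitlab_rb."gitlab-rails".redis_yml_override.cluster_sessions.password = $PASSWORD' \
  /tmp/data.json > /tmp/data.json.tmp
# The password should now appear under cluster_sessions.
grep -q 'example-password' /tmp/data.json.tmp && echo "password injected"
```

Comparing `/tmp/data.json` and `/tmp/data.json.tmp` with diff, as in the real steps, confirms the filter only adds the one key.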
- Update the Chef role to use the cluster by merging MR: https://gitlab.com/gitlab-com/gl-infra/chef-repo/-/merge_requests/5470
- Verify that the new cluster is accessible from the Rails console:
ssh console-01-sv-pre.c.gitlab-pre.internal
[ pre ] production> Gitlab::Redis::ClusterSessions.with { |c| c.ping }
=> "PONG"
[ pre ] production>
- Update HashiCorp Vault with the Redis Cluster password for Rails, to be used by the Kubernetes deployments:
vault kv put k8s/env/pre/ns/gitlab/redis-cluster-sessions password=<RAILS_REDACTED>
- Update the Kubernetes workloads to use the Redis Cluster nodes by merging MR: gitlab-com/gl-infra/k8s-workloads/gitlab-com!4113 (closed)
- Enable feature flag use_primary_and_secondary_stores_for_sessions to start dual-write:
/chatops run feature set use_primary_and_secondary_stores_for_sessions true --pre
- Migrate keys with the redis_diff.rb script
- Set up the environment on a console node, using #15875 (comment 1440303927) as a reference.
- To set up the folders:
# on local machine in the runbooks project
tar cvf migrate-script.tar renovate.json scripts/redis_diff.rb Gemfile scripts/redis_key_compare.rb
scp migrate-script.tar console-01-sv-pre.c.gitlab-pre.internal:/home/<username>
# in console node
tar xvf migrate-script.tar
bundle install # gem install if the node does not have bundle
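As a quick sanity check before copying the tarball over, its contents can be listed to confirm all four files made it in. This is a sketch using empty stand-in files under /tmp; the real tarball is built in the runbooks checkout.

```shell
# Build a throwaway copy of the tarball from empty stand-ins and list its contents.
mkdir -p /tmp/runbooks/scripts && cd /tmp/runbooks
touch renovate.json Gemfile scripts/redis_diff.rb scripts/redis_key_compare.rb
tar cf migrate-script.tar renovate.json scripts/redis_diff.rb Gemfile scripts/redis_key_compare.rb
# All four entries should be listed.
tar tf migrate-script.tar
```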
Both redis-sessions.yml and redis-cluster-sessions.yml should sit at the same level as the scripts folder.
redis-sessions.yml should be symlinked as source.yml. We can point it at replica nodes, since the script only writes to the destination; the source is read-only.
# in redis-sessions.yml
url: redis://:$REDIS_REDACTED@redis-sessions-01-db-pre.c.gitlab-pre.internal:6379
redis-cluster-sessions.yml should be symlinked as destination.yml:
# in redis-cluster-sessions.yml
nodes:
  - host: redis-cluster-sessions-shard-01-01-db-pre.c.gitlab-pre.internal
    port: 6379
  - host: redis-cluster-sessions-shard-01-02-db-pre.c.gitlab-pre.internal
    port: 6379
  - host: redis-cluster-sessions-shard-01-03-db-pre.c.gitlab-pre.internal
    port: 6379
  - host: redis-cluster-sessions-shard-02-01-db-pre.c.gitlab-pre.internal
    port: 6379
  - host: redis-cluster-sessions-shard-02-02-db-pre.c.gitlab-pre.internal
    port: 6379
  - host: redis-cluster-sessions-shard-02-03-db-pre.c.gitlab-pre.internal
    port: 6379
  - host: redis-cluster-sessions-shard-03-01-db-pre.c.gitlab-pre.internal
    port: 6379
  - host: redis-cluster-sessions-shard-03-02-db-pre.c.gitlab-pre.internal
    port: 6379
  - host: redis-cluster-sessions-shard-03-03-db-pre.c.gitlab-pre.internal
    port: 6379
password: REDIS_REDACTED
username: rails
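A cheap structural check on destination.yml can catch a truncated node list or missing credentials before the migration script runs. This is a sketch with a two-node sample file; the real file lists all nine shard nodes.

```shell
# Write a two-node sample destination.yml (illustrative; the real file has 9 nodes).
cat > /tmp/destination.yml <<'EOF'
nodes:
  - host: redis-cluster-sessions-shard-01-01-db-pre.c.gitlab-pre.internal
    port: 6379
  - host: redis-cluster-sessions-shard-01-02-db-pre.c.gitlab-pre.internal
    port: 6379
password: REDIS_REDACTED
username: rails
EOF
# Every node entry should pair a host with a port, and credentials must be present.
hosts=$(grep -c 'host:' /tmp/destination.yml)
ports=$(grep -c 'port:' /tmp/destination.yml)
[ "$hosts" -eq "$ports" ] && grep -q '^password:' /tmp/destination.yml \
  && grep -q '^username:' /tmp/destination.yml \
  && echo "destination.yml looks consistent ($hosts nodes)"
```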
The passwords can be found in:
- ServiceRedisSessions: gitlab_rails['redis_password']
- ServiceRedisClusterSessions: created in Vault as part of https://gitlab.com/gitlab-com/runbooks/-/blob/master/docs/redis/provisioning-redis-cluster.md#2-configure-gitlab-rails; check for cluster_sessions
Symlink the files as follows:
ln -s redis-sessions.yml source.yml
ln -s redis-cluster-sessions.yml destination.yml
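The resulting layout can be verified with readlink. This is a sketch using empty stand-in files in a scratch directory, just to show the expected link targets.

```shell
# Recreate the expected layout with empty stand-ins and confirm the symlinks.
mkdir -p /tmp/migrate-layout && cd /tmp/migrate-layout
touch redis-sessions.yml redis-cluster-sessions.yml
ln -sf redis-sessions.yml source.yml
ln -sf redis-cluster-sessions.yml destination.yml
readlink source.yml       # -> redis-sessions.yml
readlink destination.yml  # -> redis-cluster-sessions.yml
```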
- Enable feature flag use_primary_store_as_default_for_sessions to switch reads to the Redis Cluster:
/chatops run feature set use_primary_store_as_default_for_sessions true --pre
- Disable feature flag use_primary_and_secondary_stores_for_sessions to stop dual-write and end the migration:
/chatops run feature set use_primary_and_secondary_stores_for_sessions false --pre
- Set label change::complete - /label ~change::complete
Rollback
Rollback steps - steps to be taken in the event of a need to rollback this change
Estimated Time to Complete (mins) - 30 mins
- Revert MRs:
  - https://gitlab.com/gitlab-com/gl-infra/chef-repo/-/merge_requests/5406
- Set label change::aborted - /label ~change::aborted
Monitoring
Key metrics to observe
- Metric: Apdex and Error SLIs
- Location: https://dashboards.gitlab.net/d/redis-sessions-main/redis-sessions3a-overview?orgId=1&from=now-6h%2Fm&to=now%2Fm&timezone=utc&var-PROMETHEUS_DS=mimir-gitlab-pre&var-environment=pre
- What changes to this metric should prompt a rollback: Drop in Apdex or increase in error SLI
Change Reviewer checklist
- Check if the following applies:
  - The scheduled day and time of execution of the change is appropriate.
  - The change plan is technically accurate.
  - The change plan includes estimated timing values based on previous testing.
  - The change plan includes a viable rollback plan.
  - The specified metrics/monitoring dashboards provide sufficient visibility for the change.
- Check if the following applies:
  - The complexity of the plan is appropriate for the corresponding risk of the change (i.e. the plan contains clear details).
  - The change plan includes success measures for all steps/milestones during the execution.
  - The change adequately minimizes risk within the environment/service.
  - The performance implications of executing the change are well-understood and documented.
  - The specified metrics/monitoring dashboards provide sufficient visibility for the change.
    - If not, is it possible (or necessary) to make changes to observability platforms for added visibility?
  - The change has a primary and secondary SRE with knowledge of the details available during the change window.
  - The change window has been agreed with Release Managers in advance of the change. If the change is planned for APAC hours, this issue has an agreed pre-change approval.
  - The labels blocks deployments and/or blocks feature-flags are applied as necessary.
Change Technician checklist
- Check if all items below are complete:
  - The change plan is technically accurate.
  - This Change Issue is linked to the appropriate Issue and/or Epic.
  - Change has been tested in staging and results noted in a comment on this issue.
  - A dry-run has been conducted and results noted in a comment on this issue.
  - The change execution window respects the Production Change Lock periods.
  - For C1 and C2 change issues, the change event is added to the GitLab Production calendar.
  - For C1 and C2 change issues, the SRE on-call has been informed prior to the change being rolled out. (In the #production channel, mention @sre-oncall and this issue and await their acknowledgement.)
  - For C1 and C2 change issues, the SRE on-call provided approval with the eoc_approved label on the issue.
  - For C1 and C2 change issues, the Infrastructure Manager provided approval with the manager_approved label on the issue. Mention @gitlab-org/saas-platforms/inframanagers in this issue to request approval and provide visibility to all infrastructure managers.
  - Release managers have been informed prior to any C1, C2, or blocks deployments change being rolled out. (In the #production channel, mention @release-managers and this issue and await their acknowledgment.)
  - There are currently no active incidents that are severity::1 or severity::2.
  - If the change involves doing maintenance on a database host, an appropriate silence targeting the host(s) should be added for the duration of the change.