Cleanup problematic sessions in redis
Production Change
Change Summary
In #5546 (closed) we found problem sessions; while trying to resize Redis to survive this temporarily (https://gitlab.com/gitlab-com/gl-infra/production/-/issues/5547) we found that the resyncs do not complete, presumably due to the amount of data. Therefore we need to bring forward the data cleanup, run in the console (a C2 change). Because this will be a long-running operation, we will mark it as a C3 once it has been initiated, so as not to block deployments.
The list of keys was generated by @stanhu and consists exclusively of keys of the form session:gitlab:2::HEX, each 82 characters long, which we validate in the loop.
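As a hedged illustration (the helper name and the hex regex are ours, not part of the change), the per-key validation performed in the loop below amounts to this predicate: the `session:gitlab:2::` prefix is 18 characters, so an 82-character key leaves 64 hex characters of session id.

```ruby
# Hypothetical pre-flight predicate mirroring the in-loop validation:
# a valid key is exactly 82 characters, i.e. the 18-character
# "session:gitlab:2::" prefix followed by 64 hex characters.
SESSION_KEY_PREFIX = "session:gitlab:2::"

def valid_session_key?(key)
  key.length == 82 &&
    key.start_with?(SESSION_KEY_PREFIX) &&
    key.delete_prefix(SESSION_KEY_PREFIX).match?(/\A\h{64}\z/)
end
```

Running this over the key file before the change would confirm that every line will pass the loop's safety check.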
Change Details
- Services Impacted - Redis
- Change Technician - @cmiskell
- Change Reviewer - @devin
- Time tracking - 5hrs
- Downtime Component - None
Detailed steps for the change
Pre-Change Steps - steps to be completed before execution of the change
Estimated Time to Complete (mins) - 1m
- Set label change::in-progress on this issue
Change Steps - steps to take to execute the change
Estimated Time to Complete (mins) - 10hrs
In a tmux session on console-01, in a rails console (sudo -u git /usr/bin/gitlab-rails console), run this code:
```ruby
count = 0
Gitlab::Redis::SharedState.with do |r|
  File.foreach('/tmp/2021-09-16-bad-redis-keys.txt') do |line|
    key = line.chomp
    # Safety check: skip anything that is not an 82-char session key
    if (key.length != 82) || !key.start_with?("session:gitlab:2::")
      puts "Not processing #{key}"
      next
    end
    r.del(key)
    count += 1
    puts count if (count % 10_000) == 0
    sleep(0.0005) # no more than 2000/s
  end
end
```
The rate is capped at 2K/s to match the highest normal rate of DELs we see on the persistent Redis. There are 37M session ids to delete, which will take about 5 hours plus overhead to complete. This is a balance between speed and not slamming Redis too hard. We may review this in the early stages and decide to reduce the sleep / increase the rate if Redis performance is acceptable.
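A quick sanity check of that arithmetic (illustrative values only, taken from the figures above): sleeping 1/rate seconds per DEL bounds throughput at roughly `rate` deletions per second, ignoring the small cost of the DEL itself.

```ruby
# Rate-cap arithmetic for the deletion loop above (values from this issue).
target_rate    = 2000.0            # deletions per second
sleep_interval = 1.0 / target_rate # 0.0005s per key, the sleep in the loop

total_keys      = 37_000_000
estimated_hours = total_keys / target_rate / 3600.0 # ~5.1 hours, before overhead
```

Dropping the sleep to, say, 0.00025s would double the ceiling to ~4K/s and roughly halve the run time, which is the knob referred to above.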
Post-Change Steps - steps to take to verify the change
Rollback
Rollback steps - steps to be taken in the event of a need to rollback this change
Estimated Time to Complete (mins) - 0m
There is no rollback. In case the nodes become saturated, we will stop the running script and adjust the delete rate.
Monitoring
Key metrics to observe
Specifically memory saturation; we expect it to decrease slowly over the run time.
Summary of infrastructure changes
- Does this change introduce new compute instances?
- Does this change re-size any existing compute instances?
- Does this change introduce any additional usage of tooling like Elastic Search, CDNs, Cloudflare, etc?
Summary of the above
Changes checklist
- This issue has a criticality label (e.g. C1, C2, C3, C4) and a change-type label (e.g. change::unscheduled, change::scheduled) based on the Change Management Criticalities.
- This issue has the change technician as the assignee.
- Pre-Change, Change, Post-Change, and Rollback steps have been filled out and reviewed.
- This Change Issue is linked to the appropriate Issue and/or Epic.
- Necessary approvals have been completed based on the Change Management Workflow.
- Change has been tested in staging and results noted in a comment on this issue.
- A dry-run has been conducted and results noted in a comment on this issue.
- SRE on-call has been informed prior to the change being rolled out. (In the #production channel, mention @sre-oncall and this issue and await their acknowledgement.)
- Release managers have been informed (if needed; cases include DB changes) prior to the change being rolled out. (In the #production channel, mention @release-managers and this issue and await their acknowledgment.)
- There are currently no active incidents.