Avoid setting an invalid TTL in duplicate job middleware
## What does this MR do and why?
The Lua script in the Sidekiq duplicate job middleware attempts to update the deduplicated flag in a msgpack-serialized cookie. When it writes the key back, it also tries to preserve the existing TTL. However, if the TTL has dropped to 0, Redis raises an error: `@user_script:7: ERR invalid expire time in set`.
When we lowered the deduplication TTL of `::Namespaces::ProcessSyncEventsWorker` from 6 hours to 1 minute in !120580 (merged), this exposed a bug in the Lua script because the tests repeatedly call this middleware. To fix this, we skip writing the key back if the TTL is not > 0.
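The guard can be sketched in Ruby against a stub Redis client (names like `FakeRedis` and `rewrite_cookie` are illustrative, not GitLab's actual code; the real fix lives in the middleware's Lua script):

```ruby
# Minimal sketch, assuming the Lua script's behavior: write the updated
# cookie back only while a positive TTL remains.
class FakeRedis
  attr_reader :writes

  def initialize(ttls = {})
    @ttls = ttls   # key => remaining TTL in milliseconds
    @writes = []
  end

  # Redis convention: -2 when the key does not exist, -1 when it has no TTL.
  def pttl(key)
    @ttls.fetch(key, -2)
  end

  def set(key, value, px:)
    # Real Redis rejects SET ... PX 0 with "ERR invalid expire time in set".
    raise "ERR invalid expire time in set" unless px > 0

    @writes << [key, value, px]
  end
end

# The fix: skip the write-back when no positive TTL remains, and let the
# key expire on its own instead of raising in the script.
def rewrite_cookie(redis, key, cookie)
  ttl = redis.pttl(key)
  redis.set(key, cookie, px: ttl) if ttl.positive?
end

redis = FakeRedis.new("dup:job" => 250)
rewrite_cookie(redis, "dup:job", "cookie-v2")   # written back, TTL preserved
rewrite_cookie(redis, "dup:other", "cookie-v2") # key gone: write-back skipped
```

Before the fix, the second call's equivalent would have issued `SET ... PX 0` (or `EX 0`) and raised; with the guard, an expired or missing key simply stays unwritten.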
Relates to #411325 (closed)
## How to set up and validate locally
1. In one window, watch the Redis logs via `redis-cli -s redis/redis.socket monitor | tee /tmp/redis.log`.
2. Stop Sidekiq to avoid polluting the logs: `gdk stop rails-background-jobs`.
3. On `master`, run `bundle exec rspec --fail-fast "spec/requests/api/ci/pipelines_spec.rb"`. If you run `grep duplicate /tmp/redis.log | grep set`, you'll see the expiration time ticking down until it hits an expiration of 0.
4. Repeat step 3 with this branch, and you won't see the 0.
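To show what the `grep` pipeline in step 3 isolates, here is a sketch over fabricated monitor lines (the key names and values below are made up; only the general shape of `MONITOR` output is real):

```shell
# Fabricated sample of a captured `redis-cli ... monitor` log:
cat > /tmp/redis-sample.log <<'EOF'
1680000000.100000 [0 unix:0] "set" "resque:gitlab:duplicate:default:abc123" "cookie" "ex" "42"
1680000000.200000 [0 unix:0] "get" "resque:gitlab:duplicate:default:abc123"
1680000000.300000 [0 unix:0] "some" "other:key" "value"
EOF

# The pipeline keeps only SET commands on deduplication keys, so the
# trailing "ex" argument shows the TTL being written back each time:
grep duplicate /tmp/redis-sample.log | grep set
```

On `master`, that trailing expiry argument counts down across repeated writes; once it reaches `"0"`, the next script invocation fails with the `ERR invalid expire time in set` error described above.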
## MR acceptance checklist
This checklist encourages us to confirm any changes have been analyzed to reduce risks in quality, performance, reliability, security, and maintainability.
- [ ] I have evaluated the MR acceptance checklist for this MR.