Fix flaky batching test in Ci::DeleteObjectsService
## Problem
Related to gitlab-org/quality/engineering-productivity/master-broken-incidents#20030
I know that we use this workflow to stub limits, but I don't see what else could cause these failures.
The tests for `Ci::DeleteObjectsService` were using `stub_const` to override the `BATCH_SIZE` constant, which is unreliable in CI environments. This caused intermittent failures in:
- `Ci::DeleteObjectsService#execute with artifacts both ready and not ready for deletion limits the number of records removed`
- `Ci::DeleteObjectsService#execute with artifacts both ready and not ready for deletion removes records in order`
The issue is that a constant overridden with `stub_const` can be cached or loaded differently in CI, so the stubbed value may not be in effect at execution time.
## Solution
Instead of relying on constant stubbing, make `batch_size` an instance variable that can be set via the initializer:
- Add `attr_reader :batch_size` to the service
- Add an `initialize` method that accepts `batch_size` as a parameter with a default value of `BATCH_SIZE`
- Replace all references to the `BATCH_SIZE` constant with the instance variable `@batch_size`
- Update tests to create a new service instance with the desired batch size instead of stubbing the constant
This approach is more reliable and makes the dependency explicit in the tests.
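As a sketch, the refactor looks roughly like this. This is a simplified stand-in, not the actual implementation: the real service deletes `Ci::DeletedObject` rows, and here a plain array and the `BATCH_SIZE` value of 100 stand in for illustration.

```ruby
# Simplified sketch of the constructor-injection refactor. A plain array
# stands in for the database query so the idea is runnable on its own.
module Ci
  class DeleteObjectsService
    BATCH_SIZE = 100 # illustrative default; not the real value

    attr_reader :batch_size

    # Tests can inject a small batch size instead of stubbing the constant
    def initialize(batch_size: BATCH_SIZE)
      @batch_size = batch_size
    end

    # Removes up to batch_size objects from the queue and returns them
    def execute(queue)
      queue.shift(batch_size)
    end
  end
end

# Test-side usage: construct with an explicit batch size rather than
# stub_const("Ci::DeleteObjectsService::BATCH_SIZE", 2)
service = Ci::DeleteObjectsService.new(batch_size: 2)
queue = [1, 2, 3, 4]
service.execute(queue) # removes the first two objects from the queue
```

Because the batch size is now a constructor argument, the dependency is visible at the call site and no load-order or caching behavior can silently undo it.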
## Changes
- `app/services/ci/delete_objects_service.rb`: Add initializer and use instance variable
- `spec/services/ci/delete_objects_service_spec.rb`: Update tests to use constructor injection instead of constant stubbing
Edited by Marius Bobin