Fix flaky batching test in Ci::DeleteObjectsService

Problem

Related to gitlab-org/quality/engineering-productivity/master-broken-incidents#20030

I know that we use this workflow to stub limits, but I don't see what else could cause these failures.

The tests for Ci::DeleteObjectsService were using stub_const to override the BATCH_SIZE constant, which proved unreliable in CI environments. This caused intermittent failures in two examples:

  • Ci::DeleteObjectsService#execute with artifacts both ready and not ready for deletion limits the number of records removed
  • Ci::DeleteObjectsService#execute with artifacts both ready and not ready for deletion removes records in order

The issue is that the constant's value can be read and captured before the stub takes effect, or classes can be loaded differently in CI, so the overridden value never reaches the code at execution time.
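One way this can happen, sketched in plain Ruby outside RSpec (a hypothetical `Service` class, not the real one): if a constant's value is copied at class-load time, re-assigning the constant later, as a stub effectively does, has no effect on the captured copy.

```ruby
# Illustrative only: shows why overriding a constant after its value has
# been captured does not change the already-captured value.
class Service
  BATCH_SIZE = 100
  # Value copied when the class body is evaluated:
  DEFAULT = BATCH_SIZE
end

# Re-assign the constant, roughly what a constant stub does under the hood.
Service.send(:remove_const, :BATCH_SIZE)
Service.const_set(:BATCH_SIZE, 2)

Service::BATCH_SIZE # => 2
Service::DEFAULT    # => 100, still the original value
```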

Solution

Instead of relying on constant stubbing, make batch_size an instance variable that can be set via the initializer:

  1. Add attr_reader :batch_size to the service
  2. Add an initialize method that accepts batch_size as a parameter with a default value of BATCH_SIZE
  3. Replace all references to the BATCH_SIZE constant with the instance variable @batch_size
  4. Update tests to create a new service instance with the desired batch size instead of stubbing the constant
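The steps above can be sketched in isolation as follows (a simplified stand-in for the service; the real class's query, locking, and destruction logic are omitted, and `next_batch` is a hypothetical method shown only to illustrate where the instance variable replaces the constant):

```ruby
# Simplified sketch of the constructor-injection pattern described above.
class DeleteObjectsService
  BATCH_SIZE = 100

  attr_reader :batch_size

  # Accept an explicit batch size, defaulting to the class constant.
  def initialize(batch_size: BATCH_SIZE)
    @batch_size = batch_size
  end

  # Reads the instance value (via the reader) instead of the constant.
  def next_batch(records)
    records.first(batch_size)
  end
end
```

Tests can then construct the service with the size they need, e.g. `described_class.new(batch_size: 2)`, rather than stubbing the constant.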

This approach is more reliable and makes the dependency explicit in the tests.

Changes

  • app/services/ci/delete_objects_service.rb: Add initializer and use instance variable
  • spec/services/ci/delete_objects_service_spec.rb: Update tests to use constructor injection instead of constant stubbing
Edited by Marius Bobin
