A very large number of noop CleanupContainerRepositoryWorker runs for a single project


Summary

GitLab team members can read more in the ticket.

  • The customer observed a constant, large volume of S3 requests from GitLab.
  • The root cause was identified as ContainerExpirationPolicies::CleanupContainerRepositoryWorker.
    • From 07:29 to 07:33 there were 1912 Sidekiq log entries with "class": "ContainerExpirationPolicies::CleanupContainerRepositoryWorker" and job_status: done (see the sketch after this list).
    • All of them were for a single project.
    • Only two distinct container_repository_id values were involved.
    • The original tag size was ~800 for both repositories.
    • after_truncate_size was 250.
    • deleted_size was always zero.
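
The counts above come from the Sidekiq JSON log. A minimal sketch of the tally, assuming an Omnibus log path and that job_status and container_repository_id appear as top-level fields in each entry (as in the records quoted above):

```python
import json
from collections import Counter

WORKER = "ContainerExpirationPolicies::CleanupContainerRepositoryWorker"
counts = Counter()

# Path assumed for an Omnibus install; adjust for your deployment.
with open("/var/log/gitlab/sidekiq/current") as log:
    for line in log:
        try:
            entry = json.loads(line)
        except ValueError:
            continue
        # Field names assumed to match the entries quoted above.
        if entry.get("class") == WORKER and entry.get("job_status") == "done":
            counts[entry.get("container_repository_id")] += 1

# One line per container repository: repository ID and number of "done" jobs.
for repo_id, done_jobs in counts.most_common():
    print(repo_id, done_jobs)
```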

Project image tag cleanup settings (the equivalent API call is sketched after this list):

  • enabled
  • weekly
  • the next scheduled run is 24 hours from now, and the logs we have are from yesterday, which suggests last week's cleanup run is still in progress
  • keep 50 tags per image name
  • rule: remove tags older than 90 days, matching .*
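
For reference, the same settings expressed as a call to the projects API (the host, project ID, and token below are hypothetical; the container_expiration_policy_attributes parameters are the documented ones for editing a project's cleanup policy):

```python
import requests

GITLAB_URL = "https://gitlab.example.com"  # assumption: instance URL
PROJECT_ID = 12345                         # assumption: affected project's numeric ID
TOKEN = "<personal-access-token>"          # assumption: token with api scope

resp = requests.put(
    f"{GITLAB_URL}/api/v4/projects/{PROJECT_ID}",
    headers={"PRIVATE-TOKEN": TOKEN},
    json={
        "container_expiration_policy_attributes": {
            "enabled": True,
            "cadence": "7d",       # weekly
            "keep_n": 50,          # keep 50 tags per image name
            "older_than": "90d",   # remove tags older than 90 days
            "name_regex": ".*",    # removal rule matches every tag name
        }
    },
)
resp.raise_for_status()
print(resp.json().get("container_expiration_policy"))
```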

Steps to reproduce

See: #362914 (comment 953530856)

Example Project

What is the current bug behavior?

  1. A vast number of ContainerExpirationPolicies::CleanupContainerRepositoryWorker jobs are started for a single project, and every observed run completes with deleted_size zero, i.e. without deleting any tags.

What is the expected correct behavior?

Relevant logs and/or screenshots

Output of checks

Results of GitLab environment info

GitLab 14.10.0

Possible fixes
