Job artifacts size in namespace statistics may not be updated when project is deleted
Problem
With `job_artifacts_size` being used as an efficient counter, we may have a problem when deleting a project. This is the current flow:

- project is deleted
- artifacts are deleted
- Efficient counter logic:
  - `job_artifacts_size` value in Redis is reduced by the size of the deleted records
  - a worker is scheduled async to flush the Redis value to the database
- By the time the `FlushCounterIncrementsWorker` runs, the `project_statistics` record may have been deleted by `project.destroy!`
- The worker exits early
- namespace statistics are not updated 💥
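The race can be illustrated with a minimal, self-contained sketch. All classes and names below are simplified stand-ins for illustration, not GitLab's actual implementation:

```ruby
# Stand-in for the Redis-backed efficient counter: deltas are buffered
# and flushed to the database later by an async worker.
class Counter
  attr_reader :pending

  def initialize
    @pending = 0
  end

  def increment(delta)
    @pending += delta
  end
end

# Stand-in for the project_statistics database row.
class ProjectStatistics
  attr_accessor :job_artifacts_size, :destroyed

  def initialize(size)
    @job_artifacts_size = size
    @destroyed = false
  end
end

# Stand-in for FlushCounterIncrementsWorker: applies the buffered delta
# to the row, but exits early if the row no longer exists.
def flush_counter(statistics, counter)
  return :noop if statistics.destroyed # early exit: the decrement is lost

  statistics.job_artifacts_size += counter.pending
  :flushed
end

stats = ProjectStatistics.new(10)  # 10GB of artifacts
counter = Counter.new

counter.increment(-10)             # artifacts deleted in bulk
stats.destroyed = true             # project.destroy! removes the row first
result = flush_counter(stats, counter)

puts result                        # prints "noop" — nothing downstream is updated
```

Because the flush is a no-op, nothing ever triggers a recalculation of the namespace-level totals.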
Proposal
Calling `Namespaces::ScheduleAggregationWorker` after a project is destroyed in `Projects::DestroyService` should solve the problem.
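The idea can be sketched in a self-contained way (the aggregation function and data shapes below are hypothetical simplifications, not GitLab's actual `Namespaces::ScheduleAggregationWorker`): once the destroyed project's `project_statistics` row is gone, recomputing the namespace total from the surviving rows gives the correct answer regardless of whether the per-project decrement was ever flushed.

```ruby
# Hypothetical namespace aggregation: sum artifacts size over the
# project_statistics rows that still exist in the namespace.
def aggregate_namespace_statistics(project_statistics_rows)
  project_statistics_rows.sum { |row| row[:job_artifacts_size] }
end

# Before destroy: two projects in the namespace, 10GB and 4GB.
rows = [
  { project_id: 1, job_artifacts_size: 10 },
  { project_id: 2, job_artifacts_size: 4 }
]

# Simplified Projects::DestroyService: project.destroy! cascades to the
# project_statistics row, then aggregation is scheduled for the namespace.
rows.reject! { |row| row[:project_id] == 1 }
namespace_total = aggregate_namespace_statistics(rows)

puts namespace_total # prints 4
```

The key property is that aggregation is a full recomputation, not an incremental update, so it cannot be broken by a lost decrement.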
Example: A project has 10GB of artifacts. We are destroying the project.

- `project_statistics.builds_artifacts_size: 10GB`.
- We remove artifacts in bulk and decrement the counter in Redis: `-10GB`.
- `FlushCounterIncrementsWorker` is scheduled to flush the value from Redis to the database (decrementing the column by 10GB).
- In the meantime `Projects::DestroyService` runs `project.destroy!`.
- When `FlushCounterIncrementsWorker` runs, the `project_statistics` record has been destroyed already and the worker fails.
- As we are removing the whole `project_statistics` record, we don't really care about decrementing it correctly at this point. If we schedule `Namespaces::ScheduleAggregationWorker` to update the namespace statistics based on the remaining `project_statistics` records, we should be fine.
- The Redis entry containing `-10GB` eventually expires.
Edited by Fabio Pitino