Commits on Source 14
-
Tristan Van Berkom authored
-
Tristan Van Berkom authored
This is what the whole resource.py thing was created for: the cleanup job must have exclusive access to the cache, while the pull and build jobs, which result in adding new artifacts, only require shared access. This is a part of #623
-
Tristan Van Berkom authored
This means there is no cap on shared resource requests. Together with the previous commit, this causes the cleanup job and the pull/build jobs to all require unlimited shared access to the CACHE resource, while only the cleanup job requires exclusive access to it. This is a part of #623
-
Tristan Van Berkom authored
This runs after every pull and does not need the cache exclusively; only the cleanup job requires the cache exclusively. Without this, every time a cache_size job is queued, all pull and build jobs would need to complete before the cache_size job could run exclusively, which is not good. This is a part of #623
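The three commits above hinge on the difference between shared and exclusive access to the CACHE resource. As a rough illustration of those semantics only (this is not resource.py's actual implementation, and all names below are invented), a bookkeeping model might look like this:

```python
# Toy model of the shared/exclusive semantics described above.
# Not BuildStream's resource.py; names and structure are invented.
class Resource:

    def __init__(self, max_shared=None):
        self._shared_holders = 0         # max_shared=None means "no cap"
        self._max_shared = max_shared
        self._exclusive_held = False

    def try_acquire_shared(self):
        # Shared access is granted as long as nobody holds the resource
        # exclusively and the (optional) cap has not been reached.
        if self._exclusive_held:
            return False
        if self._max_shared is not None and self._shared_holders >= self._max_shared:
            return False
        self._shared_holders += 1
        return True

    def try_acquire_exclusive(self):
        # Exclusive access requires that no shared holders remain, so a
        # cleanup job waits for in-flight pulls/builds to drain first.
        if self._exclusive_held or self._shared_holders:
            return False
        self._exclusive_held = True
        return True

    def release_shared(self):
        self._shared_holders -= 1

    def release_exclusive(self):
        self._exclusive_held = False


# Pull, build and cache_size jobs take shared access to the cache;
# only the cleanup job takes exclusive access.
CACHE = Resource(max_shared=None)
```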
-
Tristan Van Berkom authored
-
Tristan Van Berkom authored
The artifact cache provides the following public methods for external callers, but was hiding them away as if they were private:

 o ArtifactCache.add_artifact_size()
 o ArtifactCache.set_cache_size()

Mark these as properly public.
-
Tristan Van Berkom authored
Here we have a very private-looking _check_cache_size_real() function which no one would ever want to call from outside of the _scheduler; especially given the `_real()` in its name, one would look for another outward-facing API to use instead. However, this is not private to the scheduler, and is intended to be called by the `Queue` implementations.

 o Renamed this to check_cache_size()
 o Moved it to the public API section of the Scheduler object
 o Added the missing API documenting comment
 o Also added the missing API documenting comment to the private
   `_run_cleanup()` callback, which runs in response to completion of
   the cache size calculation job
 o Also placed the cleanup job logs into a cleanup subdirectory, for
   better symmetry with the cache_size jobs, which now have their own
   subdirectory
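The two commits above reorganize what counts as public API on the ArtifactCache and the Scheduler. A minimal sketch of how a Queue implementation would now call the public check_cache_size() entry point (method bodies and signatures here are assumptions, not BuildStream's exact code):

```python
# Illustrative only: bodies and signatures are assumptions.
class Scheduler:

    # Public API (intended for Queue implementations)
    #
    def check_cache_size(self):
        # Schedules a cache size calculation job; a cleanup job follows
        # if the quota turns out to be exceeded.
        pass

    # Private to the scheduler
    #
    def _run_cleanup(self, cache_size):
        # Callback which runs in response to completion of the
        # cache size calculation job.
        pass


class PullQueue:

    def __init__(self, scheduler):
        self._scheduler = scheduler

    def done(self, job, element, result, success):
        # Queue implementations now call the public check_cache_size()
        # rather than the former _check_cache_size_real().
        self._scheduler.check_cache_size()
```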
-
Tristan Van Berkom authored
The ArtifactCache._local variable used to exist in order to support a special hack allowing absolute paths to a remote artifact cache, all for the purpose of testing. This has all gone away with the introduction of CAS, leaving behind a stale variable.
-
Tristan Van Berkom authored
Previously, the API contract was to expose the estimated_size variable on the ArtifactCache instance for all to see, even though it is only relevant to the ArtifactCache abstract class code. Subclasses were expected to update the estimated_size variable in their calculate_cache_size() implementations.

To untangle this and hide away the estimated size, this commit does the following:

 o Introduces ArtifactCache.compute_cache_size() API for external callers
 o ArtifactCache.compute_cache_size() calls the abstract method for the
   CasCache subclass to implement
 o ArtifactCache.compute_cache_size() updates the private estimated_size
   variable
 o All direct callers of ArtifactCache.calculate_cache_size() have been
   updated to use the ArtifactCache.compute_cache_size() method instead,
   which takes care of updating anything local to the ArtifactCache
   abstract class code (the estimated_size)
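A rough sketch of the pattern described above, with guessed attribute names and a trivial placeholder CAS implementation:

```python
# Sketch only: attribute names and the placeholder return value are guesses.
from abc import ABC, abstractmethod


class ArtifactCache(ABC):

    def __init__(self):
        self._estimated_size = None   # hidden from callers and subclasses

    # compute_cache_size()
    #
    # Public entry point: delegates to the subclass implementation and
    # keeps the private estimate up to date, so that callers never
    # touch estimated_size directly.
    #
    def compute_cache_size(self):
        self._estimated_size = self.calculate_cache_size()
        return self._estimated_size

    # Abstract method, implemented by the CAS backend.
    #
    @abstractmethod
    def calculate_cache_size(self):
        ...


class CasCache(ArtifactCache):

    def calculate_cache_size(self):
        # The real implementation measures the local CAS on disk;
        # a placeholder value stands in here.
        return 0
```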
-
Tristan Van Berkom authored
There is no justification to hold onto this state here. Instead, just make `Element._assemble()` return the size of the artifact it cached, and localize handling of that return value in the BuildQueue implementation where the value is observed.
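A hypothetical sketch of the resulting control flow, where the size returned by `Element._assemble()` is observed only in the BuildQueue (method bodies and signatures below are invented):

```python
# Invented bodies; only the shape of the data flow is the point.
class Element:

    def _assemble(self):
        # ... stage, build and commit the artifact ...
        artifact_size = 0   # size of the artifact that was just cached
        return artifact_size


class BuildQueue:

    def __init__(self, artifacts):
        self._artifacts = artifacts

    def done(self, job, element, result, success):
        # The size returned by _assemble() surfaces as the job result and
        # is handled only here, instead of being stored on the Element.
        if success:
            self._artifacts.add_artifact_size(result)
```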
-
Tristan Van Berkom authored
This was previously poking directly at the Platform._instance. Also, use the name 'artifacts' to hold the artifact cache to be consistent with other parts of the codebase, instead of calling it 'cache'.
-
Tristan Van Berkom authored
This was previously poking directly at the Platform._instance. Also, use the name 'artifacts' to hold the artifact cache to be consistent with other parts of the codebase, instead of calling it 'cache'.
-
Tristan Van Berkom authored
The artifact cache is a singleton anyway; the Element itself has an internal handle on the artifact cache because it is needed very frequently, but an artifact cache is not element specific and should not be looked up by surrounding code on a per-element basis.

Updated _scheduler/queues/queue.py, _scheduler/queues/buildqueue.py and _scheduler/jobs/elementjob.py to get the artifact cache directly from the Platform.
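A minimal stand-in showing the singleton lookup the commit describes; the real Platform class has more to it, and the accessor names below are assumptions, not confirmed API:

```python
# Not BuildStream's Platform class; a minimal illustration of the lookup.
class Platform:
    """Minimal stand-in for the real Platform singleton."""

    _instance = None

    def __init__(self):
        self._artifacts = object()   # placeholder for the one ArtifactCache

    @classmethod
    def get_platform(cls):
        if cls._instance is None:
            cls._instance = cls()
        return cls._instance

    @property
    def artifactcache(self):
        return self._artifacts


# Queues and jobs now fetch the cache from the Platform rather than
# asking an Element for it:
artifacts = Platform.get_platform().artifactcache
```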
-
Tristan Van Berkom authored
This does a lot of house cleaning, finally bringing the cache cleanup logic to a level of comprehensibility. Changes in this commit include:

 o _artifactcache/artifactcache.py:

   _cache_size, _cache_quota and _cache_lower_threshold are now all
   private variables.

   get_approximate_cache_size() is now get_cache_size().

   Added get_quota_exceeded() for the purpose of safely checking if we
   have exceeded the quota.

   set_cache_size() now asserts that the passed size is not None; it is
   not acceptable to set a None size cache anymore.

 o _artifactcache/cascache.py:

   No longer set the ArtifactCache 'cache_size' variable violently in
   the commit() method.

   Also, the calculate_cache_size() method now unconditionally
   calculates the cache size, that is what it's for.

 o _scheduler/jobs/cachesizejob.py & _scheduler/jobs/cleanupjob.py:

   Now check the success status. Don't try to set the cache size in the
   case that the job was terminated.

 o _scheduler/jobs/elementjob.py & _scheduler/queues/queue.py:

   No longer passing around the cache size from child tasks, this
   happens only explicitly, not implicitly for all tasks.

 o _scheduler/queues/buildqueue.py & _scheduler/scheduler.py:

   Use the get_quota_exceeded() accessor.

This is a part of #623
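A sketch of how the accessors named above could fit together; the internals (lazy computation, the threshold value) are invented for illustration:

```python
# Invented internals; only the accessor surface follows the commit message.
class ArtifactCache:

    def __init__(self, cache_quota):
        self._cache_size = None                        # None means "not yet computed"
        self._cache_quota = cache_quota
        self._cache_lower_threshold = cache_quota / 2  # invented detail

    def get_cache_size(self):
        # Compute lazily if a size has never been set.
        if self._cache_size is None:
            self._cache_size = self.calculate_cache_size()
        return self._cache_size

    def set_cache_size(self, cache_size):
        assert cache_size is not None      # a None size is no longer acceptable
        self._cache_size = cache_size

    def get_quota_exceeded(self):
        return self.get_cache_size() > self._cache_quota

    def calculate_cache_size(self):
        # Implemented by the CAS backend in the real code.
        return 0
```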