We want to check if some file is already cached here, not the parent directory.
-
* Rename it to _commit_directory() because that is what it does, and for symmetry with _fetch_directory().
* Rename digest to dir_digest to make it clear this is a digest for a directory. A following commit will also reuse the same variable name.
* Document the method.
-
* Rename tree to dir_digest to make it clear this is a Digest object, and not a Tree object.
* Add documentation.
-
-
Jürg Billeter authored
This adds directory objects to the local repository before downloading the files in the directory. However, artifact references are still stored only after the complete directory has been downloaded, so there won't be any dangling references. This will be required anyway for partial download support.
-
Jürg Billeter authored
gRPC can handle 1 MiB payloads. Increase the size limit from 64 KiB to speed up uploads.
-
Jürg Billeter authored
Use 1 MiB as payload size limit on the server side for both individual downloads and batch uploads.
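A 1 MiB payload limit means a client has to split its blobs into batches that each fit under the cap. A minimal sketch of that batching, assuming blobs arrive as `(digest, data)` pairs (names illustrative, not BuildStream's actual code):

```python
# Sketch (not BuildStream's real implementation): group blob payloads
# into batches that stay under the 1 MiB gRPC message limit.
_MAX_PAYLOAD_BYTES = 1024 * 1024  # 1 MiB

def batch_blobs(blobs):
    """Yield lists of (digest, data) pairs whose combined size fits
    in one request; a blob larger than the limit goes in a batch of
    its own."""
    batch, batch_size = [], 0
    for digest, data in blobs:
        if batch and batch_size + len(data) > _MAX_PAYLOAD_BYTES:
            yield batch
            batch, batch_size = [], 0
        batch.append((digest, data))
        batch_size += len(data)
    if batch:
        yield batch
```

In practice the message size also includes per-blob digest overhead, so a real client would leave some headroom below the hard limit.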
-
Jürg Billeter authored
This uses BatchReadBlobs instead of individual blob download to speed up artifact pulling, if the server supports it. Fixes #554.
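The "if the server supports it" fallback can be sketched with a toy remote. `Remote` and its methods here are hypothetical stand-ins, not the real CAS client API:

```python
class Remote:
    """Toy stand-in for a CAS remote (illustrative only)."""
    def __init__(self, supports_batch, store):
        self.supports_batch_read = supports_batch
        self._store = store
        self.batch_calls = 0

    def read_blob(self, digest):
        # One round trip per blob (ByteStream-style download).
        return self._store[digest]

    def batch_read_blobs(self, digests):
        # One round trip for the whole set (BatchReadBlobs-style).
        self.batch_calls += 1
        return {d: self._store[d] for d in digests}

def fetch_blobs(remote, digests):
    """Prefer the batched RPC when the server advertises support,
    falling back to one request per blob otherwise."""
    if remote.supports_batch_read:
        return remote.batch_read_blobs(digests)
    return {d: remote.read_blob(d) for d in digests}
```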
-
Jürg Billeter authored
_artifactcache/cascache.py: Use BatchReadBlobs See merge request !838
-
Tristan Van Berkom authored
This will take care of silencing the status messages while checking submodules.
-
Tristan Van Berkom authored
The source fetchers might be a list or a generator. When it is a generator (as with the git source), we want to ensure that we silence the status messages which might occur as a result of consuming a source fetcher from the generator. This makes the logs less verbose.
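The subtlety above is that merely iterating a generator runs plugin code that can emit messages, so the consumption itself must happen inside the silenced context. A toy sketch (the `Plugin`/`silence` names are illustrative, not BuildStream's actual API):

```python
import contextlib

class Plugin:
    """Toy stand-in: collects status messages unless silenced."""
    def __init__(self):
        self._silenced = False
        self.messages = []

    @contextlib.contextmanager
    def silence(self):
        self._silenced = True
        try:
            yield
        finally:
            self._silenced = False

    def status(self, text):
        if not self._silenced:
            self.messages.append(text)

def list_fetchers(plugin, fetchers):
    # Consuming a generator of fetchers can itself emit status
    # messages (e.g. while checking submodules), so drain it under
    # the silence context rather than after leaving it.
    with plugin.silence():
        return list(fetchers)
```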
-
Tristan Van Berkom authored
fix status messages (1.2) See merge request !846
-
Refactor the push() and pull() implementations so that API additions needed for remote-execution is made easier. #454
-
Fixes #676.
-
This uses BatchUpdateBlobs instead of individual blob upload to speed up artifact pushing, if the server supports it. Fixes #677.
-
Tristan Van Berkom authored
CAS: Implement BatchUpdateBlobs support See merge request !844
-
Tristan Van Berkom authored
* Enhanced the base Job class to bookkeep which jobs have been terminated, and to consider them as `skipped` when asked via the `skipped` property.
* Enhanced the base Queue class to bookkeep the job statuses more carefully.

This fixes #479.
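The terminated-as-skipped bookkeeping might look like the following sketch; `Job` here is a toy class under assumed names, not BuildStream's actual implementation:

```python
class Job:
    """Sketch of terminated-job bookkeeping (illustrative names)."""
    def __init__(self):
        self._terminated = False
        self._result = None

    def terminate(self):
        # Remember that this job was killed rather than completed.
        self._terminated = True

    @property
    def skipped(self):
        # A terminated job is reported as skipped rather than failed,
        # so the final status summary stays accurate.
        return self._terminated or self._result == "skip"
```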
-
Tristan Van Berkom authored
_scheduler: Fix bookkeeping of terminated jobs See merge request !851
-
Tristan Van Berkom authored
For some reason, we now receive a SIGINT from the main loop even when the SIGINT occurred with the handler disconnected in an interactive prompt. This patch simply ignores any SIGINT events received from the main loop when we are already in the process of terminating. This fixes issue #693.
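The guard described above reduces to a single flag check in the interrupt handler. A minimal sketch, with hypothetical names (not the real `Scheduler` API):

```python
class Scheduler:
    """Sketch: drop interrupt events once termination has begun."""
    def __init__(self):
        self.terminating = False
        self.interrupts_handled = 0

    def terminate(self):
        self.terminating = True

    def on_sigint(self):
        if self.terminating:
            # Already shutting down; a stray SIGINT delivered by the
            # main loop must not restart the interactive prompt.
            return
        self.interrupts_handled += 1
        self.terminate()
```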
-
Tristan Van Berkom authored
_scheduler/scheduler.py: Ignore interrupt events while terminating. See merge request !853
-
Tristan Van Berkom authored
Allow callers to decide where the temporary file will be created.
-
Tristan Van Berkom authored
Use the designated tempdir when creating refs; we expect that temporary files are never created in the storage directory, only in the designated temporary directory. This fixes race conditions where utils._get_dir_size() throws an unhandled exception when attempting to stat a file which inadvertently disappears.
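The write-via-tempdir pattern can be sketched as follows, assuming both directories live on the same filesystem so the rename is atomic (`save_ref` is a hypothetical helper, not BuildStream's code):

```python
import os
import tempfile

def save_ref(storage_dir, tmp_dir, name, data):
    """Write a ref via a temporary file outside the storage directory.

    The temp file lives in tmp_dir so that scans of storage_dir
    (e.g. a cache-size walk) never see a file that may disappear
    mid-walk; os.replace() then publishes it atomically.
    """
    fd, tmp_path = tempfile.mkstemp(dir=tmp_dir)
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
        os.replace(tmp_path, os.path.join(storage_dir, name))
    except BaseException:
        os.unlink(tmp_path)
        raise
```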
-
Tristan Van Berkom authored
This function assumes that files do not disappear while walking the given directory.
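A directory-size walk of the kind referred to might look like the sketch below. The `FileNotFoundError` handler shows the defensive variant; as the commit notes, the real function instead documents the assumption that files do not disappear, and the race is fixed at the write site:

```python
import os

def get_dir_size(path):
    """Total size in bytes of all files under path.

    Defensive sketch: a file may vanish between listing and stat()
    if another process writes temp files here, so a missing entry
    is skipped instead of raising.
    """
    total = 0
    for root, _, files in os.walk(path):
        for name in files:
            try:
                total += os.lstat(os.path.join(root, name)).st_size
            except FileNotFoundError:
                continue  # removed while walking; ignore
    return total
```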
-
Tristan Van Berkom authored
Tristan/fix cache size race 1.2 See merge request !855
-
Valentin David authored
The Python documentation is not clear on what shutil.rmtree is supposed to raise. However, from the source code it is obvious that OSError is raised, but never shutil.Error. This is not easy to test under normal conditions because the issue usually happens in combination with a FUSE filesystem, probably due to a race condition where `fstatat` returns an error just before the filesystem is unmounted. Fixes #153.
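In other words, cleanup code must catch `OSError` (`shutil.Error` is only raised by functions like `shutil.copytree`). A small sketch of the corrected pattern, with a hypothetical wrapper name:

```python
import shutil

def remove_dir(path):
    """Remove a directory tree, catching what rmtree really raises.

    shutil.rmtree() propagates OSError, not shutil.Error, so OSError
    is the exception to translate into a clean failure.
    """
    try:
        shutil.rmtree(path)
    except OSError as e:
        raise RuntimeError(f"Failed to remove {path}: {e}") from e
```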
-
Valentin David authored
Catch correct exception from shutil.rmtree [bst-1.2] See merge request !861
-
Tristan Van Berkom authored
-
Valentin David authored
-
Javier Jardón authored
Change URL to the Alpine tarball See merge request !881
-
Valentin David authored
The file is already a temporary file and does not need to be copied. ENOSPC was thrown during that copy in issue #609. Fixes #678.
-
Valentin David authored
There were some race conditions in the way we store artifacts in the server. Now we verify that we still have space on disk before every write. We also handle ENOSPC to reallocate space or, if we cannot, to fail the connection properly. This should help with #609.
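A space-checked write of the kind described might be sketched as follows; the helper name and the `reserve` headroom parameter are assumptions for illustration, not the server's real code:

```python
import errno
import os
import shutil

def write_blob(path, data, reserve=0):
    """Sketch: verify free disk space before writing, and turn ENOSPC
    into a clean failure instead of a half-written object.

    reserve is extra headroom (bytes) to keep free on the volume.
    """
    free = shutil.disk_usage(os.path.dirname(path) or ".").free
    if free < len(data) + reserve:
        raise OSError(errno.ENOSPC, "insufficient space for blob")
    tmp = path + ".tmp"
    try:
        with open(tmp, "wb") as f:
            f.write(data)
        os.replace(tmp, path)  # publish atomically
    except OSError:
        if os.path.exists(tmp):
            os.unlink(tmp)  # never leave a partial object behind
        raise  # caller fails the connection / triggers cleanup
```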
-
Valentin David authored
The issue was found while investigating #609. When the disk fills up, we prune non-referenced objects. Since this can happen during an upload, before a reference has been added for the uploaded objects, we could end up with an incomplete upload; further pulls would then result in a missing object error. The solution is to let 6 hours pass before an object may be pruned. Implementing this fix without the minimum-age "trick" would require a protocol change and dealing with concurrent pushes.
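The minimum-age check reduces to comparing an object's mtime against a grace period before the pruner may touch it. A sketch under assumed names:

```python
import os
import time

PRUNE_MIN_AGE = 6 * 60 * 60  # seconds; the 6-hour grace period

def prunable(path, now=None):
    """Sketch: an unreferenced object may only be pruned once it is
    older than the grace period, so blobs of an in-flight upload
    (whose ref is not yet written) cannot be deleted from under it."""
    now = time.time() if now is None else now
    return now - os.path.getmtime(path) >= PRUNE_MIN_AGE
```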
-
Valentin David authored
-
Valentin David authored
-
Valentin David authored
-
Valentin David authored
On fd.o SDK infrastructure, there is an issue where the cache server gets stuck. This allows printing the current stack of all threads. It applies only to the cache server.
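Dumping the stacks of all threads in a stuck process can be done from within Python via `sys._current_frames()`. A sketch of such a helper (which could, for example, be wired to a signal handler; the function name is illustrative):

```python
import sys
import threading
import traceback

def dump_all_stacks(out=sys.stderr):
    """Sketch: print the current Python stack of every live thread,
    useful for diagnosing a cache server that has stopped making
    progress."""
    frames = sys._current_frames()
    for thread in threading.enumerate():
        frame = frames.get(thread.ident)
        if frame is None:
            continue
        print(f"--- Thread {thread.name} ({thread.ident}) ---", file=out)
        traceback.print_stack(frame, file=out)
```

The standard library's `faulthandler.register()` offers a similar, lower-level dump if a C-level signal-based trigger is preferred.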