-
This seems like a better name for the directory, as it more closely describes the purpose of its contents.
-
-
Split the "The MANIFEST.in and setup.py" section in two: "Managing data files" and "Updating BuildStream's Python dependencies". Briefly explain the layout of the `requirements` directory and add instructions to use the Makefile added in the last commit.
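
As a rough illustration of how a `setup.py` can consume pinned dependencies from a `requirements` directory, here is a minimal sketch; the file names (`requirements.txt`, `dev-requirements.txt`) and the package name are assumptions for illustration, not necessarily BuildStream's actual layout.

```python
# Hedged sketch: a setup.py that reads pinned dependencies from files in a
# "requirements" directory. File names below are illustrative assumptions.
import os
from setuptools import setup, find_packages

HERE = os.path.dirname(os.path.abspath(__file__))


def read_requirements(name):
    # Each ".txt" file is assumed to hold fully pinned versions, one per line.
    path = os.path.join(HERE, "requirements", name)
    with open(path, encoding="utf-8") as f:
        return [line.strip() for line in f if line.strip() and not line.startswith("#")]


setup(
    name="example-package",  # placeholder, not the real package name
    packages=find_packages(),
    install_requires=read_requirements("requirements.txt"),
    extras_require={
        "dev": read_requirements("dev-requirements.txt"),
    },
)
```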
-
As we now run tests using `tox`, we don't need to worry about manually packing and unpacking BuildStream, so we can remove the prepare stage entirely. Update the `coverage` and nightly jobs to cope with this change appropriately. Both of these jobs now install all runtime dependencies from requirements files.
-
Fixes: e29aea36 ("Basic options for shell --build to use buildtrees")
-
Raoul Hidalgo Charman authored
Other components will start to rely on the CAS modules rather than the artifact cache modules, so the code should be organized to reflect this. All relevant imports have been changed. Part of #802
-
Raoul Hidalgo Charman authored
Part of #802
-
Raoul Hidalgo Charman authored
List of methods moved:
* Initialization check: made it a class method that is run in a subprocess, for when checking in the main BuildStream process.
* fetch_blobs
* send_blobs
* verify_digest_on_remote
* push_method

Part of #802
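
A minimal sketch of the "initialization check as a class method run in a subprocess" pattern described above; the `Remote` class and `check_remote` name are hypothetical and only illustrate the technique, not BuildStream's API.

```python
# Hedged sketch: run an initialization check in a child process so nothing
# probed during the check leaks into the main process.
import multiprocessing


class Remote:
    def __init__(self, url):
        self.url = url

    @classmethod
    def _check(cls, url, queue):
        # Runs in the child process: try to construct the remote (a real
        # implementation would open channels, probe capabilities, etc.) and
        # report any error string back to the parent through the queue.
        try:
            cls(url)
            queue.put(None)
        except Exception as e:  # report every failure to the parent
            queue.put(str(e))

    @classmethod
    def check_remote(cls, url):
        # Called from the main process: perform the check in a subprocess and
        # return None on success, or an error message otherwise.
        queue = multiprocessing.Queue()
        process = multiprocessing.Process(target=cls._check, args=(url, queue))
        process.start()
        error = queue.get()
        process.join()
        return error


if __name__ == "__main__":
    print(Remote.check_remote("https://cache.example.com"))
```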
-
Raoul Hidalgo Charman authored
This is a temporary directory within the artifact cache. Part of #802
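
For context, a small sketch of the pattern a tmpdir inside the cache enables, assuming the motivation is that files staged on the same filesystem can be renamed or linked into place without cross-device link errors (the pull changes below mention exactly such errors); all paths here are illustrative.

```python
# Illustrative only: stage a blob in a tmpdir that lives inside the cache
# directory, then move it into place. Because source and destination are on
# the same filesystem, the final rename is atomic and cannot fail with EXDEV
# ("invalid cross-device link"). The paths below are made up for the example.
import os
import tempfile

cache_root = os.path.expanduser("~/.cache/example-cas")
tmpdir = os.path.join(cache_root, "tmp")
os.makedirs(tmpdir, exist_ok=True)

with tempfile.NamedTemporaryFile(dir=tmpdir, delete=False) as f:
    f.write(b"blob contents")
    staged_path = f.name

final_path = os.path.join(cache_root, "objects", "example-blob")
os.makedirs(os.path.dirname(final_path), exist_ok=True)
os.replace(staged_path, final_path)
```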
-
Raoul Hidalgo Charman authored
Separates the pull logic into a remote/local API, so that the artifact cache iterates over blob digests, checks whether it already has them, and requests them if not. The request command allows batching of blobs where appropriate. Tests have been updated to ensure the correct tmpdir is set up in process wrappers; otherwise invalid cross-link errors occur in CI. Additional asserts have been added to check that the temporary directories are cleared by the end of a pull. Part of #802
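
A minimal sketch of the pull flow this describes, under assumed names (`LocalCAS`, `RemoteCAS`, `BATCH_SIZE`) that only illustrate the local/remote split and the batching of requests, not BuildStream's actual classes.

```python
# Hedged sketch: the cache walks the digests it needs, keeps the ones it
# already has locally, and fetches the missing ones from the remote in batches.
from typing import Dict, Iterable, List

BATCH_SIZE = 100  # assumed batch limit; real limits would come from the remote


class LocalCAS:
    def __init__(self):
        self._blobs: Dict[str, bytes] = {}

    def contains(self, digest: str) -> bool:
        return digest in self._blobs

    def store(self, digest: str, data: bytes) -> None:
        self._blobs[digest] = data


class RemoteCAS:
    def fetch_blobs(self, digests: List[str]) -> Dict[str, bytes]:
        # Placeholder for one batched download request to the remote server.
        return {digest: b"" for digest in digests}


def pull(local: LocalCAS, remote: RemoteCAS, required: Iterable[str]) -> None:
    # Only request digests the local cache does not already have.
    missing = [d for d in required if not local.contains(d)]
    # Request missing blobs in batches rather than one call per blob.
    for start in range(0, len(missing), BATCH_SIZE):
        batch = missing[start:start + BATCH_SIZE]
        for digest, data in remote.fetch_blobs(batch).items():
            local.store(digest, data)
```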
-
Raoul Hidalgo Charman authored
Similar to the pull methods, this implements a `yield_directory_digests` method that iterates over blobs in the local CAS, with `upload_blob` sending blobs to a remote and batching them where appropriate. Part of #802
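
Correspondingly, a hedged sketch of the push side: a generator that yields the digests reachable from a directory, and an uploader that batches blobs to the remote. Apart from the `yield_directory_digests` and `upload_blob` names taken from the message above, every name here is an assumption for illustration.

```python
# Hedged sketch: walk the digests that make up a directory in the local CAS
# and upload each blob to the remote, flushing them in batches.
from typing import Dict, Iterator, List

BATCH_SIZE = 100  # assumed batch limit


def yield_directory_digests(tree: Dict[str, List[str]], root: str) -> Iterator[str]:
    # "tree" maps a directory digest to the digests it references; walk it
    # depth-first and yield every digest reachable from the root.
    stack = [root]
    while stack:
        digest = stack.pop()
        yield digest
        stack.extend(tree.get(digest, []))


class RemotePusher:
    def __init__(self):
        self._pending: List[str] = []

    def upload_blob(self, digest: str) -> None:
        # Queue the blob; flush one batched upload once the batch is full.
        self._pending.append(digest)
        if len(self._pending) >= BATCH_SIZE:
            self.flush()

    def flush(self) -> None:
        if self._pending:
            # Placeholder for a single batched upload request to the remote.
            self._pending.clear()


def push(tree: Dict[str, List[str]], root: str, remote: RemotePusher) -> None:
    for digest in yield_directory_digests(tree, root):
        remote.upload_blob(digest)
    remote.flush()
```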