Compare revisions

Commits on Source (723)
Showing with 3415 additions and 736 deletions
@@ -4,11 +4,15 @@ include =
*/buildstream/*
omit =
# Omit profiling helper module
# Omit some internals
*/buildstream/_profile.py
*/buildstream/__main__.py
*/buildstream/_version.py
# Omit generated code
*/buildstream/_protos/*
*/.eggs/*
# Omit .tox directory
*/.tox/*
[report]
show_missing = True
@@ -13,10 +13,12 @@ tests/**/*.pyc
integration-cache/
tmp
.coverage
.coverage-reports/
.coverage.*
.cache
.pytest_cache/
*.bst/
.tox/
# Pycache, in case buildstream is run directly from within the source
# tree
@@ -34,3 +36,4 @@ doc/source/modules.rst
doc/source/buildstream.rst
doc/source/buildstream.*.rst
doc/build/
versioneer.pyc
image: buildstream/testsuite-debian:9-master-114-4cab18e3
image: buildstream/testsuite-debian:9-5da27168-32c47d1c
cache:
key: "$CI_JOB_NAME-"
@@ -6,44 +6,14 @@ cache:
- cache/
stages:
- prepare
- test
- post
#####################################################
# Prepare stage #
#####################################################
# Create a source distribution
#
source_dist:
stage: prepare
script:
# Generate the source distribution tarball
#
- python3 setup.py sdist
- tar -ztf dist/*
- tarball=$(cd dist && echo $(ls *))
# Verify that the source distribution tarball can be installed correctly
#
- pip3 install dist/*.tar.gz
- bst --version
# unpack tarball as `dist/buildstream` directory
- |
cat > dist/unpack.sh << EOF
#!/bin/sh
tar -zxf ${tarball}
mv ${tarball%.tar.gz} buildstream
EOF
# Make our helpers executable
- chmod +x dist/unpack.sh
artifacts:
paths:
- dist/
variables:
PYTEST_ADDOPTS: "--color=yes"
INTEGRATION_CACHE: "${CI_PROJECT_DIR}/cache/integration-cache"
TEST_COMMAND: "tox -- --color=yes --integration"
COVERAGE_PREFIX: "${CI_JOB_NAME}."
#####################################################
@@ -52,107 +22,150 @@ source_dist:
# Run premerge commits
#
.linux-tests-template: &linux-tests
.tests-template: &tests
stage: test
variables:
PYTEST_ADDOPTS: "--color=yes"
script:
before_script:
# Diagnostics
- mount
- df -h
script:
- mkdir -p "${INTEGRATION_CACHE}"
- useradd -Um buildstream
- chown -R buildstream:buildstream .
- export INTEGRATION_CACHE="$(pwd)/cache/integration-cache"
# Unpack and get into dist/buildstream
- cd dist && ./unpack.sh
- chown -R buildstream:buildstream buildstream
- cd buildstream
# Run the tests as a simple user to test for permission issues
- su buildstream -c "${TEST_COMMAND}"
# Run the tests from the source distribution, We run as a simple
# user to test for permission issues
- su buildstream -c 'python3 setup.py test --index-url invalid://uri --addopts --integration'
# Go back to the toplevel and collect our reports
- cd ../..
- mkdir -p coverage-linux/
- cp dist/buildstream/.coverage.* coverage-linux/coverage."${CI_JOB_NAME}"
after_script:
except:
- schedules
artifacts:
paths:
- coverage-linux/
- .coverage-reports
tests-debian-9:
image: buildstream/testsuite-debian:9-master-117-aa3a33b3
<<: *linux-tests
image: buildstream/testsuite-debian:9-5da27168-32c47d1c
<<: *tests
tests-fedora-27:
image: buildstream/testsuite-fedora:27-master-117-aa3a33b3
<<: *linux-tests
image: buildstream/testsuite-fedora:27-5da27168-32c47d1c
<<: *tests
tests-fedora-28:
image: buildstream/testsuite-fedora:28-master-117-aa3a33b3
<<: *linux-tests
image: buildstream/testsuite-fedora:28-5da27168-32c47d1c
<<: *tests
tests-ubuntu-18.04:
image: buildstream/testsuite-ubuntu:18.04-master-117-aa3a33b3
<<: *linux-tests
image: buildstream/testsuite-ubuntu:18.04-5da27168-32c47d1c
<<: *tests
tests-python-3.7-stretch:
image: buildstream/testsuite-python:3.7-stretch-a60f0c39
<<: *tests
variables:
# Note that we explicitly specify TOXENV in this case because this
# image has both 3.6 and 3.7 versions. python3.6 cannot be removed because
# some of our base dependencies declare it as their runtime dependency.
TOXENV: py37
tests-centos-7.6:
<<: *tests
image: buildstream/testsuite-centos:7.6-5da27168-32c47d1c
overnight-fedora-28-aarch64:
image: buildstream/testsuite-fedora:aarch64-28-5da27168-32c47d1c
tags:
- aarch64
<<: *tests
# We need to override the exclusion from the template
# in order to run on schedules
except: []
only:
- schedules
before_script:
# grpcio needs to be compiled from source on aarch64 so we additionally
# need a C++ compiler here.
# FIXME: Ideally this would be provided by the base image. This will be
# unblocked by https://gitlab.com/BuildStream/buildstream-docker-images/issues/34
- dnf install -y gcc-c++
tests-unix:
# Use fedora here, to a) run a test on fedora and b) ensure that we
# can get rid of ostree - this is not possible with debian-8
image: buildstream/testsuite-fedora:27-master-117-aa3a33b3
stage: test
image: buildstream/testsuite-fedora:27-5da27168-32c47d1c
<<: *tests
variables:
BST_FORCE_BACKEND: "unix"
PYTEST_ADDOPTS: "--color=yes"
script:
- export INTEGRATION_CACHE="$(pwd)/cache/integration-cache"
# We remove the Bubblewrap and OSTree packages here so that we catch any
# codepaths that try to use them. Removing OSTree causes fuse-libs to
# disappear unless we mark it as user-installed.
- dnf mark install fuse-libs
- dnf erase -y bubblewrap ostree
# Since the unix platform is required to run as root, no user change required
- ${TEST_COMMAND}
tests-fedora-missing-deps:
# Ensure that tests behave nicely while missing bwrap and ostree
image: buildstream/testsuite-fedora:28-5da27168-32c47d1c
<<: *tests
script:
# We remove the Bubblewrap and OSTree packages here so that we catch any
# codepaths that try to use them. Removing OSTree causes fuse-libs to
# disappear unless we mark it as user-installed.
- dnf mark install fuse-libs
- dnf erase -y bubblewrap ostree
# Unpack and get into dist/buildstream
- cd dist && ./unpack.sh && cd buildstream
- useradd -Um buildstream
- chown -R buildstream:buildstream .
# Since the unix platform is required to run as root, no user change required
- python3 setup.py test --index-url invalid://uri --addopts --integration
- ${TEST_COMMAND}
tests-fedora-update-deps:
# Check if the tests pass after updating requirements to their latest
# allowed version.
allow_failure: true
image: buildstream/testsuite-fedora:28-5da27168-32c47d1c
<<: *tests
script:
- useradd -Um buildstream
- chown -R buildstream:buildstream .
- make --always-make --directory requirements
- cat requirements/*.txt
- su buildstream -c "${TEST_COMMAND}"
# Go back to the toplevel and collect our reports
- cd ../..
- mkdir -p coverage-unix/
- cp dist/buildstream/.coverage.* coverage-unix/coverage.unix
# Lint separately from testing
lint:
stage: test
before_script:
# Diagnostics
- python3 --version
script:
- tox -e lint
except:
- schedules
artifacts:
paths:
- coverage-unix/
- logs-unix/
# Automatically build documentation for every commit, we want to know
# if building documentation fails even if we're not deploying it.
# Note: We still do not enforce a consistent installation of python3-sphinx,
# as it will significantly grow the backing image.
docs:
stage: test
variables:
BST_FORCE_SESSION_REBUILD: 1
script:
- export BST_SOURCE_CACHE="$(pwd)/cache/integration-cache/sources"
# Currently sphinx_rtd_theme does not support Sphinx >1.8, this breaks search functionality
- pip3 install sphinx==1.7.9
- pip3 install sphinx-click
- pip3 install sphinx_rtd_theme
- cd dist && ./unpack.sh && cd buildstream
- make BST_FORCE_SESSION_REBUILD=1 -C doc
- cd ../..
- mv dist/buildstream/doc/build/html public
- env BST_SOURCE_CACHE="$(pwd)/cache/integration-cache/sources" tox -e docs
- mv doc/build/html public
except:
- schedules
artifacts:
@@ -163,13 +176,23 @@ docs:
stage: test
variables:
BST_EXT_URL: git+https://gitlab.com/BuildStream/bst-external.git
BST_EXT_REF: 1d6ab71151b93c8cbc0a91a36ffe9270f3b835f1 # 0.5.1
FD_SDK_REF: 88d7c22c2281b987faa02edd57df80d430eecf1f # 18.08.11-35-g88d7c22c
BST_EXT_REF: 0.9.0-0-g63a19e8068bd777bd9cd59b1a9442f9749ea5a85
FD_SDK_REF: freedesktop-sdk-18.08.25-0-g250939d465d6dd7768a215f1fa59c4a3412fc337
before_script:
- (cd dist && ./unpack.sh && cd buildstream && pip3 install .)
- |
mkdir -p "${HOME}/.config"
cat <<EOF >"${HOME}/.config/buildstream.conf"
scheduler:
fetchers: 2
EOF
- pip3 install -r requirements/requirements.txt -r requirements/plugin-requirements.txt
- pip3 install --no-index .
- pip3 install --user -e ${BST_EXT_URL}@${BST_EXT_REF}#egg=bst_ext
- git clone https://gitlab.com/freedesktop-sdk/freedesktop-sdk.git
- git -C freedesktop-sdk checkout ${FD_SDK_REF}
artifacts:
paths:
- "${HOME}/.cache/buildstream/logs"
only:
- schedules
@@ -249,30 +272,28 @@ coverage:
stage: post
coverage: '/TOTAL +\d+ +\d+ +(\d+\.\d+)%/'
script:
- cd dist && ./unpack.sh && cd buildstream
- pip3 install --no-index .
- mkdir report
- cd report
- cp ../../../coverage-unix/coverage.unix .
- cp ../../../coverage-linux/coverage.* .
- ls coverage.*
- coverage combine --rcfile=../.coveragerc -a coverage.*
- coverage report --rcfile=../.coveragerc -m
- cp -a .coverage-reports/ ./coverage-sources
- tox -e coverage
- cp -a .coverage-reports/ ./coverage-report
dependencies:
- tests-debian-9
- tests-fedora-27
- tests-fedora-28
- tests-fedora-missing-deps
- tests-ubuntu-18.04
- tests-unix
- source_dist
except:
- schedules
artifacts:
paths:
- coverage-sources/
- coverage-report/
# Deploy, only for merges which land on master branch.
#
pages:
stage: post
dependencies:
- source_dist
- docs
variables:
ACME_DIR: public/.well-known/acme-challenge
@@ -97,7 +97,13 @@ a new merge request. You can also `create a merge request for an existing branch
You may open merge requests for the branches you create before you are ready
to have them reviewed and considered for inclusion if you like. Until your merge
request is ready for review, the merge request title must be prefixed with the
``WIP:`` identifier.
``WIP:`` identifier. GitLab `treats this specially
<https://docs.gitlab.com/ee/user/project/merge_requests/work_in_progress_merge_requests.html>`_,
which helps reviewers.
Consider marking a merge request as WIP again if you are taking a while to
address a review point. This signals that the next action is on you, and it
won't appear in a reviewer's search for non-WIP merge requests to review.
Organized commits
@@ -122,6 +128,12 @@ If a commit in your branch modifies behavior such that a test must also
be changed to match the new behavior, then the tests should be updated
with the same commit, so that every commit passes its own tests.
These principles apply whenever a branch is non-WIP. So for example, don't push
'fixup!' commits when addressing review comments; instead, amend the commits
directly before pushing. GitLab has `good support
<https://docs.gitlab.com/ee/user/project/merge_requests/versions.html>`_ for
diffing between pushes, so 'fixup!' commits are not necessary for reviewers.
Commit messages
~~~~~~~~~~~~~~~
@@ -144,6 +156,16 @@ number must be referenced in the commit message.
Fixes #123
Note that the 'why' of a change is as important as the 'what'.
When reviewing this, folks can suggest better alternatives when they know the
'why'. Perhaps there are other ways to avoid an error when things are not
frobnicated.
When folks modify this code, there may be uncertainty around whether the foos
should always be frobnicated. The comments, the commit message, and issue #123
should shed some light on that.
In the case that you have a commit which necessarily modifies multiple
components, then the summary line should still mention generally what
changed (if possible), followed by a colon and a brief summary.
@@ -531,7 +553,7 @@ One problem which arises from this is that we end up having symbols
which are *public* according to the :ref:`rules discussed in the previous section
<contributing_public_and_private>`, but must be hidden away from the
*"Public API Surface"*. For example, BuildStream internal classes need
to invoke methods on the ``Element`` and ``Source`` classes, wheras these
to invoke methods on the ``Element`` and ``Source`` classes, whereas these
methods need to be hidden from the *"Public API Surface"*.
This is where BuildStream deviates from the PEP-8 standard for public
@@ -609,7 +631,7 @@ An element plugin will derive from Element by importing::
from buildstream import Element
When importing utilities specifically, dont import function names
When importing utilities specifically, don't import function names
from there, instead import the module itself::
from . import utils
@@ -715,7 +737,7 @@ Abstract methods
~~~~~~~~~~~~~~~~
In BuildStream, an *"Abstract Method"* is a bit of a misnomer and does
not match up to how Python defines abstract methods, we need to seek out
a new nomanclature to refer to these methods.
a new nomenclature to refer to these methods.
In Python, an *"Abstract Method"* is a method which **must** be
implemented by a subclass, whereas all methods in Python can be
@@ -938,7 +960,7 @@ possible, and avoid any cyclic relationships in modules.
For instance, the ``Source`` objects are owned by ``Element``
objects in the BuildStream data model, and as such the ``Element``
will delegate some activities to the ``Source`` objects in its
possesion. The ``Source`` objects should however never call functions
possession. The ``Source`` objects should however never call functions
on the ``Element`` object, nor should the ``Source`` object itself
have any understanding of what an ``Element`` is.
@@ -1200,27 +1222,13 @@ For further information about using the reStructuredText with sphinx, please see
Building Docs
~~~~~~~~~~~~~
The documentation build is not integrated into the ``setup.py`` and is
difficult (or impossible) to do so, so there is a little bit of setup
you need to take care of first.
Before you can build the BuildStream documentation yourself, you need
to first install ``sphinx`` along with some additional plugins and dependencies,
using pip or some other mechanism::
# Install sphinx
pip3 install --user sphinx
# Install some sphinx extensions
pip3 install --user sphinx-click
pip3 install --user sphinx_rtd_theme
# Additional optional dependencies required
pip3 install --user arpy
Before you can build the docs, you will need to ensure that you have installed
the required :ref:`build dependencies <contributing_build_deps>` as mentioned
in the testing section above.
To build the documentation, just run the following::
make -C doc
tox -e docs
This will give you a ``doc/build/html`` directory with the html docs which
you can view in your browser locally to test.
@@ -1238,9 +1246,10 @@ will make the docs build reuse already downloaded sources::
export BST_SOURCE_CACHE=~/.cache/buildstream/sources
To force rebuild session html while building the doc, simply build the docs like this::
To force rebuild session html while building the doc, simply run `tox` with the
``BST_FORCE_SESSION_REBUILD`` environment variable set, like so::
make BST_FORCE_SESSION_REBUILD=1 -C doc
env BST_FORCE_SESSION_REBUILD=1 tox -e docs
Man pages
@@ -1250,14 +1259,9 @@ into the ``setup.py``, as such, whenever the frontend command line
interface changes, the static man pages should be regenerated and
committed with that.
To do this, first ensure you have ``click_man`` installed, possibly
with::
To do this, run the following from the toplevel directory of BuildStream::
pip3 install --user click_man
Then, in the toplevel directory of buildstream, run the following::
python3 setup.py --command-packages=click_man.commands man_pages
tox -e man
And commit the result, ensuring that you have added anything in
the ``man/`` subdirectory, which will be automatically included
@@ -1356,7 +1360,7 @@ Structure of an example
'''''''''''''''''''''''
The :ref:`tutorial <tutorial>` and the :ref:`examples <examples>` sections
of the documentation contain a series of sample projects, each chapter in
the tutoral, or standalone example uses a sample project.
the tutorial, or standalone example uses a sample project.
Here is the structure for adding new examples and tutorial chapters.
@@ -1446,63 +1450,159 @@ regenerate them locally in order to build the docs.
Testing
-------
BuildStream uses pytest for regression tests and testing out
the behavior of newly added components.
BuildStream uses `tox <https://tox.readthedocs.org/>`_ as a frontend to run the
tests which are implemented using `pytest <https://pytest.org/>`_. We use
pytest for regression tests and testing out the behavior of newly added
components.
The elaborate documentation for pytest can be found here: http://doc.pytest.org/en/latest/contents.html
Don't get lost in the docs if you don't need to, follow existing examples instead.
.. _contributing_build_deps:
Installing build dependencies
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Some of BuildStream's dependencies have non-python build dependencies. When
running tests with ``tox``, you will first need to install these dependencies.
Exact steps to install these will depend on your operating system. Commands
for installing them for some common distributions are listed below.
For Fedora-based systems::
dnf install gcc pkg-config python3-devel cairo-gobject-devel glib2-devel gobject-introspection-devel
For Debian-based systems::
apt install gcc pkg-config python3-dev libcairo2-dev libgirepository1.0-dev
Running tests
~~~~~~~~~~~~~
To run the tests, just type::
To run the tests, simply navigate to the toplevel directory of your BuildStream
checkout and run::
./setup.py test
tox
By default, the test suite will be run against every supported python version
found on your host. If you have multiple python versions installed, you may
want to run tests against only one version and you can do that using the ``-e``
option when running tox::
tox -e py37
If you would like to test and lint at the same time, or if you do have multiple
python versions installed and would like to test against multiple versions, then
we recommend using `detox <https://github.com/tox-dev/detox>`_, just run it with
the same arguments you would give `tox`::
detox -e lint,py36,py37
At the toplevel.
Linting is performed separately from testing. In order to run the linting step which
consists of running the ``pycodestyle`` and ``pylint`` tools, run the following::
When debugging a test, it can be desirable to see the stdout
and stderr generated by a test, to do this use the ``--addopts``
function to feed arguments to pytest as such::
tox -e lint
./setup.py test --addopts -s
.. tip::
The project specific pylint and pycodestyle configurations are stored in the
toplevel buildstream directory in the ``.pylintrc`` file and ``setup.cfg`` files
respectively. These configurations can be interesting to use with IDEs and
other developer tooling.
The output of all failing tests will always be printed in the summary, but
if you want to observe the stdout and stderr generated by a passing test,
you can pass the ``-s`` option to pytest as such::
tox -- -s
.. tip::
The ``-s`` option is `a pytest option <https://docs.pytest.org/latest/usage.html>`_.
Any options specified before the ``--`` separator are consumed by ``tox``,
and any options after the ``--`` separator will be passed along to pytest.
You can always abort on the first failure by running::
./setup.py test --addopts -x
tox -- -x
Similarly, you may also be interested in the ``--last-failed`` and
``--failed-first`` options as per the
`pytest cache <https://docs.pytest.org/en/latest/cache.html>`_ documentation.
If you want to run a specific test or a group of tests, you
can specify a prefix to match. E.g. if you want to run all of
the frontend tests you can do::
./setup.py test --addopts 'tests/frontend/'
tox -- tests/frontend/
Specific tests can be chosen by using the :: delimeter after the test module.
Specific tests can be chosen by using the :: delimiter after the test module.
If you wanted to run the test_build_track test within frontend/buildtrack.py you could do::
./setup.py test --addopts 'tests/frontend/buildtrack.py::test_build_track'
tox -- tests/frontend/buildtrack.py::test_build_track
When running only a few tests, you may find the coverage and timing output
excessive; there are options to trim them. Note that the coverage step will fail.
Here is an example::
tox -- --no-cov --durations=1 tests/frontend/buildtrack.py::test_build_track
We also have a set of slow integration tests that are disabled by
default - you will notice most of them marked with SKIP in the pytest
output. To run them, you can use::
./setup.py test --addopts '--integration'
tox -- --integration
In case BuildStream's dependencies were updated since you last ran the
tests, you might see some errors like
``pytest: error: unrecognized arguments: --codestyle``. If this happens, you
will need to force ``tox`` to recreate the test environment(s). To do so, you
can run ``tox`` with the ``-r`` or ``--recreate`` option.
.. note::
By default, we do not allow use of site packages in our ``tox``
configuration to enable running the tests in an isolated environment.
If you need to enable use of site packages for whatever reason, you can
do so by passing the ``--sitepackages`` option to ``tox``. Also, you will
not need to install any of the build dependencies mentioned above if you
use this approach.
.. note::
While using ``tox`` is practical for developers running tests in
more predictable execution environments, it is still possible to
execute the test suite against a specific installation environment
using pytest directly::
./setup.py test
Specific options can be passed to ``pytest`` using the ``--addopts``
option::
./setup.py test --addopts 'tests/frontend/buildtrack.py::test_build_track'
Observing coverage
~~~~~~~~~~~~~~~~~~
Once you have run the tests using `tox` (or `detox`), some coverage reports will
have been left behind.
By default, buildstream also runs pylint on all files. Should you want
to run just pylint (these checks are a lot faster), you can do so
with::
To view the coverage report of the last test run, simply run::
./setup.py test --addopts '-m pylint'
tox -e coverage
Alternatively, any IDE plugin that uses pytest should automatically
detect the ``.pylintrc`` in the project's root directory.
This will collate any reports from separate python environments that may be
under test before displaying the combined coverage.
Adding tests
~~~~~~~~~~~~
Tests are found in the tests subdirectory, inside of which
there is a separarate directory for each *domain* of tests.
there is a separate directory for each *domain* of tests.
All tests are collected as::
tests/*/*.py
@@ -1525,6 +1625,51 @@ Tests that run a sandbox should be decorated with::
and use the integration cli helper.
You must test your changes in an end-to-end fashion. Consider the first end to
be the appropriate user interface, and the other end to be the change you have
made.
The aim for our tests is to make assertions about how you impact and define the
outward user experience. You should be able to exercise all code paths via the
user interface, just as one can test the strength of rivets by sailing dozens
of ocean liners. Keep in mind that your ocean liners could be sailing properly
*because* of a malfunctioning rivet. End-to-end testing will warn you that
fixing the rivet will sink the ships.
The primary user interface is the cli, so that should be the first target 'end'
for testing. Most of the value of BuildStream comes from what you can achieve
with the cli.
We also have what we call a *"Public API Surface"*, as previously mentioned in
:ref:`contributing_documenting_symbols`. You should consider this a secondary
target. This is mainly for advanced users to implement their plugins against.
Note that both of these targets for testing are guaranteed to continue working
in the same way across versions. This means that tests written in terms of them
will be robust to large changes to the code. This important property means that
BuildStream developers can make large refactorings without needing to rewrite
fragile tests.
Another user to consider is the BuildStream developer, therefore internal API
surfaces are also targets for testing. For example the YAML loading code, and
the CasCache. Remember that these surfaces are still just a means to the end of
providing value through the cli and the *"Public API Surface"*.
It may be impractical to sufficiently examine some changes in an end-to-end
fashion. The number of cases to test, and the running time of each test, may be
too high. Such typically low-level things, e.g. parsers, may also be tested
with unit tests, alongside the mandatory end-to-end tests.
It is important to write unit tests that are not fragile, i.e. in such a way
that they do not break due to changes unrelated to what they are meant to test.
For example, if the test relies on a lot of BuildStream internals, a large
refactoring will likely require the test to be rewritten. Pure functions that
only rely on the Python Standard Library are excellent candidates for unit
testing.
Unit tests only make it easier to implement things correctly, end-to-end tests
make it easier to implement the right thing.
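For a purely hypothetical sketch of what such a unit test can look like (the
helper function below is invented for illustration and is not part of
BuildStream), a pure function with no dependencies outside the language itself
can be exercised directly with pytest::

  # A hypothetical pure function and its unit test, for illustration only.
  def _human_readable_size(num_bytes):
      for unit in ('B', 'K', 'M', 'G'):
          if num_bytes < 1024:
              return '{}{}'.format(num_bytes, unit)
          num_bytes //= 1024
      return '{}T'.format(num_bytes)

  def test_human_readable_size():
      assert _human_readable_size(500) == '500B'
      assert _human_readable_size(2048) == '2K'

Such tests need no sandbox, no project fixtures and no cli runner, so they are
cheap to run and hard to break by accident.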
Measuring performance
---------------------
@@ -1616,10 +1761,8 @@ obtain profiles::
ForceCommand BST_PROFILE=artifact-receive cd /tmp && bst-artifact-receive --pull-url https://example.com/ /home/artifacts/artifacts
The MANIFEST.in and setup.py
----------------------------
When adding a dependency to BuildStream, it's important to update the setup.py accordingly.
Managing data files
-------------------
When adding data files which need to be discovered at runtime by BuildStream, update setup.py accordingly.
When adding data files for the purpose of docs or tests, or anything that is not covered by
@@ -1629,3 +1772,23 @@ At any time, running the following command to create a source distribution shoul
creating a tarball which contains everything we want it to include::
./setup.py sdist
Updating BuildStream's Python dependencies
------------------------------------------
BuildStream's Python dependencies are listed in multiple
`requirements files <https://pip.readthedocs.io/en/latest/reference/pip_install/#requirements-file-format>`_
present in the ``requirements`` directory.
All ``.txt`` files in this directory are generated from the corresponding
``.in`` file, and each ``.in`` file represents a set of dependencies. For
example, ``requirements.in`` contains all runtime dependencies of BuildStream.
``requirements.txt`` is generated from it, and contains pinned versions of all
runtime dependencies (including transitive dependencies) of BuildStream.
When adding a new dependency to BuildStream, or updating existing dependencies,
it is important to update the appropriate requirements file accordingly. After
changing the ``.in`` file, run the following to update the matching ``.txt``
file::
make -C requirements
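As a purely hypothetical illustration (the version numbers below are invented
for the example and are not taken from the real requirements files), a loose
constraint in ``requirements.in`` such as::

  grpcio >= 1.10

would be frozen by ``make -C requirements`` into a pinned entry in the
corresponding ``requirements.txt``, along the lines of::

  grpcio==1.17.0

The pinned ``.txt`` files are what the CI jobs install (see the
``pip3 install -r requirements/requirements.txt`` step in ``.gitlab-ci.yml``),
which keeps runs reproducible while the ``.in`` files record the intended
constraints.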
@@ -8,19 +8,36 @@ include README.rst
# Documentation package includes
include doc/Makefile
include doc/badges.py
include doc/bst2html.py
include doc/source/conf.py
include doc/source/index.rst
include doc/source/plugin.rsttemplate
recursive-include doc/source *.rst
recursive-include doc/source *.py
recursive-include doc/source *.in
recursive-include doc/source *.html
recursive-include doc/source *.odg
recursive-include doc/source *.svg
recursive-include doc/examples *
recursive-include doc/sessions *.run
# Tests
recursive-include tests *.py
recursive-include tests *.yaml
recursive-include tests *.bst
recursive-include tests *.conf
recursive-include tests *.sh
recursive-include tests *.expected
recursive-include tests *
include conftest.py
include tox.ini
include .coveragerc
include .pylintrc
# Protocol Buffers
recursive-include buildstream/_protos *.proto
# Requirements files
include dev-requirements.txt
include requirements/requirements.in
include requirements/requirements.txt
include requirements/dev-requirements.in
include requirements/dev-requirements.txt
include requirements/plugin-requirements.in
include requirements/plugin-requirements.txt
# Versioneer
include versioneer.py
@@ -2,6 +2,59 @@
buildstream 1.3.1
=================
o BREAKING CHANGE: The top level commands `checkout`, `push` and `pull` have
been moved to the `bst artifact` subcommand group and are now obsolete.
For example, you must now use `bst artifact pull hello.bst`.
The behaviour of `checkout` has changed. The previously mandatory LOCATION
argument should now be specified with the `--directory` option. In addition
to this, `--tar` is no longer a flag, it is a mutually incompatible option
to `--directory`. For example, `bst artifact checkout foo.bst --tar foo.tar.gz`.
o Added `bst artifact log` subcommand for viewing build logs.
o BREAKING CHANGE: The bst source-bundle command has been removed. The
functionality it provided has been replaced by the `--include-build-scripts`
option of the `bst source-checkout` command. To produce a tarball containing
an element's sources and generated build scripts you can do the command
`bst source-checkout --include-build-scripts --tar foo.bst some-file.tar`
o BREAKING CHANGE: `bst track` and `bst fetch` commands are now obsolete.
Their functionality is provided by `bst source track` and
`bst source fetch` respectively.
o Added new `bst source checkout` command to checkout sources of an element.
o BREAKING CHANGE: Default strip-commands have been removed as they are too
specific. If you are building on Linux, the recommendation is to use the ones
used in the freedesktop-sdk project, for example
o Running commands without elements specified will now attempt to use
the default targets defined in the project configuration.
If no default target is defined, all elements in the project will be used.
o All elements must now be suffixed with `.bst`
Attempting to use an element that does not have the `.bst` extension,
will result in a warning.
o BREAKING CHANGE: The 'manual' element lost its default 'MAKEFLAGS' and 'V'
environment variables. There is already a 'make' element with the same
variables. Note that this is a breaking change, it will require users to
make changes to their .bst files if they are expecting these environment
variables to be set.
o BREAKING CHANGE: The 'auto-init' functionality has been removed. This would
offer to create a project in the event that bst was run against a directory
without a project, to be friendly to new users. It has been replaced with
an error message and a hint instead, to avoid bothering folks that just
made a mistake.
o BREAKING CHANGE: The unconditional 'Are you sure?' prompts have been
removed. These would always ask you if you were sure when running
'bst workspace close --remove-dir' or 'bst workspace reset'. They got in
the way too often.
o Failed builds are included in the cache as well.
`bst checkout` will provide anything in `%{install-root}`.
A build including cached failures will cause any dependent elements
@@ -31,6 +84,49 @@ buildstream 1.3.1
new the `conf-root` variable to make the process easier. And there has been
a bug fix to workspaces so they can be built in workspaces too.
o Creating a build shell through the interactive mode or `bst shell --build`
will now use the cached build tree if available locally. It is now easier to
debug local build failures.
o `bst shell --sysroot` now takes any directory that contains a sysroot,
instead of just a specially-formatted build-root with a `root` and `scratch`
subdirectory.
o Due to the element `build tree` being cached in the respective artifact, its
size in some cases has significantly increased. In *most* cases the build trees
are not utilised when building targets, so by default bst 'pull' & 'build'
will not fetch build trees from remotes. This behaviour can be overridden with
the cli main option '--pull-buildtrees', or the user configuration cache group
option 'pull-buildtrees = True'. The override will also add the build tree to
already cached artifacts. When attempting to populate an artifactcache server
with cached artifacts, only 'complete' elements can be pushed. If the element
is expected to have a populated build tree then it must be cached before pushing.
o `bst workspace open` now supports the creation of multiple elements and
allows the user to set a default location for their creation. This has meant
that the new CLI is no longer backwards compatible with buildstream 1.2.
o Add sandbox API for command batching and use it for build, script, and
compose elements.
o BREAKING CHANGE: The `git` plugin does not create a local `.git`
repository by default. If `git describe` is required to work, the
plugin now has a tag tracking feature instead. This can be enabled
by setting 'track-tags'.
o Opening a workspace now creates a .bstproject.yaml file that allows buildstream
commands to be run from a workspace that is not inside a project.
o Specifying an element is now optional for some commands when buildstream is run
from inside a workspace - the 'build', 'checkout', 'fetch', 'pull', 'push',
'shell', 'show', 'source-checkout', 'track', 'workspace close' and 'workspace reset'
commands are affected.
o bst 'build' now has '--remote, -r' option, inline with bst 'push' & 'pull'.
Providing a remote will limit build's pull/push remote actions to the given
remote specifically, ignoring those defined via user or project configuration.
=================
buildstream 1.1.5
=================
@@ -16,6 +16,9 @@ About
.. image:: https://img.shields.io/pypi/v/BuildStream.svg
:target: https://pypi.org/project/BuildStream
.. image:: https://app.fossa.io/api/projects/git%2Bgitlab.com%2FBuildStream%2Fbuildstream.svg?type=shield
:target: https://app.fossa.io/projects/git%2Bgitlab.com%2FBuildStream%2Fbuildstream?ref=badge_shield
What is BuildStream?
====================
@@ -27,10 +27,15 @@ if "_BST_COMPLETION" not in os.environ:
del get_versions
from .utils import UtilError, ProgramNotFoundError
from .sandbox import Sandbox, SandboxFlags
from .types import Scope, Consistency
from .sandbox import Sandbox, SandboxFlags, SandboxCommandError
from .types import Scope, Consistency, CoreWarnings
from .plugin import Plugin
from .source import Source, SourceError, SourceFetcher
from .element import Element, ElementError
from .buildelement import BuildElement
from .scriptelement import ScriptElement
# XXX We are exposing a private member here as we expect it to move to a
# separate package soon. See the following discussion for more details:
# https://gitlab.com/BuildStream/buildstream/issues/739#note_124819869
from ._gitsourcebase import _GitSourceBase
@@ -17,4 +17,5 @@
# Authors:
# Tristan Van Berkom <tristan.vanberkom@codethink.co.uk>
from .artifactcache import ArtifactCache, ArtifactCacheSpec, CACHE_SIZE_FILE
from .cascache import CASCache
from .casremote import CASRemote, CASRemoteSpec
from collections import namedtuple
import io
import os
import multiprocessing
import signal
from urllib.parse import urlparse
import uuid
import grpc
from .. import _yaml
from .._protos.google.rpc import code_pb2
from .._protos.google.bytestream import bytestream_pb2, bytestream_pb2_grpc
from .._protos.build.bazel.remote.execution.v2 import remote_execution_pb2, remote_execution_pb2_grpc
from .._protos.buildstream.v2 import buildstream_pb2, buildstream_pb2_grpc
from .._exceptions import CASRemoteError, LoadError, LoadErrorReason
from .. import _signals
from .. import utils
# The default limit for gRPC messages is 4 MiB.
# Limit payload to 1 MiB to leave sufficient headroom for metadata.
_MAX_PAYLOAD_BYTES = 1024 * 1024
class CASRemoteSpec(namedtuple('CASRemoteSpec', 'url push server_cert client_key client_cert instance_name')):
# _new_from_config_node
#
# Creates an CASRemoteSpec() from a YAML loaded node
#
@staticmethod
def _new_from_config_node(spec_node, basedir=None):
_yaml.node_validate(spec_node, ['url', 'push', 'server-cert', 'client-key', 'client-cert', 'instance_name'])
url = _yaml.node_get(spec_node, str, 'url')
push = _yaml.node_get(spec_node, bool, 'push', default_value=False)
if not url:
provenance = _yaml.node_get_provenance(spec_node, 'url')
raise LoadError(LoadErrorReason.INVALID_DATA,
"{}: empty artifact cache URL".format(provenance))
instance_name = _yaml.node_get(spec_node, str, 'instance_name', default_value=None)
server_cert = _yaml.node_get(spec_node, str, 'server-cert', default_value=None)
if server_cert and basedir:
server_cert = os.path.join(basedir, server_cert)
client_key = _yaml.node_get(spec_node, str, 'client-key', default_value=None)
if client_key and basedir:
client_key = os.path.join(basedir, client_key)
client_cert = _yaml.node_get(spec_node, str, 'client-cert', default_value=None)
if client_cert and basedir:
client_cert = os.path.join(basedir, client_cert)
if client_key and not client_cert:
provenance = _yaml.node_get_provenance(spec_node, 'client-key')
raise LoadError(LoadErrorReason.INVALID_DATA,
"{}: 'client-key' was specified without 'client-cert'".format(provenance))
if client_cert and not client_key:
provenance = _yaml.node_get_provenance(spec_node, 'client-cert')
raise LoadError(LoadErrorReason.INVALID_DATA,
"{}: 'client-cert' was specified without 'client-key'".format(provenance))
return CASRemoteSpec(url, push, server_cert, client_key, client_cert, instance_name)
CASRemoteSpec.__new__.__defaults__ = (None, None, None, None)
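# Hypothetical usage sketch (illustration only, not part of BuildStream):
# CASRemoteSpec is a plain namedtuple, so a fully resolved spec can also be
# constructed directly. The URL and certificate paths below are made up; in
# normal operation specs are built by _new_from_config_node() from the user
# or project configuration.
def _example_remote_spec():
    return CASRemoteSpec(
        url='https://cas.example.com:11002',
        push=True,
        server_cert='/etc/buildstream/certs/server.crt',
        client_key='/etc/buildstream/certs/client.key',
        client_cert='/etc/buildstream/certs/client.crt',
        instance_name=None,
    )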
class BlobNotFound(CASRemoteError):
def __init__(self, blob, msg):
self.blob = blob
super().__init__(msg)
# Represents a single remote CAS cache.
#
class CASRemote():
def __init__(self, spec):
self.spec = spec
self._initialized = False
self.channel = None
self.bytestream = None
self.cas = None
self.ref_storage = None
self.batch_update_supported = None
self.batch_read_supported = None
self.capabilities = None
self.max_batch_total_size_bytes = None
def init(self):
if not self._initialized:
url = urlparse(self.spec.url)
if url.scheme == 'http':
port = url.port or 80
self.channel = grpc.insecure_channel('{}:{}'.format(url.hostname, port))
elif url.scheme == 'https':
port = url.port or 443
if self.spec.server_cert:
with open(self.spec.server_cert, 'rb') as f:
server_cert_bytes = f.read()
else:
server_cert_bytes = None
if self.spec.client_key:
with open(self.spec.client_key, 'rb') as f:
client_key_bytes = f.read()
else:
client_key_bytes = None
if self.spec.client_cert:
with open(self.spec.client_cert, 'rb') as f:
client_cert_bytes = f.read()
else:
client_cert_bytes = None
credentials = grpc.ssl_channel_credentials(root_certificates=server_cert_bytes,
private_key=client_key_bytes,
certificate_chain=client_cert_bytes)
self.channel = grpc.secure_channel('{}:{}'.format(url.hostname, port), credentials)
else:
raise CASRemoteError("Unsupported URL: {}".format(self.spec.url))
self.bytestream = bytestream_pb2_grpc.ByteStreamStub(self.channel)
self.cas = remote_execution_pb2_grpc.ContentAddressableStorageStub(self.channel)
self.capabilities = remote_execution_pb2_grpc.CapabilitiesStub(self.channel)
self.ref_storage = buildstream_pb2_grpc.ReferenceStorageStub(self.channel)
self.max_batch_total_size_bytes = _MAX_PAYLOAD_BYTES
try:
request = remote_execution_pb2.GetCapabilitiesRequest()
response = self.capabilities.GetCapabilities(request)
server_max_batch_total_size_bytes = response.cache_capabilities.max_batch_total_size_bytes
if 0 < server_max_batch_total_size_bytes < self.max_batch_total_size_bytes:
self.max_batch_total_size_bytes = server_max_batch_total_size_bytes
except grpc.RpcError as e:
# Simply use the defaults for servers that don't implement GetCapabilities()
if e.code() != grpc.StatusCode.UNIMPLEMENTED:
raise
# Check whether the server supports BatchReadBlobs()
self.batch_read_supported = False
try:
request = remote_execution_pb2.BatchReadBlobsRequest()
response = self.cas.BatchReadBlobs(request)
self.batch_read_supported = True
except grpc.RpcError as e:
if e.code() != grpc.StatusCode.UNIMPLEMENTED:
raise
# Check whether the server supports BatchUpdateBlobs()
self.batch_update_supported = False
try:
request = remote_execution_pb2.BatchUpdateBlobsRequest()
response = self.cas.BatchUpdateBlobs(request)
self.batch_update_supported = True
except grpc.RpcError as e:
if (e.code() != grpc.StatusCode.UNIMPLEMENTED and
e.code() != grpc.StatusCode.PERMISSION_DENIED):
raise
self._initialized = True
# check_remote
#
# Used when checking whether remote_specs work in the buildstream main
# thread, runs this in a separate process to avoid creation of gRPC threads
# in the main BuildStream process
# See https://github.com/grpc/grpc/blob/master/doc/fork_support.md for details
@classmethod
def check_remote(cls, remote_spec, q):
def __check_remote():
try:
remote = cls(remote_spec)
remote.init()
request = buildstream_pb2.StatusRequest()
response = remote.ref_storage.Status(request)
if remote_spec.push and not response.allow_updates:
q.put('CAS server does not allow push')
else:
# No error
q.put(None)
except grpc.RpcError as e:
# str(e) is too verbose for errors reported to the user
q.put(e.details())
except Exception as e: # pylint: disable=broad-except
# Whatever happens, we need to return it to the calling process
#
q.put(str(e))
p = multiprocessing.Process(target=__check_remote)
try:
# Keep SIGINT blocked in the child process
with _signals.blocked([signal.SIGINT], ignore=False):
p.start()
error = q.get()
p.join()
except KeyboardInterrupt:
utils._kill_process_tree(p.pid)
raise
return error
# verify_digest_on_remote():
#
# Check whether the object is already on the server in which case
# there is no need to upload it.
#
# Args:
# digest (Digest): The object digest.
#
def verify_digest_on_remote(self, digest):
self.init()
request = remote_execution_pb2.FindMissingBlobsRequest()
request.blob_digests.extend([digest])
response = self.cas.FindMissingBlobs(request)
if digest in response.missing_blob_digests:
return False
return True
# push_message():
#
# Push the given protobuf message to a remote.
#
# Args:
# message (Message): A protobuf message to push.
#
# Raises:
# (CASRemoteError): if there was an error
#
def push_message(self, message):
message_buffer = message.SerializeToString()
message_digest = utils._message_digest(message_buffer)
self.init()
with io.BytesIO(message_buffer) as b:
self._send_blob(message_digest, b)
return message_digest
################################################
# Local Private Methods #
################################################
def _fetch_blob(self, digest, stream):
resource_name = '/'.join(['blobs', digest.hash, str(digest.size_bytes)])
request = bytestream_pb2.ReadRequest()
request.resource_name = resource_name
request.read_offset = 0
for response in self.bytestream.Read(request):
stream.write(response.data)
stream.flush()
assert digest.size_bytes == os.fstat(stream.fileno()).st_size
def _send_blob(self, digest, stream, u_uid=uuid.uuid4()):
resource_name = '/'.join(['uploads', str(u_uid), 'blobs',
digest.hash, str(digest.size_bytes)])
def request_stream(resname, instream):
offset = 0
finished = False
remaining = digest.size_bytes
while not finished:
chunk_size = min(remaining, _MAX_PAYLOAD_BYTES)
remaining -= chunk_size
request = bytestream_pb2.WriteRequest()
request.write_offset = offset
# max. _MAX_PAYLOAD_BYTES chunks
request.data = instream.read(chunk_size)
request.resource_name = resname
request.finish_write = remaining <= 0
yield request
offset += chunk_size
finished = request.finish_write
response = self.bytestream.Write(request_stream(resource_name, stream))
assert response.committed_size == digest.size_bytes
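# Hypothetical usage sketch (illustration only, not part of BuildStream): how
# the public CASRemote methods above fit together. Assumes `spec` is a
# CASRemoteSpec and `message` is any protobuf message worth storing remotely.
def _example_push_and_verify(spec, message):
    # Probe the remote first; check_remote() runs in a separate process so
    # that no gRPC threads are created in the calling process.
    q = multiprocessing.Queue()
    error = CASRemote.check_remote(spec, q)
    if error:
        raise CASRemoteError("Remote is not usable: {}".format(error))

    remote = CASRemote(spec)
    digest = remote.push_message(message)      # init() happens implicitly
    assert remote.verify_digest_on_remote(digest)
    return digest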
# Represents a batch of blobs queued for fetching.
#
class _CASBatchRead():
def __init__(self, remote):
self._remote = remote
self._max_total_size_bytes = remote.max_batch_total_size_bytes
self._request = remote_execution_pb2.BatchReadBlobsRequest()
self._size = 0
self._sent = False
def add(self, digest):
assert not self._sent
new_batch_size = self._size + digest.size_bytes
if new_batch_size > self._max_total_size_bytes:
# Not enough space left in current batch
return False
request_digest = self._request.digests.add()
request_digest.hash = digest.hash
request_digest.size_bytes = digest.size_bytes
self._size = new_batch_size
return True
def send(self):
assert not self._sent
self._sent = True
if not self._request.digests:
return
batch_response = self._remote.cas.BatchReadBlobs(self._request)
for response in batch_response.responses:
if response.status.code == code_pb2.NOT_FOUND:
raise BlobNotFound(response.digest.hash, "Failed to download blob {}: {}".format(
response.digest.hash, response.status.code))
if response.status.code != code_pb2.OK:
raise CASRemoteError("Failed to download blob {}: {}".format(
response.digest.hash, response.status.code))
if response.digest.size_bytes != len(response.data):
raise CASRemoteError("Failed to download blob {}: expected {} bytes, received {} bytes".format(
response.digest.hash, response.digest.size_bytes, len(response.data)))
yield (response.digest, response.data)
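# Hypothetical usage sketch (illustration only, not part of BuildStream):
# filling a read batch until it is full, then draining it. Assumes `remote`
# is an initialized CASRemote and `digests` is an iterable of Digest messages.
def _example_batch_fetch(remote, digests):
    blobs = []
    batch = _CASBatchRead(remote)
    for digest in digests:
        if not batch.add(digest):
            # The current batch is full: send it and start a new one.
            blobs.extend(batch.send())
            batch = _CASBatchRead(remote)
            if not batch.add(digest):
                # A single blob exceeding the batch limit must be fetched via
                # the ByteStream API instead; skipped here for brevity.
                continue
    blobs.extend(batch.send())
    return blobs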
# Represents a batch of blobs queued for upload.
#
class _CASBatchUpdate():
def __init__(self, remote):
self._remote = remote
self._max_total_size_bytes = remote.max_batch_total_size_bytes
self._request = remote_execution_pb2.BatchUpdateBlobsRequest()
self._size = 0
self._sent = False
def add(self, digest, stream):
assert not self._sent
new_batch_size = self._size + digest.size_bytes
if new_batch_size > self._max_total_size_bytes:
# Not enough space left in current batch
return False
blob_request = self._request.requests.add()
blob_request.digest.hash = digest.hash
blob_request.digest.size_bytes = digest.size_bytes
blob_request.data = stream.read(digest.size_bytes)
self._size = new_batch_size
return True
def send(self):
assert not self._sent
self._sent = True
if not self._request.requests:
return
batch_response = self._remote.cas.BatchUpdateBlobs(self._request)
for response in batch_response.responses:
if response.status.code != code_pb2.OK:
raise CASRemoteError("Failed to upload blob {}: {}".format(
response.digest.hash, response.status.code))
@@ -24,16 +24,20 @@ import signal
import sys
import tempfile
import uuid
import errno
import threading
import click
import grpc
import click
from .._protos.build.bazel.remote.execution.v2 import remote_execution_pb2, remote_execution_pb2_grpc
from .._protos.google.bytestream import bytestream_pb2, bytestream_pb2_grpc
from .._protos.buildstream.v2 import buildstream_pb2, buildstream_pb2_grpc
from .._protos.google.rpc import code_pb2
from .._exceptions import CASError
from .._exceptions import ArtifactError
from .._context import Context
from .cascache import CASCache
# The default limit for gRPC messages is 4 MiB.
@@ -54,27 +58,28 @@ class ArtifactTooLargeException(Exception):
# repo (str): Path to CAS repository
# enable_push (bool): Whether to allow blob uploads and artifact updates
#
def create_server(repo, *, enable_push):
context = Context()
context.artifactdir = os.path.abspath(repo)
artifactcache = context.artifactcache
def create_server(repo, *, enable_push,
max_head_size=int(10e9),
min_head_size=int(2e9)):
cas = CASCache(os.path.abspath(repo))
# Use max_workers default from Python 3.5+
max_workers = (os.cpu_count() or 1) * 5
server = grpc.server(futures.ThreadPoolExecutor(max_workers))
cache_cleaner = _CacheCleaner(cas, max_head_size, min_head_size)
bytestream_pb2_grpc.add_ByteStreamServicer_to_server(
_ByteStreamServicer(artifactcache, enable_push=enable_push), server)
_ByteStreamServicer(cas, cache_cleaner, enable_push=enable_push), server)
remote_execution_pb2_grpc.add_ContentAddressableStorageServicer_to_server(
_ContentAddressableStorageServicer(artifactcache, enable_push=enable_push), server)
_ContentAddressableStorageServicer(cas, cache_cleaner, enable_push=enable_push), server)
remote_execution_pb2_grpc.add_CapabilitiesServicer_to_server(
_CapabilitiesServicer(), server)
buildstream_pb2_grpc.add_ReferenceStorageServicer_to_server(
_ReferenceStorageServicer(artifactcache, enable_push=enable_push), server)
_ReferenceStorageServicer(cas, enable_push=enable_push), server)
return server
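A minimal sketch of standing the server up with the new head-room knobs; the repository path, port and sizes here are illustrative, and TLS plus option parsing are handled by server_main below:

server = create_server('/srv/artifact-cache', enable_push=True,
                       max_head_size=int(10e9),   # clean until roughly 10 GB of head room
                       min_head_size=int(2e9))    # never let free space drop below ~2 GB
server.add_insecure_port('localhost:50052')
server.start()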
......@@ -86,9 +91,19 @@ def create_server(repo, *, enable_push):
@click.option('--client-certs', help="Public client certificates for TLS (PEM-encoded)")
@click.option('--enable-push', default=False, is_flag=True,
help="Allow clients to upload blobs and update artifact cache")
@click.option('--head-room-min', type=click.INT,
help="Disk head room minimum in bytes",
default=2e9)
@click.option('--head-room-max', type=click.INT,
help="Disk head room maximum in bytes",
default=10e9)
@click.argument('repo')
def server_main(repo, port, server_key, server_cert, client_certs, enable_push):
server = create_server(repo, enable_push=enable_push)
def server_main(repo, port, server_key, server_cert, client_certs, enable_push,
head_room_min, head_room_max):
server = create_server(repo,
max_head_size=head_room_max,
min_head_size=head_room_min,
enable_push=enable_push)
use_tls = bool(server_key)
......@@ -130,10 +145,11 @@ def server_main(repo, port, server_key, server_cert, client_certs, enable_push):
class _ByteStreamServicer(bytestream_pb2_grpc.ByteStreamServicer):
def __init__(self, cas, *, enable_push):
def __init__(self, cas, cache_cleaner, *, enable_push):
super().__init__()
self.cas = cas
self.enable_push = enable_push
self.cache_cleaner = cache_cleaner
def Read(self, request, context):
resource_name = request.resource_name
......@@ -191,17 +207,34 @@ class _ByteStreamServicer(bytestream_pb2_grpc.ByteStreamServicer):
context.set_code(grpc.StatusCode.NOT_FOUND)
return response
while True:
if client_digest.size_bytes == 0:
break
try:
_clean_up_cache(self.cas, client_digest.size_bytes)
self.cache_cleaner.clean_up(client_digest.size_bytes)
except ArtifactTooLargeException as e:
context.set_code(grpc.StatusCode.RESOURCE_EXHAUSTED)
context.set_details(str(e))
return response
try:
os.posix_fallocate(out.fileno(), 0, client_digest.size_bytes)
break
except OSError as e:
# Multiple uploads can happen at the same time
if e.errno != errno.ENOSPC:
raise
elif request.resource_name:
# If it is set on subsequent calls, it **must** match the value of the first request.
if request.resource_name != resource_name:
context.set_code(grpc.StatusCode.FAILED_PRECONDITION)
return response
if (offset + len(request.data)) > client_digest.size_bytes:
context.set_code(grpc.StatusCode.FAILED_PRECONDITION)
return response
out.write(request.data)
offset += len(request.data)
if request.finish_write:
......@@ -209,7 +242,7 @@ class _ByteStreamServicer(bytestream_pb2_grpc.ByteStreamServicer):
context.set_code(grpc.StatusCode.FAILED_PRECONDITION)
return response
out.flush()
digest = self.cas.add_object(path=out.name)
digest = self.cas.add_object(path=out.name, link_directly=True)
if digest.hash != client_digest.hash:
context.set_code(grpc.StatusCode.FAILED_PRECONDITION)
return response
......@@ -222,18 +255,26 @@ class _ByteStreamServicer(bytestream_pb2_grpc.ByteStreamServicer):
class _ContentAddressableStorageServicer(remote_execution_pb2_grpc.ContentAddressableStorageServicer):
def __init__(self, cas, *, enable_push):
def __init__(self, cas, cache_cleaner, *, enable_push):
super().__init__()
self.cas = cas
self.enable_push = enable_push
self.cache_cleaner = cache_cleaner
def FindMissingBlobs(self, request, context):
response = remote_execution_pb2.FindMissingBlobsResponse()
for digest in request.blob_digests:
if not _has_object(self.cas, digest):
objpath = self.cas.objpath(digest)
try:
os.utime(objpath)
except OSError as e:
if e.errno != errno.ENOENT:
raise
else:
d = response.missing_blob_digests.add()
d.hash = digest.hash
d.size_bytes = digest.size_bytes
return response
def BatchReadBlobs(self, request, context):
......@@ -252,12 +293,12 @@ class _ContentAddressableStorageServicer(remote_execution_pb2_grpc.ContentAddres
try:
with open(self.cas.objpath(digest), 'rb') as f:
if os.fstat(f.fileno()).st_size != digest.size_bytes:
blob_response.status.code = grpc.StatusCode.NOT_FOUND
blob_response.status.code = code_pb2.NOT_FOUND
continue
blob_response.data = f.read(digest.size_bytes)
except FileNotFoundError:
blob_response.status.code = grpc.StatusCode.NOT_FOUND
blob_response.status.code = code_pb2.NOT_FOUND
return response
......@@ -283,21 +324,21 @@ class _ContentAddressableStorageServicer(remote_execution_pb2_grpc.ContentAddres
blob_response.digest.size_bytes = digest.size_bytes
if len(blob_request.data) != digest.size_bytes:
blob_response.status.code = grpc.StatusCode.FAILED_PRECONDITION
blob_response.status.code = code_pb2.FAILED_PRECONDITION
continue
try:
_clean_up_cache(self.cas, digest.size_bytes)
self.cache_cleaner.clean_up(digest.size_bytes)
with tempfile.NamedTemporaryFile(dir=self.cas.tmpdir) as out:
out.write(blob_request.data)
out.flush()
server_digest = self.cas.add_object(path=out.name)
if server_digest.hash != digest.hash:
blob_response.status.code = grpc.StatusCode.FAILED_PRECONDITION
blob_response.status.code = code_pb2.FAILED_PRECONDITION
except ArtifactTooLargeException:
blob_response.status.code = grpc.StatusCode.RESOURCE_EXHAUSTED
blob_response.status.code = code_pb2.RESOURCE_EXHAUSTED
return response
......@@ -330,10 +371,16 @@ class _ReferenceStorageServicer(buildstream_pb2_grpc.ReferenceStorageServicer):
try:
tree = self.cas.resolve_ref(request.key, update_mtime=True)
try:
self.cas.update_tree_mtime(tree)
except FileNotFoundError:
self.cas.remove(request.key, defer_prune=True)
context.set_code(grpc.StatusCode.NOT_FOUND)
return response
response.digest.hash = tree.hash
response.digest.size_bytes = tree.size_bytes
except ArtifactError:
except CASError:
context.set_code(grpc.StatusCode.NOT_FOUND)
return response
......@@ -402,10 +449,26 @@ def _digest_from_upload_resource_name(resource_name):
return None
def _has_object(cas, digest):
objpath = cas.objpath(digest)
return os.path.exists(objpath)
class _CacheCleaner:
__cleanup_cache_lock = threading.Lock()
def __init__(self, cas, max_head_size, min_head_size=int(2e9)):
self.__cas = cas
self.__max_head_size = max_head_size
self.__min_head_size = min_head_size
def __has_space(self, object_size):
stats = os.statvfs(self.__cas.casdir)
free_disk_space = (stats.f_bavail * stats.f_bsize) - self.__min_head_size
total_disk_space = (stats.f_blocks * stats.f_bsize) - self.__min_head_size
if object_size > total_disk_space:
raise ArtifactTooLargeException("Artifact of size: {} is too large for "
"the filesystem which mounts the remote "
"cache".format(object_size))
return object_size <= free_disk_space
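# Worked example with illustrative numbers: on a volume with 5 GB available
# and a min_head_size of 2 GB, free_disk_space is 3 GB, so a 2.5 GB object
# fits while a 4 GB object forces a clean-up; an object larger than the whole
# volume (minus the head room) raises ArtifactTooLargeException outright.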
# _clean_up_cache()
#
......@@ -413,36 +476,32 @@ def _has_object(cas, digest):
# is enough space for the incoming artifact
#
# Args:
# cas: CASCache object
# object_size: The size of the object being received in bytes
#
# Returns:
# int: The total bytes removed on the filesystem
#
def _clean_up_cache(cas, object_size):
# Determine the available disk space, in bytes, of the file system
# which mounts the repo
stats = os.statvfs(cas.casdir)
buffer_ = int(2e9) # Add a 2 GB buffer
free_disk_space = (stats.f_bfree * stats.f_bsize) - buffer_
total_disk_space = (stats.f_blocks * stats.f_bsize) - buffer_
if object_size > total_disk_space:
raise ArtifactTooLargeException("Artifact of size: {} is too large for "
"the filesystem which mounts the remote "
"cache".format(object_size))
def clean_up(self, object_size):
if self.__has_space(object_size):
return 0
if object_size <= free_disk_space:
# No need to clean up
with _CacheCleaner.__cleanup_cache_lock:
if self.__has_space(object_size):
# Another thread has done the cleanup for us
return 0
stats = os.statvfs(self.__cas.casdir)
target_disk_space = (stats.f_bavail * stats.f_bsize) - self.__max_head_size
# obtain a list of LRP artifacts
LRP_artifacts = cas.list_artifacts()
LRP_objects = self.__cas.list_objects()
removed_size = 0 # in bytes
while object_size - removed_size > free_disk_space:
last_mtime = 0
while object_size - removed_size > target_disk_space:
try:
to_remove = LRP_artifacts.pop(0) # The first element in the list is the LRP artifact
last_mtime, to_remove = LRP_objects.pop(0) # The first element in the list is the LRP artifact
except IndexError:
# This exception is caught if there are no more artifacts in the list
# LRP_artifacts. This means the artifact is too large for the filesystem
......@@ -451,7 +510,14 @@ def _clean_up_cache(cas, object_size):
"the filesystem which mounts the remote "
"cache".format(object_size))
removed_size += cas.remove(to_remove, defer_prune=False)
try:
size = os.stat(to_remove).st_size
os.unlink(to_remove)
removed_size += size
except FileNotFoundError:
pass
self.__cas.clean_up_refs_until(last_mtime)
if removed_size > 0:
logging.info("Successfully removed {} bytes from the cache".format(removed_size))
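Condensed, this is the guard the servicers above apply before accepting a blob; `cas` and `incoming_digest` stand in for the real objects, and the head-room values mirror the server defaults:

cleaner = _CacheCleaner(cas, max_head_size=int(10e9), min_head_size=int(2e9))
try:
    # Evicts least-recently-used objects (and trims refs) until the blob fits
    cleaner.clean_up(incoming_digest.size_bytes)
except ArtifactTooLargeException:
    # The blob can never fit on this filesystem; the servicers translate this
    # into a RESOURCE_EXHAUSTED status for the client
    pass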
......
......@@ -30,10 +30,11 @@ from . import _yaml
from ._exceptions import LoadError, LoadErrorReason, BstError
from ._message import Message, MessageType
from ._profile import Topics, profile_start, profile_end
from ._artifactcache import ArtifactCache
from ._artifactcache.cascache import CASCache
from ._workspaces import Workspaces
from ._artifactcache import ArtifactCache, ArtifactCacheUsage
from ._cas import CASCache
from ._workspaces import Workspaces, WorkspaceProjectCache
from .plugin import _plugin_lookup
from .sandbox import SandboxRemote
# Context()
......@@ -47,9 +48,12 @@ from .plugin import _plugin_lookup
# verbosity levels and basically anything pertaining to the context
# in which BuildStream was invoked.
#
# Args:
# directory (str): The directory that buildstream was invoked in
#
class Context():
def __init__(self):
def __init__(self, directory=None):
# Filename indicating which configuration file was used, or None for the defaults
self.config_origin = None
......@@ -60,29 +64,35 @@ class Context():
# The directory where build sandboxes will be created
self.builddir = None
# Default root location for workspaces
self.workspacedir = None
# The local binary artifact cache directory
self.artifactdir = None
# The locations from which to push and pull prebuilt artifacts
self.artifact_cache_specs = []
self.artifact_cache_specs = None
# The global remote execution configuration
self.remote_execution_specs = None
# The directory to store build logs
self.logdir = None
# The abbreviated cache key length to display in the UI
self.log_key_length = 0
self.log_key_length = None
# Whether debug mode is enabled
self.log_debug = False
self.log_debug = None
# Whether verbose mode is enabled
self.log_verbose = False
self.log_verbose = None
# Maximum number of lines to print from build logs
self.log_error_lines = 0
self.log_error_lines = None
# Maximum number of lines to print in the master log for a detailed message
self.log_message_lines = 0
self.log_message_lines = None
# Format string for printing the pipeline at startup time
self.log_element_format = None
......@@ -91,19 +101,29 @@ class Context():
self.log_message_format = None
# Maximum number of fetch or refresh tasks
self.sched_fetchers = 4
self.sched_fetchers = None
# Maximum number of build tasks
self.sched_builders = 4
self.sched_builders = None
# Maximum number of push tasks
self.sched_pushers = 4
self.sched_pushers = None
# Maximum number of retries for network tasks
self.sched_network_retries = 2
self.sched_network_retries = None
# What to do when a build fails in non interactive mode
self.sched_error_action = 'continue'
self.sched_error_action = None
# Size of the artifact cache in bytes
self.config_cache_quota = None
# Whether or not to attempt to pull build trees globally
self.pull_buildtrees = None
# Boolean, whether we double-check with the user that they meant to
# close the workspace when they're using it to access the project.
self.prompt_workspace_close_project_inaccessible = None
# Whether elements must be rebuilt when their dependencies have changed
self._strict_build_plan = None
......@@ -119,9 +139,11 @@ class Context():
self._projects = []
self._project_overrides = {}
self._workspaces = None
self._workspace_project_cache = WorkspaceProjectCache()
self._log_handle = None
self._log_filename = None
self.config_cache_quota = 'infinity'
self._cascache = None
self._directory = directory
# load()
#
......@@ -161,10 +183,10 @@ class Context():
_yaml.node_validate(defaults, [
'sourcedir', 'builddir', 'artifactdir', 'logdir',
'scheduler', 'artifacts', 'logging', 'projects',
'cache'
'cache', 'prompt', 'workspacedir', 'remote-execution'
])
for directory in ['sourcedir', 'builddir', 'artifactdir', 'logdir']:
for directory in ['sourcedir', 'builddir', 'artifactdir', 'logdir', 'workspacedir']:
# Allow tilde (~) expansion and any environment variables in
# the path specifications in the config files.
#
......@@ -179,13 +201,18 @@ class Context():
# our artifactdir - the artifactdir may not have been created
# yet.
cache = _yaml.node_get(defaults, Mapping, 'cache')
_yaml.node_validate(cache, ['quota'])
_yaml.node_validate(cache, ['quota', 'pull-buildtrees'])
self.config_cache_quota = _yaml.node_get(cache, str, 'quota', default_value='infinity')
self.config_cache_quota = _yaml.node_get(cache, str, 'quota')
# Load artifact share configuration
self.artifact_cache_specs = ArtifactCache.specs_from_config_node(defaults)
self.remote_execution_specs = SandboxRemote.specs_from_config_node(defaults)
# Load pull build trees configuration
self.pull_buildtrees = _yaml.node_get(cache, bool, 'pull-buildtrees')
# Load logging config
logging = _yaml.node_get(defaults, Mapping, 'logging')
_yaml.node_validate(logging, [
......@@ -207,36 +234,57 @@ class Context():
'on-error', 'fetchers', 'builders',
'pushers', 'network-retries'
])
self.sched_error_action = _yaml.node_get(scheduler, str, 'on-error')
self.sched_error_action = _node_get_option_str(
scheduler, 'on-error', ['continue', 'quit', 'terminate'])
self.sched_fetchers = _yaml.node_get(scheduler, int, 'fetchers')
self.sched_builders = _yaml.node_get(scheduler, int, 'builders')
self.sched_pushers = _yaml.node_get(scheduler, int, 'pushers')
self.sched_network_retries = _yaml.node_get(scheduler, int, 'network-retries')
# Load prompt preferences
#
# We convert string options to booleans here, so we can be both user
# and coder-friendly. The string options are worded to match the
# responses the user would give at the cli, for least surprise. The
# booleans are converted here because it's easiest to eyeball that the
# strings are right.
#
prompt = _yaml.node_get(
defaults, Mapping, 'prompt')
_yaml.node_validate(prompt, [
'really-workspace-close-project-inaccessible',
])
self.prompt_workspace_close_project_inaccessible = _node_get_option_str(
prompt, 'really-workspace-close-project-inaccessible', ['ask', 'yes']) == 'ask'
# Load per-projects overrides
self._project_overrides = _yaml.node_get(defaults, Mapping, 'projects', default_value={})
# Shallow validation of overrides; parts of BuildStream which rely
# on the overrides are expected to validate elsewhere.
for _, overrides in _yaml.node_items(self._project_overrides):
_yaml.node_validate(overrides, ['artifacts', 'options', 'strict', 'default-mirror'])
_yaml.node_validate(overrides, ['artifacts', 'options', 'strict', 'default-mirror',
'remote-execution'])
profile_end(Topics.LOAD_CONTEXT, 'load')
valid_actions = ['continue', 'quit']
if self.sched_error_action not in valid_actions:
provenance = _yaml.node_get_provenance(scheduler, 'on-error')
raise LoadError(LoadErrorReason.INVALID_DATA,
"{}: on-error should be one of: {}".format(
provenance, ", ".join(valid_actions)))
@property
def artifactcache(self):
if not self._artifactcache:
self._artifactcache = CASCache(self)
self._artifactcache = ArtifactCache(self)
return self._artifactcache
# get_artifact_cache_usage()
#
# Fetches the current usage of the artifact cache
#
# Returns:
# (ArtifactCacheUsage): The current status
#
def get_artifact_cache_usage(self):
return ArtifactCacheUsage(self.artifactcache)
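A short sketch of how a caller reads it — the frontend status header later in this diff does essentially this with `used_percent`:

usage = context.get_artifact_cache_usage()          # `context` is a loaded Context
status_fragment = "cache: {}%".format(usage.used_percent)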
# add_project():
#
# Add a project to the context.
......@@ -246,7 +294,7 @@ class Context():
#
def add_project(self, project):
if not self._projects:
self._workspaces = Workspaces(project)
self._workspaces = Workspaces(project, self._workspace_project_cache)
self._projects.append(project)
# get_projects():
......@@ -265,14 +313,31 @@ class Context():
# invoked with as opposed to a junctioned subproject.
#
# Returns:
# (list): The list of projects
# (Project): The Project object
#
def get_toplevel_project(self):
return self._projects[0]
# get_workspaces():
#
# Return a Workspaces object containing a list of workspaces.
#
# Returns:
# (Workspaces): The Workspaces object
#
def get_workspaces(self):
return self._workspaces
# get_workspace_project_cache():
#
# Return the WorkspaceProjectCache object used for this BuildStream invocation
#
# Returns:
# (WorkspaceProjectCache): The WorkspaceProjectCache object
#
def get_workspace_project_cache(self):
return self._workspace_project_cache
# get_overrides():
#
# Fetch the override dictionary for the active project. This returns
......@@ -364,7 +429,6 @@ class Context():
assert self._message_handler
self._message_handler(message, context=self)
return
# silence()
#
......@@ -583,3 +647,35 @@ class Context():
os.environ['XDG_CONFIG_HOME'] = os.path.expanduser('~/.config')
if not os.environ.get('XDG_DATA_HOME'):
os.environ['XDG_DATA_HOME'] = os.path.expanduser('~/.local/share')
def get_cascache(self):
if self._cascache is None:
self._cascache = CASCache(self.artifactdir)
return self._cascache
# _node_get_option_str()
#
# Like _yaml.node_get(), but also checks value is one of the allowed option
# strings. Fetches a value from a dictionary node, and makes sure it's one of
# the pre-defined options.
#
# Args:
# node (dict): The dictionary node
# key (str): The key to get a value for in node
# allowed_options (iterable): Only accept these values
#
# Returns:
# The value, if found in 'node'.
#
# Raises:
# LoadError, when the value is not of the expected type, or is not found.
#
def _node_get_option_str(node, key, allowed_options):
result = _yaml.node_get(node, str, key)
if result not in allowed_options:
provenance = _yaml.node_get_provenance(node, key)
raise LoadError(LoadErrorReason.INVALID_DATA,
"{}: {} should be one of: {}".format(
provenance, key, ", ".join(allowed_options)))
return result
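For example, the scheduler configuration above loads its failure policy through this helper; any value outside the allowed set raises LoadError with the provenance of the offending key:

on_error = _node_get_option_str(scheduler, 'on-error',
                                ['continue', 'quit', 'terminate'])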
......@@ -47,7 +47,6 @@ class ElementFactory(PluginContext):
# Args:
# context (object): The Context object for processing
# project (object): The project object
# artifacts (ArtifactCache): The artifact cache
# meta (object): The loaded MetaElement
#
# Returns: A newly created Element object of the appropriate kind
......@@ -56,9 +55,9 @@ class ElementFactory(PluginContext):
# PluginError (if the kind lookup failed)
# LoadError (if the element itself took issue with the config)
#
def create(self, context, project, artifacts, meta):
def create(self, context, project, meta):
element_type, default_config = self.lookup(meta.kind)
element = element_type(context, project, artifacts, meta, default_config)
element = element_type(context, project, meta, default_config)
version = self._format_versions.get(meta.kind, 0)
self._assert_plugin_format(element, version)
return element
......@@ -90,6 +90,7 @@ class ErrorDomain(Enum):
APP = 12
STREAM = 13
VIRTUAL_FS = 14
CAS = 15
# BstError is an internal base exception class for BuildStream
......@@ -111,10 +112,8 @@ class BstError(Exception):
#
self.detail = detail
# The build sandbox in which the error occurred, if the
# error occurred at element assembly time.
#
self.sandbox = None
# A sandbox can be created to debug this error
self.sandbox = False
# When this exception occurred during the handling of a job, indicate
# whether or not there is any point retrying the job.
......@@ -263,8 +262,8 @@ class PlatformError(BstError):
# Raised when errors are encountered by the sandbox implementation
#
class SandboxError(BstError):
def __init__(self, message, reason=None):
super().__init__(message, domain=ErrorDomain.SANDBOX, reason=reason)
def __init__(self, message, detail=None, reason=None):
super().__init__(message, detail=detail, domain=ErrorDomain.SANDBOX, reason=reason)
# ArtifactError
......@@ -276,6 +275,30 @@ class ArtifactError(BstError):
super().__init__(message, detail=detail, domain=ErrorDomain.ARTIFACT, reason=reason, temporary=True)
# CASError
#
# Raised when errors are encountered in the CAS
#
class CASError(BstError):
def __init__(self, message, *, detail=None, reason=None, temporary=False):
super().__init__(message, detail=detail, domain=ErrorDomain.CAS, reason=reason, temporary=True)
# CASRemoteError
#
# Raised when errors are encountered in the remote CAS
class CASRemoteError(CASError):
pass
# CASCacheError
#
# Raised when errors are encountered in the local CASCache
#
class CASCacheError(CASError):
pass
# PipelineError
#
# Raised from pipeline operations
......
......@@ -20,7 +20,6 @@
from contextlib import contextmanager
import os
import sys
import resource
import traceback
import datetime
from textwrap import TextWrapper
......@@ -39,7 +38,7 @@ from .._message import Message, MessageType, unconditional_messages
from .._stream import Stream
from .._versions import BST_FORMAT_VERSION
from .. import _yaml
from .._scheduler import ElementJob
from .._scheduler import ElementJob, JobStatus
# Import frontend assets
from . import Profile, LogLine, Status
......@@ -165,7 +164,7 @@ class App():
# Load the Context
#
try:
self.context = Context()
self.context = Context(directory)
self.context.load(config)
except BstError as e:
self._error_exit(e, "Error loading user configuration")
......@@ -183,7 +182,8 @@ class App():
'fetchers': 'sched_fetchers',
'builders': 'sched_builders',
'pushers': 'sched_pushers',
'network_retries': 'sched_network_retries'
'network_retries': 'sched_network_retries',
'pull_buildtrees': 'pull_buildtrees'
}
for cli_option, context_attr in override_map.items():
option_value = self._main_options.get(cli_option)
......@@ -194,11 +194,6 @@ class App():
except BstError as e:
self._error_exit(e, "Error instantiating platform")
try:
self.context.artifactcache.preflight()
except BstError as e:
self._error_exit(e, "Error instantiating artifact cache")
# Create the logger right before setting the message handler
self.logger = LogLine(self.context,
self._content_profile,
......@@ -211,6 +206,13 @@ class App():
# Propagate pipeline feedback to the user
self.context.set_message_handler(self._message_handler)
# Preflight the artifact cache after initializing logging,
# since preflighting can cause messages to be emitted.
try:
self.context.artifactcache.preflight()
except BstError as e:
self._error_exit(e, "Error instantiating artifact cache")
#
# Load the Project
#
......@@ -219,12 +221,13 @@ class App():
default_mirror=self._main_options.get('default_mirror'))
except LoadError as e:
# Let's automatically start a `bst init` session in this case
if e.reason == LoadErrorReason.MISSING_PROJECT_CONF and self.interactive:
click.echo("A project was not detected in the directory: {}".format(directory), err=True)
# Help users that are new to BuildStream by suggesting 'init'.
# We don't want to slow down users that just made a mistake, so
# don't stop them with an offer to create a project for them.
if e.reason == LoadErrorReason.MISSING_PROJECT_CONF:
click.echo("No project found. You can create a new project like so:", err=True)
click.echo("", err=True)
if click.confirm("Would you like to create a new project here ?"):
self.init_project(None)
click.echo(" bst init", err=True)
self._error_exit(e, "Error loading project")
......@@ -306,7 +309,6 @@ class App():
directory = self._main_options['directory']
directory = os.path.abspath(directory)
project_path = os.path.join(directory, 'project.conf')
elements_path = os.path.join(directory, element_path)
try:
# Abort if the project.conf already exists, unless `--force` was specified in `bst init`
......@@ -336,6 +338,7 @@ class App():
raise AppError("Error creating project directory {}: {}".format(directory, e)) from e
# Create the elements sub-directory if it doesn't exist
elements_path = os.path.join(directory, element_path)
try:
os.makedirs(elements_path, exist_ok=True)
except IOError as e:
......@@ -514,13 +517,13 @@ class App():
self._status.add_job(job)
self._maybe_render_status()
def _job_completed(self, job, success):
def _job_completed(self, job, status):
self._status.remove_job(job)
self._maybe_render_status()
# Don't attempt to handle a failure if the user has already opted to
# terminate
if not success and not self.stream.terminated:
if status == JobStatus.FAIL and not self.stream.terminated:
if isinstance(job, ElementJob):
element = job.element
......@@ -598,7 +601,7 @@ class App():
click.echo("\nDropping into an interactive shell in the failed build sandbox\n", err=True)
try:
prompt = self.shell_prompt(element)
self.stream.shell(element, Scope.BUILD, prompt, directory=failure.sandbox, isolate=True)
self.stream.shell(element, Scope.BUILD, prompt, isolate=True, usebuildtree=True)
except BstError as e:
click.echo("Error while attempting to create interactive shell: {}".format(e), err=True)
elif choice == 'log':
......
......@@ -31,7 +31,7 @@
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#
import collections
import collections.abc
import copy
import os
......@@ -203,7 +203,7 @@ def is_incomplete_option(all_args, cmd_param):
if start_of_option(arg_str):
last_option = arg_str
return True if last_option and last_option in cmd_param.opts else False
return bool(last_option and last_option in cmd_param.opts)
def is_incomplete_argument(current_params, cmd_param):
......@@ -218,7 +218,7 @@ def is_incomplete_argument(current_params, cmd_param):
return True
if cmd_param.nargs == -1:
return True
if isinstance(current_param_values, collections.Iterable) \
if isinstance(current_param_values, collections.abc.Iterable) \
and cmd_param.nargs > 1 and len(current_param_values) < cmd_param.nargs:
return True
return False
......@@ -297,12 +297,15 @@ def get_choices(cli, prog_name, args, incomplete, override):
if not found_param and isinstance(ctx.command, MultiCommand):
# completion for any subcommands
choices.extend([cmd + " " for cmd in ctx.command.list_commands(ctx)])
choices.extend([cmd + " " for cmd in ctx.command.list_commands(ctx)
if not ctx.command.get_command(ctx, cmd).hidden])
if not start_of_option(incomplete) and ctx.parent is not None \
and isinstance(ctx.parent.command, MultiCommand) and ctx.parent.command.chain:
# completion for chained commands
remaining_comands = set(ctx.parent.command.list_commands(ctx.parent)) - set(ctx.parent.protected_args)
visible_commands = [cmd for cmd in ctx.parent.command.list_commands(ctx.parent)
if not ctx.parent.command.get_command(ctx.parent, cmd).hidden]
remaining_comands = set(visible_commands) - set(ctx.parent.protected_args)
choices.extend([cmd + " " for cmd in remaining_comands])
for item in choices:
......
......@@ -18,8 +18,8 @@
# Tristan Van Berkom <tristan.vanberkom@codethink.co.uk>
import os
import sys
import click
import curses
import click
# Import a widget internal for formatting time codes
from .widget import TimeCode
......@@ -353,13 +353,17 @@ class _StatusHeader():
def render(self, line_length, elapsed):
project = self._context.get_toplevel_project()
line_length = max(line_length, 80)
size = 0
text = ''
#
# Line 1: Session time, project name, session / total elements
#
# ========= 00:00:00 project-name (143/387) =========
#
session = str(len(self._stream.session_elements))
total = str(len(self._stream.total_elements))
# Format and calculate size for target and overall time code
size = 0
text = ''
size += len(total) + len(session) + 4 # Size for (N/N) with a leading space
size += 8 # Size of time code
size += len(project.name) + 1
......@@ -372,6 +376,12 @@ class _StatusHeader():
self._format_profile.fmt(')')
line1 = self._centered(text, size, line_length, '=')
#
# Line 2: Dynamic list of queue status reports
#
# (Fetched:0 117 0)→ (Built:4 0 0)
#
size = 0
text = ''
......@@ -389,10 +399,28 @@ class _StatusHeader():
line2 = self._centered(text, size, line_length, ' ')
size = 24
text = self._format_profile.fmt("~~~~~ ") + \
self._content_profile.fmt('Active Tasks') + \
self._format_profile.fmt(" ~~~~~")
#
# Line 3: Cache usage percentage report
#
# ~~~~~~ cache: 69% ~~~~~~
#
usage = self._context.get_artifact_cache_usage()
usage_percent = '{}%'.format(usage.used_percent)
size = 21
size += len(usage_percent)
if usage.used_percent >= 95:
formatted_usage_percent = self._error_profile.fmt(usage_percent)
elif usage.used_percent >= 80:
formatted_usage_percent = self._content_profile.fmt(usage_percent)
else:
formatted_usage_percent = self._success_profile.fmt(usage_percent)
text = self._format_profile.fmt("~~~~~~ ") + \
self._content_profile.fmt('cache') + \
self._format_profile.fmt(': ') + \
formatted_usage_percent + \
self._format_profile.fmt(' ~~~~~~')
line3 = self._centered(text, size, line_length, ' ')
return line1 + '\n' + line2 + '\n' + line3
......