Compare revisions

Changes are shown as if the source revision was being merged into the target revision.

Commits on Source (214)
Showing with 1858 additions and 738 deletions
......@@ -4,11 +4,15 @@ include =
*/buildstream/*
omit =
# Omit profiling helper module
# Omit some internals
*/buildstream/_profile.py
*/buildstream/__main__.py
*/buildstream/_version.py
# Omit generated code
*/buildstream/_protos/*
*/.eggs/*
# Omit .tox directory
*/.tox/*
[report]
show_missing = True
......
......@@ -13,10 +13,12 @@ tests/**/*.pyc
integration-cache/
tmp
.coverage
.coverage-reports/
.coverage.*
.cache
.pytest_cache/
*.bst/
.tox/
# Pycache, in case buildstream is run directly from within the source
# tree
......
image: buildstream/testsuite-debian:9-master-123-7ce6581b
image: buildstream/testsuite-debian:9-5da27168-32c47d1c
cache:
key: "$CI_JOB_NAME-"
......@@ -6,49 +6,14 @@ cache:
- cache/
stages:
- prepare
- test
- post
variables:
PYTEST_ADDOPTS: "--color=yes"
INTEGRATION_CACHE: "${CI_PROJECT_DIR}/cache/integration-cache"
TEST_COMMAND: 'python3 setup.py test --index-url invalid://uri --addopts --integration'
#####################################################
# Prepare stage #
#####################################################
# Create a source distribution
#
source_dist:
stage: prepare
script:
# Generate the source distribution tarball
#
- python3 setup.py sdist
- tar -ztf dist/*
- tarball=$(cd dist && echo $(ls *))
# Verify that the source distribution tarball can be installed correctly
#
- pip3 install dist/*.tar.gz
- bst --version
# unpack tarball as `dist/buildstream` directory
- |
cat > dist/unpack.sh << EOF
#!/bin/sh
tar -zxf ${tarball}
mv ${tarball%.tar.gz} buildstream
EOF
# Make our helpers executable
- chmod +x dist/unpack.sh
artifacts:
paths:
- dist/
TEST_COMMAND: "tox -- --color=yes --integration"
COVERAGE_PREFIX: "${CI_JOB_NAME}."
#####################################################
......@@ -60,54 +25,53 @@ source_dist:
.tests-template: &tests
stage: test
variables:
COVERAGE_DIR: coverage-linux
before_script:
# Diagnostics
- mount
- df -h
# Unpack
- cd dist && ./unpack.sh
- cd buildstream
script:
- useradd -Um buildstream
- chown -R buildstream:buildstream .
# Run the tests from the source distribution, We run as a simple
# user to test for permission issues
# Run the tests as a simple user to test for permission issues
- su buildstream -c "${TEST_COMMAND}"
after_script:
# Collect our reports
- mkdir -p ${COVERAGE_DIR}
- cp dist/buildstream/.coverage ${COVERAGE_DIR}/coverage."${CI_JOB_NAME}"
except:
- schedules
artifacts:
paths:
- ${COVERAGE_DIR}
- .coverage-reports
tests-debian-9:
image: buildstream/testsuite-debian:9-master-123-7ce6581b
image: buildstream/testsuite-debian:9-5da27168-32c47d1c
<<: *tests
tests-fedora-27:
image: buildstream/testsuite-fedora:27-master-123-7ce6581b
image: buildstream/testsuite-fedora:27-5da27168-32c47d1c
<<: *tests
tests-fedora-28:
image: buildstream/testsuite-fedora:28-master-123-7ce6581b
image: buildstream/testsuite-fedora:28-5da27168-32c47d1c
<<: *tests
tests-ubuntu-18.04:
image: buildstream/testsuite-ubuntu:18.04-master-123-7ce6581b
image: buildstream/testsuite-ubuntu:18.04-5da27168-32c47d1c
<<: *tests
tests-python-3.7-stretch:
image: buildstream/testsuite-python:3.7-stretch-a60f0c39
<<: *tests
variables:
# Note that we explicitly specify TOXENV in this case because this
# image has both 3.6 and 3.7 versions. python3.6 cannot be removed because
# some of our base dependencies declare it as their runtime dependency.
TOXENV: py37
overnight-fedora-28-aarch64:
image: buildstream/testsuite-fedora:aarch64-28-master-123-7ce6581b
image: buildstream/testsuite-fedora:aarch64-28-5da27168-32c47d1c
tags:
- aarch64
<<: *tests
......@@ -116,15 +80,20 @@ overnight-fedora-28-aarch64:
except: []
only:
- schedules
before_script:
# grpcio needs to be compiled from source on aarch64 so we additionally
# need a C++ compiler here.
# FIXME: Ideally this would be provided by the base image. This will be
# unblocked by https://gitlab.com/BuildStream/buildstream-docker-images/issues/34
- dnf install -y gcc-c++
tests-unix:
# Use fedora here, to a) run a test on fedora and b) ensure that we
# can get rid of ostree - this is not possible with debian-8
image: buildstream/testsuite-fedora:27-master-123-7ce6581b
image: buildstream/testsuite-fedora:27-5da27168-32c47d1c
<<: *tests
variables:
BST_FORCE_BACKEND: "unix"
COVERAGE_DIR: coverage-unix
script:
......@@ -137,10 +106,9 @@ tests-unix:
# Since the unix platform is required to run as root, no user change required
- ${TEST_COMMAND}
tests-fedora-missing-deps:
# Ensure that tests behave nicely while missing bwrap and ostree
image: buildstream/testsuite-fedora:28-master-123-7ce6581b
image: buildstream/testsuite-fedora:28-5da27168-32c47d1c
<<: *tests
script:
......@@ -155,23 +123,44 @@ tests-fedora-missing-deps:
- ${TEST_COMMAND}
tests-fedora-update-deps:
# Check if the tests pass after updating requirements to their latest
# allowed version.
allow_failure: true
image: buildstream/testsuite-fedora:28-5da27168-32c47d1c
<<: *tests
script:
- useradd -Um buildstream
- chown -R buildstream:buildstream .
- make --always-make --directory requirements
- cat requirements/*.txt
- su buildstream -c "${TEST_COMMAND}"
# Lint separately from testing
lint:
stage: test
before_script:
# Diagnostics
- python3 --version
script:
- tox -e lint
except:
- schedules
# Automatically build documentation for every commit, we want to know
# if building documentation fails even if we're not deploying it.
# Note: We still do not enforce a consistent installation of python3-sphinx,
# as it will significantly grow the backing image.
docs:
stage: test
variables:
BST_FORCE_SESSION_REBUILD: 1
script:
- export BST_SOURCE_CACHE="$(pwd)/cache/integration-cache/sources"
# Currently sphinx_rtd_theme does not support Sphinx >1.8, this breaks search functionality
- pip3 install sphinx==1.7.9
- pip3 install sphinx-click
- pip3 install sphinx_rtd_theme
- cd dist && ./unpack.sh && cd buildstream
- make BST_FORCE_SESSION_REBUILD=1 -C doc
- cd ../..
- mv dist/buildstream/doc/build/html public
- env BST_SOURCE_CACHE="$(pwd)/cache/integration-cache/sources" tox -e docs
- mv doc/build/html public
except:
- schedules
artifacts:
......@@ -182,8 +171,8 @@ docs:
stage: test
variables:
BST_EXT_URL: git+https://gitlab.com/BuildStream/bst-external.git
BST_EXT_REF: 573843768f4d297f85dc3067465b3c7519a8dcc3 # 0.7.0
FD_SDK_REF: 612f66e218445eee2b1a9d7dd27c9caba571612e # freedesktop-sdk-18.08.19-54-g612f66e2
BST_EXT_REF: 0.9.0-0-g63a19e8068bd777bd9cd59b1a9442f9749ea5a85
FD_SDK_REF: freedesktop-sdk-18.08.25-0-g250939d465d6dd7768a215f1fa59c4a3412fc337
before_script:
- |
mkdir -p "${HOME}/.config"
......@@ -191,7 +180,8 @@ docs:
scheduler:
fetchers: 2
EOF
- (cd dist && ./unpack.sh && cd buildstream && pip3 install .)
- pip3 install -r requirements/requirements.txt -r requirements/plugin-requirements.txt
- pip3 install --no-index .
- pip3 install --user -e ${BST_EXT_URL}@${BST_EXT_REF}#egg=bst_ext
- git clone https://gitlab.com/freedesktop-sdk/freedesktop-sdk.git
- git -C freedesktop-sdk checkout ${FD_SDK_REF}
......@@ -274,30 +264,28 @@ coverage:
stage: post
coverage: '/TOTAL +\d+ +\d+ +(\d+\.\d+)%/'
script:
- cd dist && ./unpack.sh && cd buildstream
- pip3 install --no-index .
- mkdir report
- cd report
- cp ../../../coverage-unix/coverage.* .
- cp ../../../coverage-linux/coverage.* .
- ls coverage.*
- coverage combine --rcfile=../.coveragerc -a coverage.*
- coverage report --rcfile=../.coveragerc -m
- cp -a .coverage-reports/ ./coverage-sources
- tox -e coverage
- cp -a .coverage-reports/ ./coverage-report
dependencies:
- tests-debian-9
- tests-fedora-27
- tests-fedora-28
- tests-fedora-missing-deps
- tests-ubuntu-18.04
- tests-unix
- source_dist
except:
- schedules
artifacts:
paths:
- coverage-sources/
- coverage-report/
# Deploy, only for merges which land on master branch.
#
pages:
stage: post
dependencies:
- source_dist
- docs
variables:
ACME_DIR: public/.well-known/acme-challenge
......
......@@ -553,7 +553,7 @@ One problem which arises from this is that we end up having symbols
which are *public* according to the :ref:`rules discussed in the previous section
<contributing_public_and_private>`, but must be hidden away from the
*"Public API Surface"*. For example, BuildStream internal classes need
to invoke methods on the ``Element`` and ``Source`` classes, wheras these
to invoke methods on the ``Element`` and ``Source`` classes, whereas these
methods need to be hidden from the *"Public API Surface"*.
This is where BuildStream deviates from the PEP-8 standard for public
......@@ -631,7 +631,7 @@ An element plugin will derive from Element by importing::
from buildstream import Element
When importing utilities specifically, dont import function names
When importing utilities specifically, don't import function names
from there, instead import the module itself::
from . import utils
......@@ -737,7 +737,7 @@ Abstract methods
~~~~~~~~~~~~~~~~
In BuildStream, an *"Abstract Method"* is a bit of a misnomer and does
not match up to how Python defines abstract methods, we need to seek out
a new nomanclature to refer to these methods.
a new nomenclature to refer to these methods.
In Python, an *"Abstract Method"* is a method which **must** be
implemented by a subclass, whereas all methods in Python can be
......@@ -960,7 +960,7 @@ possible, and avoid any cyclic relationships in modules.
For instance, the ``Source`` objects are owned by ``Element``
objects in the BuildStream data model, and as such the ``Element``
will delegate some activities to the ``Source`` objects in its
possesion. The ``Source`` objects should however never call functions
possession. The ``Source`` objects should however never call functions
on the ``Element`` object, nor should the ``Source`` object itself
have any understanding of what an ``Element`` is.
......@@ -1222,27 +1222,13 @@ For further information about using the reStructuredText with sphinx, please see
Building Docs
~~~~~~~~~~~~~
The documentation build is not integrated into the ``setup.py`` and is
difficult (or impossible) to do so, so there is a little bit of setup
you need to take care of first.
Before you can build the BuildStream documentation yourself, you need
to first install ``sphinx`` along with some additional plugins and dependencies,
using pip or some other mechanism::
# Install sphinx
pip3 install --user sphinx
# Install some sphinx extensions
pip3 install --user sphinx-click
pip3 install --user sphinx_rtd_theme
# Additional optional dependencies required
pip3 install --user arpy
Before you can build the docs, you will need to ensure that you have installed
the required :ref:`build dependencies <contributing_build_deps>` as mentioned
in the testing section above.
To build the documentation, just run the following::
make -C doc
tox -e docs
This will give you a ``doc/build/html`` directory with the html docs which
you can view in your browser locally to test.
......@@ -1260,9 +1246,10 @@ will make the docs build reuse already downloaded sources::
export BST_SOURCE_CACHE=~/.cache/buildstream/sources
To force rebuild session html while building the doc, simply build the docs like this::
To force rebuild session html while building the doc, simply run `tox` with the
``BST_FORCE_SESSION_REBUILD`` environment variable set, like so::
make BST_FORCE_SESSION_REBUILD=1 -C doc
env BST_FORCE_SESSION_REBUILD=1 tox -e docs
Man pages
......@@ -1378,7 +1365,7 @@ Structure of an example
'''''''''''''''''''''''
The :ref:`tutorial <tutorial>` and the :ref:`examples <examples>` sections
of the documentation contain a series of sample projects, each chapter in
the tutoral, or standalone example uses a sample project.
the tutorial, or standalone example uses a sample project.
Here is the structure for adding new examples and tutorial chapters.
......@@ -1468,63 +1455,159 @@ regenerate them locally in order to build the docs.
Testing
-------
BuildStream uses pytest for regression tests and testing out
the behavior of newly added components.
BuildStream uses `tox <https://tox.readthedocs.org/>`_ as a frontend to run the
tests which are implemented using `pytest <https://pytest.org/>`_. We use
pytest for regression tests and testing out the behavior of newly added
components.
The elaborate documentation for pytest can be found here: http://doc.pytest.org/en/latest/contents.html
Don't get lost in the docs if you don't need to; follow existing examples instead.
.. _contributing_build_deps:
Installing build dependencies
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Some of BuildStream's dependencies have non-python build dependencies. When
running tests with ``tox``, you will first need to install these dependencies.
Exact steps to install these will depend on your operating system. Commands
for installing them for some common distributions are listed below.
For Fedora-based systems::
dnf install gcc pkg-config python3-devel cairo-gobject-devel glib2-devel gobject-introspection-devel
For Debian-based systems::
apt install gcc pkg-config python3-dev libcairo2-dev libgirepository1.0-dev
Running tests
~~~~~~~~~~~~~
To run the tests, just type::
To run the tests, simply navigate to the toplevel directory of your BuildStream
checkout and run::
./setup.py test
tox
By default, the test suite will be run against every supported python version
found on your host. If you have multiple python versions installed, you may
want to run tests against only one version and you can do that using the ``-e``
option when running tox::
At the toplevel.
tox -e py37
When debugging a test, it can be desirable to see the stdout
and stderr generated by a test, to do this use the ``--addopts``
function to feed arguments to pytest as such::
If you would like to test and lint at the same time, or if you do have multiple
python versions installed and would like to test against multiple versions, then
we recommend using `detox <https://github.com/tox-dev/detox>`_, just run it with
the same arguments you would give `tox`::
./setup.py test --addopts -s
detox -e lint,py36,py37
Linting is performed separately from testing. In order to run the linting step which
consists of running the ``pycodestyle`` and ``pylint`` tools, run the following::
tox -e lint
.. tip::
The project specific pylint and pycodestyle configurations are stored in the
toplevel buildstream directory in the ``.pylintrc`` file and ``setup.cfg`` files
respectively. These configurations can be interesting to use with IDEs and
other developer tooling.
The output of all failing tests will always be printed in the summary, but
if you want to observe the stdout and stderr generated by a passing test,
you can pass the ``-s`` option to pytest as such::
tox -- -s
.. tip::
The ``-s`` option is `a pytest option <https://docs.pytest.org/latest/usage.html>`_.
Any options specified before the ``--`` separator are consumed by ``tox``,
and any options after the ``--`` separator will be passed along to pytest.
You can always abort on the first failure by running::
./setup.py test --addopts -x
tox -- -x
Similarly, you may also be interested in the ``--last-failed`` and
``--failed-first`` options as per the
`pytest cache <https://docs.pytest.org/en/latest/cache.html>`_ documentation.
If you want to run a specific test or a group of tests, you
can specify a prefix to match. E.g. if you want to run all of
the frontend tests you can do::
./setup.py test --addopts 'tests/frontend/'
tox -- tests/frontend/
Specific tests can be chosen by using the :: delimeter after the test module.
Specific tests can be chosen by using the :: delimiter after the test module.
If you wanted to run the test_build_track test within frontend/buildtrack.py you could do::
./setup.py test --addopts 'tests/frontend/buildtrack.py::test_build_track'
tox -- tests/frontend/buildtrack.py::test_build_track
When running only a few tests, you may find the coverage and timing output
excessive; there are options to trim them. Note that the coverage step will fail.
Here is an example::
tox -- --no-cov --durations=1 tests/frontend/buildtrack.py::test_build_track
We also have a set of slow integration tests that are disabled by
default - you will notice most of them marked with SKIP in the pytest
output. To run them, you can use::
./setup.py test --addopts '--integration'
tox -- --integration
By default, buildstream also runs pylint on all files. Should you want
to run just pylint (these checks are a lot faster), you can do so
with::
In case BuildStream's dependencies were updated since you last ran the
tests, you might see some errors like
``pytest: error: unrecognized arguments: --codestyle``. If this happens, you
will need to force ``tox`` to recreate the test environment(s). To do so, you
can run ``tox`` with the ``-r`` or ``--recreate`` option.
./setup.py test --addopts '-m pylint'
.. note::
By default, we do not allow use of site packages in our ``tox``
configuration to enable running the tests in an isolated environment.
If you need to enable use of site packages for whatever reason, you can
do so by passing the ``--sitepackages`` option to ``tox``. Also, you will
not need to install any of the build dependencies mentioned above if you
use this approach.
.. note::
While using ``tox`` is practical for developers running tests in
more predictable execution environments, it is still possible to
execute the test suite against a specific installation environment
using pytest directly::
./setup.py test
Specific options can be passed to ``pytest`` using the ``--addopts``
option::
./setup.py test --addopts 'tests/frontend/buildtrack.py::test_build_track'
Observing coverage
~~~~~~~~~~~~~~~~~~
Once you have run the tests using `tox` (or `detox`), some coverage reports will
have been left behind.
To view the coverage report of the last test run, simply run::
Alternatively, any IDE plugin that uses pytest should automatically
detect the ``.pylintrc`` in the project's root directory.
tox -e coverage
This will collate any reports from separate python environments that may be
under test before displaying the combined coverage.
Adding tests
~~~~~~~~~~~~
Tests are found in the tests subdirectory, inside of which
there is a separarate directory for each *domain* of tests.
there is a separate directory for each *domain* of tests.
All tests are collected as::
tests/*/*.py
......@@ -1547,23 +1630,50 @@ Tests that run a sandbox should be decorated with::
and use the integration cli helper.
You should first aim to write tests that exercise your changes from the cli.
This is so that the testing is end-to-end, and the changes are guaranteed to
work for the end-user. The cli is considered stable, and so tests written in
terms of it are unlikely to require updating as the internals of the software
change over time.
You must test your changes in an end-to-end fashion. Consider the first end to
be the appropriate user interface, and the other end to be the change you have
made.
The aim for our tests is to make assertions about how you impact and define the
outward user experience. You should be able to exercise all code paths via the
user interface, just as one can test the strength of rivets by sailing dozens
of ocean liners. Keep in mind that your ocean liners could be sailing properly
*because* of a malfunctioning rivet. End-to-end testing will warn you that
fixing the rivet will sink the ships.
The primary user interface is the cli, so that should be the first target 'end'
for testing. Most of the value of BuildStream comes from what you can achieve
with the cli.
We also have what we call a *"Public API Surface"*, as previously mentioned in
:ref:`contributing_documenting_symbols`. You should consider this a secondary
target. This is mainly for advanced users to implement their plugins against.
Note that both of these targets for testing are guaranteed to continue working
in the same way across versions. This means that tests written in terms of them
will be robust to large changes to the code. This important property means that
BuildStream developers can make large refactorings without needing to rewrite
fragile tests.
Another user to consider is the BuildStream developer; therefore internal API
surfaces are also targets for testing. For example, the YAML loading code and
the CasCache. Remember that these surfaces are still just a means to the end of
providing value through the cli and the *"Public API Surface"*.
It may be impractical to sufficiently examine some changes in an end-to-end
fashion. The number of cases to test, and the running time of each test, may be
too high. Such typically low-level things, e.g. parsers, may also be tested
with unit tests; alongside the mandatory end-to-end tests.
It may be impractical to sufficiently examine some changes this way. For
example, the number of cases to test and the running time of each test may be
too high. It may also be difficult to contrive circumstances to cover every
line of the change. If this is the case, next you can consider also writing
unit tests that work more directly on the changes.
It is important to write unit tests that are not fragile, i.e. in such a way
that they do not break due to changes unrelated to what they are meant to test.
For example, if the test relies on a lot of BuildStream internals, a large
refactoring will likely require the test to be rewritten. Pure functions that
only rely on the Python Standard Library are excellent candidates for unit
testing.
It is important to write unit tests in such a way that they do not break due to
changes unrelated to what they are meant to test. For example, if the test
relies on a lot of BuildStream internals, a large refactoring will likely
require the test to be rewritten. Pure functions that only rely on the Python
Standard Library are excellent candidates for unit testing.
Unit tests only make it easier to implement things correctly, end-to-end tests
make it easier to implement the right thing.
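As a purely illustrative sketch (the helper below is hypothetical and not part
of BuildStream), a unit test for such a pure function might look like this::

    import pytest

    def _pretty_size(num_bytes):
        # Hypothetical pure helper relying only on the standard library.
        for unit in ("B", "K", "M", "G", "T"):
            if num_bytes < 1024:
                return "{:.0f}{}".format(num_bytes, unit)
            num_bytes /= 1024
        return "{:.0f}P".format(num_bytes)

    @pytest.mark.parametrize("size,expected", [
        (500, "500B"),
        (2048, "2K"),
    ])
    def test_pretty_size(size, expected):
        assert _pretty_size(size) == expected

Because the function under test has no BuildStream dependencies, the test will
not break when internal APIs are refactored.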
Measuring performance
......@@ -1656,10 +1766,8 @@ obtain profiles::
ForceCommand BST_PROFILE=artifact-receive cd /tmp && bst-artifact-receive --pull-url https://example.com/ /home/artifacts/artifacts
The MANIFEST.in and setup.py
----------------------------
When adding a dependency to BuildStream, it's important to update the setup.py accordingly.
Managing data files
-------------------
When adding data files which need to be discovered at runtime by BuildStream, update setup.py accordingly.
When adding data files for the purpose of docs or tests, or anything that is not covered by
......@@ -1669,3 +1777,23 @@ At any time, running the following command to create a source distribution shoul
creating a tarball which contains everything we want it to include::
./setup.py sdist
Updating BuildStream's Python dependencies
------------------------------------------
BuildStream's Python dependencies are listed in multiple
`requirements files <https://pip.readthedocs.io/en/latest/reference/pip_install/#requirements-file-format>`_
present in the ``requirements`` directory.
All ``.txt`` files in this directory are generated from the corresponding
``.in`` file, and each ``.in`` file represents a set of dependencies. For
example, ``requirements.in`` contains all runtime dependencies of BuildStream.
``requirements.txt`` is generated from it, and contains pinned versions of all
runtime dependencies (including transitive dependencies) of BuildStream.
When adding a new dependency to BuildStream, or updating existing dependencies,
it is important to update the appropriate requirements file accordingly. After
changing the ``.in`` file, run the following to update the matching ``.txt``
file::
make -C requirements
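As a hedged illustration (the package name and version below are hypothetical,
not actual BuildStream dependencies), a loose entry in an ``.in`` file becomes
a pinned entry in the generated ``.txt`` file::

    # requirements/requirements.in (hand-edited)
    somepackage >= 1.0

    # requirements/requirements.txt (generated by `make -C requirements`)
    somepackage==1.2.3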
......@@ -24,6 +24,7 @@ recursive-include doc/sessions *.run
# Tests
recursive-include tests *
include conftest.py
include tox.ini
include .coveragerc
include .pylintrc
......@@ -31,7 +32,12 @@ include .pylintrc
recursive-include buildstream/_protos *.proto
# Requirements files
include dev-requirements.txt
include requirements/requirements.in
include requirements/requirements.txt
include requirements/dev-requirements.in
include requirements/dev-requirements.txt
include requirements/plugin-requirements.in
include requirements/plugin-requirements.txt
# Versioneer
include versioneer.py
......@@ -2,16 +2,28 @@
buildstream 1.3.1
=================
o Added `bst artifact log` subcommand for viewing build logs.
o BREAKING CHANGE: The bst source-bundle command has been removed. The
functionality it provided has been replaced by the `--include-build-scripts`
option of the `bst source-checkout` command. To produce a tarball containing
an element's sources and generated build scripts, you can run
`bst source-checkout --include-build-scripts --tar foo.bst some-file.tar`
o BREAKING CHANGE: The `bst track` and `bst fetch` commands are now obsolete.
Their functionality is provided by `bst source track` and
`bst source fetch` respectively.
o Added new `bst source checkout` command to checkout sources of an element.
o BREAKING CHANGE: Default strip-commands have been removed as they are too
specific. If you are building on Linux, the recommendation is to use the
ones used in the freedesktop-sdk project, for example
o Running commands without elements specified will now attempt to use
the default element defined in the project configuration.
If no default element is defined, all elements in the project will be used.
o All elements must now be suffixed with `.bst`.
Attempting to use an element that does not have the `.bst` extension
will result in a warning.
......@@ -22,6 +34,12 @@ buildstream 1.3.1
make changes to their .bst files if they are expecting these environment
variables to be set.
o BREAKING CHANGE: The 'auto-init' functionality has been removed. This would
offer to create a project in the event that bst was run against a directory
without a project, to be friendly to new users. It has been replaced with
an error message and a hint instead, to avoid bothering folks that just
made a mistake.
o Failed builds are included in the cache as well.
`bst checkout` will provide anything in `%{install-root}`.
A build including cached fails will cause any dependant elements
......@@ -59,8 +77,8 @@ buildstream 1.3.1
instead of just a specially-formatted build-root with a `root` and `scratch`
subdirectory.
o The buildstream.conf file learned new 'prompt.auto-init',
'prompt.really-workspace-close-remove-dir', and
o The buildstream.conf file learned new
'prompt.really-workspace-close-remove-dir' and
'prompt.really-workspace-reset-hard' options. These allow users to suppress
certain confirmation prompts, e.g. double-checking that the user meant to
run the command as typed.
......@@ -75,8 +93,6 @@ buildstream 1.3.1
with cached artifacts, only 'complete' elements can be pushed. If the element
is expected to have a populated build tree then it must be cached before pushing.
o Added new `bst source-checkout` command to checkout sources of an element.
o `bst workspace open` now supports the creation of multiple elements and
allows the user to set a default location for their creation. This has meant
that the new CLI is no longer backwards compatible with buildstream 1.2.
......
......@@ -16,6 +16,9 @@ About
.. image:: https://img.shields.io/pypi/v/BuildStream.svg
:target: https://pypi.org/project/BuildStream
.. image:: https://app.fossa.io/api/projects/git%2Bgitlab.com%2FBuildStream%2Fbuildstream.svg?type=shield
:target: https://app.fossa.io/projects/git%2Bgitlab.com%2FBuildStream%2Fbuildstream?ref=badge_shield
What is BuildStream?
====================
......
......@@ -34,3 +34,8 @@ if "_BST_COMPLETION" not in os.environ:
from .element import Element, ElementError
from .buildelement import BuildElement
from .scriptelement import ScriptElement
# XXX We are exposing a private member here as we expect it to move to a
# separate package soon. See the following discussion for more details:
# https://gitlab.com/BuildStream/buildstream/issues/739#note_124819869
from ._gitsourcebase import _GitSourceBase
......@@ -19,18 +19,16 @@
import multiprocessing
import os
import signal
import string
from collections.abc import Mapping
from ..types import _KeyStrength
from .._exceptions import ArtifactError, CASError, LoadError, LoadErrorReason
from .._message import Message, MessageType
from .. import _signals
from .. import utils
from .. import _yaml
from .types import _KeyStrength
from ._exceptions import ArtifactError, CASError, LoadError, LoadErrorReason
from ._message import Message, MessageType
from . import utils
from . import _yaml
from .cascache import CASRemote, CASRemoteSpec
from ._cas import CASRemote, CASRemoteSpec
CACHE_SIZE_FILE = "cache_size"
......@@ -249,7 +247,7 @@ class ArtifactCache():
# FIXME: Asking the user what to do may be neater
default_conf = os.path.join(os.environ['XDG_CONFIG_HOME'],
'buildstream.conf')
detail = ("There is not enough space to build the given element.\n"
detail = ("There is not enough space to complete the build.\n"
"Please increase the cache-quota in {}."
.format(self.context.config_origin or default_conf))
......@@ -375,20 +373,8 @@ class ArtifactCache():
remotes = {}
q = multiprocessing.Queue()
for remote_spec in remote_specs:
# Use subprocess to avoid creation of gRPC threads in main BuildStream process
# See https://github.com/grpc/grpc/blob/master/doc/fork_support.md for details
p = multiprocessing.Process(target=self.cas.initialize_remote, args=(remote_spec, q))
try:
# Keep SIGINT blocked in the child process
with _signals.blocked([signal.SIGINT], ignore=False):
p.start()
error = q.get()
p.join()
except KeyboardInterrupt:
utils._kill_process_tree(p.pid)
raise
error = CASRemote.check_remote(remote_spec, q)
if error and on_failure:
on_failure(remote_spec.url, error)
......@@ -747,7 +733,7 @@ class ArtifactCache():
"servers are configured as push remotes.")
for remote in push_remotes:
message_digest = self.cas.push_message(remote, message)
message_digest = remote.push_message(message)
return message_digest
......@@ -874,9 +860,7 @@ class ArtifactCache():
"\nValid values are, for example: 800M 10G 1T 50%\n"
.format(str(e))) from e
stat = os.statvfs(artifactdir_volume)
available_space = (stat.f_bsize * stat.f_bavail)
available_space, total_size = self._get_volume_space_info_for(artifactdir_volume)
cache_size = self.get_cache_size()
# Ensure system has enough storage for the cache_quota
......@@ -893,7 +877,7 @@ class ArtifactCache():
"BuildStream requires a minimum cache quota of 2G.")
elif cache_quota > cache_size + available_space: # Check maximum
if '%' in self.context.config_cache_quota:
available = (available_space / (stat.f_blocks * stat.f_bsize)) * 100
available = (available_space / total_size) * 100
available = '{}% of total disk space'.format(round(available, 1))
else:
available = utils._pretty_size(available_space)
......@@ -919,6 +903,20 @@ class ArtifactCache():
self._cache_quota = cache_quota - headroom
self._cache_lower_threshold = self._cache_quota / 2
# _get_volume_space_info_for
#
# Get the available space and total space for the given volume
#
# Args:
# volume: volume for which to get the size
#
# Returns:
# A tuple containing first the available number of bytes on the requested
# volume, then the total number of bytes of the volume.
def _get_volume_space_info_for(self, volume):
stat = os.statvfs(volume)
return stat.f_bsize * stat.f_bavail, stat.f_bsize * stat.f_blocks
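# Illustrative note (a sketch for documentation, not code used by BuildStream):
# the two values returned above are what the quota checks combine in order to
# report available space as a percentage of the volume, e.g.:
#
#   available_space, total_size = self._get_volume_space_info_for(volume)
#   percent_available = (available_space / total_size) * 100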
# _configured_remote_artifact_cache_specs():
#
......
......@@ -17,4 +17,5 @@
# Authors:
# Tristan Van Berkom <tristan.vanberkom@codethink.co.uk>
from .artifactcache import ArtifactCache, ArtifactCacheSpec, CACHE_SIZE_FILE
from .cascache import CASCache
from .casremote import CASRemote, CASRemoteSpec
......@@ -17,83 +17,23 @@
# Authors:
# Jürg Billeter <juerg.billeter@codethink.co.uk>
from collections import namedtuple
import hashlib
import itertools
import io
import os
import stat
import tempfile
import uuid
import contextlib
from urllib.parse import urlparse
import grpc
from .._protos.google.rpc import code_pb2
from .._protos.google.bytestream import bytestream_pb2, bytestream_pb2_grpc
from .._protos.build.bazel.remote.execution.v2 import remote_execution_pb2, remote_execution_pb2_grpc
from .._protos.buildstream.v2 import buildstream_pb2, buildstream_pb2_grpc
from .._protos.build.bazel.remote.execution.v2 import remote_execution_pb2
from .._protos.buildstream.v2 import buildstream_pb2
from .. import utils
from .._exceptions import CASError, LoadError, LoadErrorReason
from .. import _yaml
from .._exceptions import CASCacheError
# The default limit for gRPC messages is 4 MiB.
# Limit payload to 1 MiB to leave sufficient headroom for metadata.
_MAX_PAYLOAD_BYTES = 1024 * 1024
class CASRemoteSpec(namedtuple('CASRemoteSpec', 'url push server_cert client_key client_cert')):
# _new_from_config_node
#
# Creates an CASRemoteSpec() from a YAML loaded node
#
@staticmethod
def _new_from_config_node(spec_node, basedir=None):
_yaml.node_validate(spec_node, ['url', 'push', 'server-cert', 'client-key', 'client-cert'])
url = _yaml.node_get(spec_node, str, 'url')
push = _yaml.node_get(spec_node, bool, 'push', default_value=False)
if not url:
provenance = _yaml.node_get_provenance(spec_node, 'url')
raise LoadError(LoadErrorReason.INVALID_DATA,
"{}: empty artifact cache URL".format(provenance))
server_cert = _yaml.node_get(spec_node, str, 'server-cert', default_value=None)
if server_cert and basedir:
server_cert = os.path.join(basedir, server_cert)
client_key = _yaml.node_get(spec_node, str, 'client-key', default_value=None)
if client_key and basedir:
client_key = os.path.join(basedir, client_key)
client_cert = _yaml.node_get(spec_node, str, 'client-cert', default_value=None)
if client_cert and basedir:
client_cert = os.path.join(basedir, client_cert)
if client_key and not client_cert:
provenance = _yaml.node_get_provenance(spec_node, 'client-key')
raise LoadError(LoadErrorReason.INVALID_DATA,
"{}: 'client-key' was specified without 'client-cert'".format(provenance))
if client_cert and not client_key:
provenance = _yaml.node_get_provenance(spec_node, 'client-cert')
raise LoadError(LoadErrorReason.INVALID_DATA,
"{}: 'client-cert' was specified without 'client-key'".format(provenance))
return CASRemoteSpec(url, push, server_cert, client_key, client_cert)
CASRemoteSpec.__new__.__defaults__ = (None, None, None)
class BlobNotFound(CASError):
def __init__(self, blob, msg):
self.blob = blob
super().__init__(msg)
from .casremote import BlobNotFound, _CASBatchRead, _CASBatchUpdate
# A CASCache manages a CAS repository as specified in the Remote Execution API.
......@@ -118,7 +58,7 @@ class CASCache():
headdir = os.path.join(self.casdir, 'refs', 'heads')
objdir = os.path.join(self.casdir, 'objects')
if not (os.path.isdir(headdir) and os.path.isdir(objdir)):
raise CASError("CAS repository check failed for '{}'".format(self.casdir))
raise CASCacheError("CAS repository check failed for '{}'".format(self.casdir))
# contains():
#
......@@ -167,7 +107,7 @@ class CASCache():
# subdir (str): Optional specific dir to extract
#
# Raises:
# CASError: In cases there was an OSError, or if the ref did not exist.
# CASCacheError: In cases there was an OSError, or if the ref did not exist.
#
# Returns: path to extracted directory
#
......@@ -199,7 +139,7 @@ class CASCache():
# Another process beat us to rename
pass
except OSError as e:
raise CASError("Failed to extract directory for ref '{}': {}".format(ref, e)) from e
raise CASCacheError("Failed to extract directory for ref '{}': {}".format(ref, e)) from e
return originaldest
......@@ -243,29 +183,6 @@ class CASCache():
return modified, removed, added
def initialize_remote(self, remote_spec, q):
try:
remote = CASRemote(remote_spec)
remote.init()
request = buildstream_pb2.StatusRequest()
response = remote.ref_storage.Status(request)
if remote_spec.push and not response.allow_updates:
q.put('CAS server does not allow push')
else:
# No error
q.put(None)
except grpc.RpcError as e:
# str(e) is too verbose for errors reported to the user
q.put(e.details())
except Exception as e: # pylint: disable=broad-except
# Whatever happens, we need to return it to the calling process
#
q.put(str(e))
# pull():
#
# Pull a ref from a remote repository.
......@@ -284,7 +201,7 @@ class CASCache():
try:
remote.init()
request = buildstream_pb2.GetReferenceRequest()
request = buildstream_pb2.GetReferenceRequest(instance_name=remote.spec.instance_name)
request.key = ref
response = remote.ref_storage.GetReference(request)
......@@ -304,7 +221,7 @@ class CASCache():
return True
except grpc.RpcError as e:
if e.code() != grpc.StatusCode.NOT_FOUND:
raise CASError("Failed to pull ref {}: {}".format(ref, e)) from e
raise CASCacheError("Failed to pull ref {}: {}".format(ref, e)) from e
else:
return False
except BlobNotFound as e:
......@@ -358,7 +275,7 @@ class CASCache():
# (bool): True if any remote was updated, False if no pushes were required
#
# Raises:
# (CASError): if there was an error
# (CASCacheError): if there was an error
#
def push(self, refs, remote):
skipped_remote = True
......@@ -369,7 +286,7 @@ class CASCache():
# Check whether ref is already on the server in which case
# there is no need to push the ref
try:
request = buildstream_pb2.GetReferenceRequest()
request = buildstream_pb2.GetReferenceRequest(instance_name=remote.spec.instance_name)
request.key = ref
response = remote.ref_storage.GetReference(request)
......@@ -384,7 +301,7 @@ class CASCache():
self._send_directory(remote, tree)
request = buildstream_pb2.UpdateReferenceRequest()
request = buildstream_pb2.UpdateReferenceRequest(instance_name=remote.spec.instance_name)
request.keys.append(ref)
request.digest.hash = tree.hash
request.digest.size_bytes = tree.size_bytes
......@@ -393,7 +310,7 @@ class CASCache():
skipped_remote = False
except grpc.RpcError as e:
if e.code() != grpc.StatusCode.RESOURCE_EXHAUSTED:
raise CASError("Failed to push ref {}: {}".format(refs, e), temporary=True) from e
raise CASCacheError("Failed to push ref {}: {}".format(refs, e), temporary=True) from e
return not skipped_remote
......@@ -406,57 +323,13 @@ class CASCache():
# directory (Directory): A virtual directory object to push.
#
# Raises:
# (CASError): if there was an error
# (CASCacheError): if there was an error
#
def push_directory(self, remote, directory):
remote.init()
self._send_directory(remote, directory.ref)
# push_message():
#
# Push the given protobuf message to a remote.
#
# Args:
# remote (CASRemote): The remote to push to
# message (Message): A protobuf message to push.
#
# Raises:
# (CASError): if there was an error
#
def push_message(self, remote, message):
message_buffer = message.SerializeToString()
message_digest = utils._message_digest(message_buffer)
remote.init()
with io.BytesIO(message_buffer) as b:
self._send_blob(remote, message_digest, b)
return message_digest
# verify_digest_on_remote():
#
# Check whether the object is already on the server in which case
# there is no need to upload it.
#
# Args:
# remote (CASRemote): The remote to check
# digest (Digest): The object digest.
#
def verify_digest_on_remote(self, remote, digest):
remote.init()
request = remote_execution_pb2.FindMissingBlobsRequest()
request.blob_digests.extend([digest])
response = remote.cas.FindMissingBlobs(request)
if digest in response.missing_blob_digests:
return False
return True
# objpath():
#
# Return the path of an object based on its digest.
......@@ -529,7 +402,7 @@ class CASCache():
pass
except OSError as e:
raise CASError("Failed to hash object: {}".format(e)) from e
raise CASCacheError("Failed to hash object: {}".format(e)) from e
return digest
......@@ -570,7 +443,7 @@ class CASCache():
return digest
except FileNotFoundError as e:
raise CASError("Attempt to access unavailable ref: {}".format(e)) from e
raise CASCacheError("Attempt to access unavailable ref: {}".format(e)) from e
# update_mtime()
#
......@@ -583,7 +456,7 @@ class CASCache():
try:
os.utime(self._refpath(ref))
except FileNotFoundError as e:
raise CASError("Attempt to access unavailable ref: {}".format(e)) from e
raise CASCacheError("Attempt to access unavailable ref: {}".format(e)) from e
# calculate_cache_size()
#
......@@ -674,7 +547,7 @@ class CASCache():
# Remove cache ref
refpath = self._refpath(ref)
if not os.path.exists(refpath):
raise CASError("Could not find ref '{}'".format(ref))
raise CASCacheError("Could not find ref '{}'".format(ref))
os.unlink(refpath)
......@@ -790,7 +663,7 @@ class CASCache():
# The process serving the socket can't be cached anyway
pass
else:
raise CASError("Unsupported file type for {}".format(full_path))
raise CASCacheError("Unsupported file type for {}".format(full_path))
return self.add_object(digest=dir_digest,
buffer=directory.SerializeToString())
......@@ -809,7 +682,7 @@ class CASCache():
if dirnode.name == name:
return dirnode.digest
raise CASError("Subdirectory {} not found".format(name))
raise CASCacheError("Subdirectory {} not found".format(name))
def _diff_trees(self, tree_a, tree_b, *, added, removed, modified, path=""):
dir_a = remote_execution_pb2.Directory()
......@@ -907,17 +780,6 @@ class CASCache():
for dirnode in directory.directories:
yield from self._required_blobs(dirnode.digest)
def _fetch_blob(self, remote, digest, stream):
resource_name = '/'.join(['blobs', digest.hash, str(digest.size_bytes)])
request = bytestream_pb2.ReadRequest()
request.resource_name = resource_name
request.read_offset = 0
for response in remote.bytestream.Read(request):
stream.write(response.data)
stream.flush()
assert digest.size_bytes == os.fstat(stream.fileno()).st_size
# _ensure_blob():
#
# Fetch and add blob if it's not already local.
......@@ -936,7 +798,7 @@ class CASCache():
return objpath
with tempfile.NamedTemporaryFile(dir=self.tmpdir) as f:
self._fetch_blob(remote, digest, f)
remote._fetch_blob(digest, f)
added_digest = self.add_object(path=f.name, link_directly=True)
assert added_digest.hash == digest.hash
......@@ -1043,7 +905,7 @@ class CASCache():
def _fetch_tree(self, remote, digest):
# download but do not store the Tree object
with tempfile.NamedTemporaryFile(dir=self.tmpdir) as out:
self._fetch_blob(remote, digest, out)
remote._fetch_blob(digest, out)
tree = remote_execution_pb2.Tree()
......@@ -1063,41 +925,13 @@ class CASCache():
return dirdigest
def _send_blob(self, remote, digest, stream, u_uid=uuid.uuid4()):
resource_name = '/'.join(['uploads', str(u_uid), 'blobs',
digest.hash, str(digest.size_bytes)])
def request_stream(resname, instream):
offset = 0
finished = False
remaining = digest.size_bytes
while not finished:
chunk_size = min(remaining, _MAX_PAYLOAD_BYTES)
remaining -= chunk_size
request = bytestream_pb2.WriteRequest()
request.write_offset = offset
# max. _MAX_PAYLOAD_BYTES chunks
request.data = instream.read(chunk_size)
request.resource_name = resname
request.finish_write = remaining <= 0
yield request
offset += chunk_size
finished = request.finish_write
response = remote.bytestream.Write(request_stream(resource_name, stream))
assert response.committed_size == digest.size_bytes
def _send_directory(self, remote, digest, u_uid=uuid.uuid4()):
required_blobs = self._required_blobs(digest)
missing_blobs = dict()
# Limit size of FindMissingBlobs request
for required_blobs_group in _grouper(required_blobs, 512):
request = remote_execution_pb2.FindMissingBlobsRequest()
request = remote_execution_pb2.FindMissingBlobsRequest(instance_name=remote.spec.instance_name)
for required_digest in required_blobs_group:
d = request.blob_digests.add()
......@@ -1124,7 +958,7 @@ class CASCache():
if (digest.size_bytes >= remote.max_batch_total_size_bytes or
not remote.batch_update_supported):
# Too large for batch request, upload in independent request.
self._send_blob(remote, digest, f, u_uid=u_uid)
remote._send_blob(digest, f, u_uid=u_uid)
else:
if not batch.add(digest, f):
# Not enough space left in batch request.
......@@ -1137,183 +971,6 @@ class CASCache():
batch.send()
# Represents a single remote CAS cache.
#
class CASRemote():
def __init__(self, spec):
self.spec = spec
self._initialized = False
self.channel = None
self.bytestream = None
self.cas = None
self.ref_storage = None
self.batch_update_supported = None
self.batch_read_supported = None
self.capabilities = None
self.max_batch_total_size_bytes = None
def init(self):
if not self._initialized:
url = urlparse(self.spec.url)
if url.scheme == 'http':
port = url.port or 80
self.channel = grpc.insecure_channel('{}:{}'.format(url.hostname, port))
elif url.scheme == 'https':
port = url.port or 443
if self.spec.server_cert:
with open(self.spec.server_cert, 'rb') as f:
server_cert_bytes = f.read()
else:
server_cert_bytes = None
if self.spec.client_key:
with open(self.spec.client_key, 'rb') as f:
client_key_bytes = f.read()
else:
client_key_bytes = None
if self.spec.client_cert:
with open(self.spec.client_cert, 'rb') as f:
client_cert_bytes = f.read()
else:
client_cert_bytes = None
credentials = grpc.ssl_channel_credentials(root_certificates=server_cert_bytes,
private_key=client_key_bytes,
certificate_chain=client_cert_bytes)
self.channel = grpc.secure_channel('{}:{}'.format(url.hostname, port), credentials)
else:
raise CASError("Unsupported URL: {}".format(self.spec.url))
self.bytestream = bytestream_pb2_grpc.ByteStreamStub(self.channel)
self.cas = remote_execution_pb2_grpc.ContentAddressableStorageStub(self.channel)
self.capabilities = remote_execution_pb2_grpc.CapabilitiesStub(self.channel)
self.ref_storage = buildstream_pb2_grpc.ReferenceStorageStub(self.channel)
self.max_batch_total_size_bytes = _MAX_PAYLOAD_BYTES
try:
request = remote_execution_pb2.GetCapabilitiesRequest()
response = self.capabilities.GetCapabilities(request)
server_max_batch_total_size_bytes = response.cache_capabilities.max_batch_total_size_bytes
if 0 < server_max_batch_total_size_bytes < self.max_batch_total_size_bytes:
self.max_batch_total_size_bytes = server_max_batch_total_size_bytes
except grpc.RpcError as e:
# Simply use the defaults for servers that don't implement GetCapabilities()
if e.code() != grpc.StatusCode.UNIMPLEMENTED:
raise
# Check whether the server supports BatchReadBlobs()
self.batch_read_supported = False
try:
request = remote_execution_pb2.BatchReadBlobsRequest()
response = self.cas.BatchReadBlobs(request)
self.batch_read_supported = True
except grpc.RpcError as e:
if e.code() != grpc.StatusCode.UNIMPLEMENTED:
raise
# Check whether the server supports BatchUpdateBlobs()
self.batch_update_supported = False
try:
request = remote_execution_pb2.BatchUpdateBlobsRequest()
response = self.cas.BatchUpdateBlobs(request)
self.batch_update_supported = True
except grpc.RpcError as e:
if (e.code() != grpc.StatusCode.UNIMPLEMENTED and
e.code() != grpc.StatusCode.PERMISSION_DENIED):
raise
self._initialized = True
# Represents a batch of blobs queued for fetching.
#
class _CASBatchRead():
def __init__(self, remote):
self._remote = remote
self._max_total_size_bytes = remote.max_batch_total_size_bytes
self._request = remote_execution_pb2.BatchReadBlobsRequest()
self._size = 0
self._sent = False
def add(self, digest):
assert not self._sent
new_batch_size = self._size + digest.size_bytes
if new_batch_size > self._max_total_size_bytes:
# Not enough space left in current batch
return False
request_digest = self._request.digests.add()
request_digest.hash = digest.hash
request_digest.size_bytes = digest.size_bytes
self._size = new_batch_size
return True
def send(self):
assert not self._sent
self._sent = True
if not self._request.digests:
return
batch_response = self._remote.cas.BatchReadBlobs(self._request)
for response in batch_response.responses:
if response.status.code == code_pb2.NOT_FOUND:
raise BlobNotFound(response.digest.hash, "Failed to download blob {}: {}".format(
response.digest.hash, response.status.code))
if response.status.code != code_pb2.OK:
raise CASError("Failed to download blob {}: {}".format(
response.digest.hash, response.status.code))
if response.digest.size_bytes != len(response.data):
raise CASError("Failed to download blob {}: expected {} bytes, received {} bytes".format(
response.digest.hash, response.digest.size_bytes, len(response.data)))
yield (response.digest, response.data)
# Represents a batch of blobs queued for upload.
#
class _CASBatchUpdate():
def __init__(self, remote):
self._remote = remote
self._max_total_size_bytes = remote.max_batch_total_size_bytes
self._request = remote_execution_pb2.BatchUpdateBlobsRequest()
self._size = 0
self._sent = False
def add(self, digest, stream):
assert not self._sent
new_batch_size = self._size + digest.size_bytes
if new_batch_size > self._max_total_size_bytes:
# Not enough space left in current batch
return False
blob_request = self._request.requests.add()
blob_request.digest.hash = digest.hash
blob_request.digest.size_bytes = digest.size_bytes
blob_request.data = stream.read(digest.size_bytes)
self._size = new_batch_size
return True
def send(self):
assert not self._sent
self._sent = True
if not self._request.requests:
return
batch_response = self._remote.cas.BatchUpdateBlobs(self._request)
for response in batch_response.responses:
if response.status.code != code_pb2.OK:
raise CASError("Failed to upload blob {}: {}".format(
response.digest.hash, response.status.code))
def _grouper(iterable, n):
while True:
try:
......
from collections import namedtuple
import io
import os
import multiprocessing
import signal
from urllib.parse import urlparse
import uuid
import grpc
from .. import _yaml
from .._protos.google.rpc import code_pb2
from .._protos.google.bytestream import bytestream_pb2, bytestream_pb2_grpc
from .._protos.build.bazel.remote.execution.v2 import remote_execution_pb2, remote_execution_pb2_grpc
from .._protos.buildstream.v2 import buildstream_pb2, buildstream_pb2_grpc
from .._exceptions import CASRemoteError, LoadError, LoadErrorReason
from .. import _signals
from .. import utils
# The default limit for gRPC messages is 4 MiB.
# Limit payload to 1 MiB to leave sufficient headroom for metadata.
_MAX_PAYLOAD_BYTES = 1024 * 1024
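As a rough illustration of what this cap implies for uploads (a hypothetical helper, not part of the module): a blob larger than the cap has to be streamed as several ByteStream write chunks.
import math

# Hypothetical helper: how many WriteRequest chunks a blob of the given size
# needs when each chunk carries at most `payload_cap` bytes of data.
def num_write_chunks(blob_size_bytes, payload_cap=_MAX_PAYLOAD_BYTES):
    # Even an empty blob needs one (finishing) write request.
    return max(1, math.ceil(blob_size_bytes / payload_cap))

# A 5 MiB blob is streamed as five 1 MiB chunks.
assert num_write_chunks(5 * 1024 * 1024) == 5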
class CASRemoteSpec(namedtuple('CASRemoteSpec', 'url push server_cert client_key client_cert instance_name')):
# _new_from_config_node
#
# Creates an CASRemoteSpec() from a YAML loaded node
#
@staticmethod
def _new_from_config_node(spec_node, basedir=None):
_yaml.node_validate(spec_node, ['url', 'push', 'server-cert', 'client-key', 'client-cert', 'instance_name'])
url = _yaml.node_get(spec_node, str, 'url')
push = _yaml.node_get(spec_node, bool, 'push', default_value=False)
if not url:
provenance = _yaml.node_get_provenance(spec_node, 'url')
raise LoadError(LoadErrorReason.INVALID_DATA,
"{}: empty artifact cache URL".format(provenance))
instance_name = _yaml.node_get(spec_node, str, 'instance_name', default_value=None)
server_cert = _yaml.node_get(spec_node, str, 'server-cert', default_value=None)
if server_cert and basedir:
server_cert = os.path.join(basedir, server_cert)
client_key = _yaml.node_get(spec_node, str, 'client-key', default_value=None)
if client_key and basedir:
client_key = os.path.join(basedir, client_key)
client_cert = _yaml.node_get(spec_node, str, 'client-cert', default_value=None)
if client_cert and basedir:
client_cert = os.path.join(basedir, client_cert)
if client_key and not client_cert:
provenance = _yaml.node_get_provenance(spec_node, 'client-key')
raise LoadError(LoadErrorReason.INVALID_DATA,
"{}: 'client-key' was specified without 'client-cert'".format(provenance))
if client_cert and not client_key:
provenance = _yaml.node_get_provenance(spec_node, 'client-cert')
raise LoadError(LoadErrorReason.INVALID_DATA,
"{}: 'client-cert' was specified without 'client-key'".format(provenance))
return CASRemoteSpec(url, push, server_cert, client_key, client_cert, instance_name)
CASRemoteSpec.__new__.__defaults__ = (None, None, None, None)
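For illustration, the spec for a push-enabled remote with no TLS client credentials could be built directly (the hostname is hypothetical); the trailing fields pick up the None defaults set above.
example_spec = CASRemoteSpec(url='https://cache.example.com:11002', push=True,
                             server_cert=None, client_key=None, client_cert=None,
                             instance_name=None)
assert example_spec.push and example_spec.client_cert is None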
class BlobNotFound(CASRemoteError):
def __init__(self, blob, msg):
self.blob = blob
super().__init__(msg)
# Represents a single remote CAS cache.
#
class CASRemote():
def __init__(self, spec):
self.spec = spec
self._initialized = False
self.channel = None
self.bytestream = None
self.cas = None
self.ref_storage = None
self.batch_update_supported = None
self.batch_read_supported = None
self.capabilities = None
self.max_batch_total_size_bytes = None
def init(self):
if not self._initialized:
url = urlparse(self.spec.url)
if url.scheme == 'http':
port = url.port or 80
self.channel = grpc.insecure_channel('{}:{}'.format(url.hostname, port))
elif url.scheme == 'https':
port = url.port or 443
if self.spec.server_cert:
with open(self.spec.server_cert, 'rb') as f:
server_cert_bytes = f.read()
else:
server_cert_bytes = None
if self.spec.client_key:
with open(self.spec.client_key, 'rb') as f:
client_key_bytes = f.read()
else:
client_key_bytes = None
if self.spec.client_cert:
with open(self.spec.client_cert, 'rb') as f:
client_cert_bytes = f.read()
else:
client_cert_bytes = None
credentials = grpc.ssl_channel_credentials(root_certificates=server_cert_bytes,
private_key=client_key_bytes,
certificate_chain=client_cert_bytes)
self.channel = grpc.secure_channel('{}:{}'.format(url.hostname, port), credentials)
else:
raise CASRemoteError("Unsupported URL: {}".format(self.spec.url))
self.bytestream = bytestream_pb2_grpc.ByteStreamStub(self.channel)
self.cas = remote_execution_pb2_grpc.ContentAddressableStorageStub(self.channel)
self.capabilities = remote_execution_pb2_grpc.CapabilitiesStub(self.channel)
self.ref_storage = buildstream_pb2_grpc.ReferenceStorageStub(self.channel)
self.max_batch_total_size_bytes = _MAX_PAYLOAD_BYTES
try:
request = remote_execution_pb2.GetCapabilitiesRequest()
response = self.capabilities.GetCapabilities(request)
server_max_batch_total_size_bytes = response.cache_capabilities.max_batch_total_size_bytes
if 0 < server_max_batch_total_size_bytes < self.max_batch_total_size_bytes:
self.max_batch_total_size_bytes = server_max_batch_total_size_bytes
except grpc.RpcError as e:
# Simply use the defaults for servers that don't implement GetCapabilities()
if e.code() != grpc.StatusCode.UNIMPLEMENTED:
raise
# Check whether the server supports BatchReadBlobs()
self.batch_read_supported = False
try:
request = remote_execution_pb2.BatchReadBlobsRequest()
response = self.cas.BatchReadBlobs(request)
self.batch_read_supported = True
except grpc.RpcError as e:
if e.code() != grpc.StatusCode.UNIMPLEMENTED:
raise
# Check whether the server supports BatchUpdateBlobs()
self.batch_update_supported = False
try:
request = remote_execution_pb2.BatchUpdateBlobsRequest()
response = self.cas.BatchUpdateBlobs(request)
self.batch_update_supported = True
except grpc.RpcError as e:
if (e.code() != grpc.StatusCode.UNIMPLEMENTED and
e.code() != grpc.StatusCode.PERMISSION_DENIED):
raise
self._initialized = True
# check_remote
#
# Used when checking whether remote_specs work in the BuildStream main
# thread; runs the check in a separate process to avoid creating gRPC
# threads in the main BuildStream process
# See https://github.com/grpc/grpc/blob/master/doc/fork_support.md for details
@classmethod
def check_remote(cls, remote_spec, q):
def __check_remote():
try:
remote = cls(remote_spec)
remote.init()
request = buildstream_pb2.StatusRequest()
response = remote.ref_storage.Status(request)
if remote_spec.push and not response.allow_updates:
q.put('CAS server does not allow push')
else:
# No error
q.put(None)
except grpc.RpcError as e:
# str(e) is too verbose for errors reported to the user
q.put(e.details())
except Exception as e: # pylint: disable=broad-except
# Whatever happens, we need to return it to the calling process
#
q.put(str(e))
p = multiprocessing.Process(target=__check_remote)
try:
# Keep SIGINT blocked in the child process
with _signals.blocked([signal.SIGINT], ignore=False):
p.start()
error = q.get()
p.join()
except KeyboardInterrupt:
utils._kill_process_tree(p.pid)
raise
return error
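A minimal caller-side sketch (reusing the hypothetical example_spec from above): the queue carries back either None on success or an error string from the child process.
q = multiprocessing.Queue()
error = CASRemote.check_remote(example_spec, q)
if error is not None:
    print("Remote is not usable: {}".format(error))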
# verify_digest_on_remote():
#
# Check whether the object is already on the server in which case
# there is no need to upload it.
#
# Args:
# digest (Digest): The object digest.
#
def verify_digest_on_remote(self, digest):
self.init()
request = remote_execution_pb2.FindMissingBlobsRequest()
request.blob_digests.extend([digest])
response = self.cas.FindMissingBlobs(request)
if digest in response.missing_blob_digests:
return False
return True
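A short sketch of the skip-upload pattern this enables; remote, digest and object_path are assumed to exist in the caller's scope.
# Only stream the blob when the server reports it as missing.
if not remote.verify_digest_on_remote(digest):
    with open(object_path, 'rb') as f:
        remote._send_blob(digest, f)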
# push_message():
#
# Push the given protobuf message to a remote.
#
# Args:
# message (Message): A protobuf message to push.
#
# Raises:
# (CASRemoteError): if there was an error
#
def push_message(self, message):
message_buffer = message.SerializeToString()
message_digest = utils._message_digest(message_buffer)
self.init()
with io.BytesIO(message_buffer) as b:
self._send_blob(message_digest, b)
return message_digest
################################################
# Local Private Methods #
################################################
def _fetch_blob(self, digest, stream):
resource_name = '/'.join(['blobs', digest.hash, str(digest.size_bytes)])
request = bytestream_pb2.ReadRequest()
request.resource_name = resource_name
request.read_offset = 0
for response in self.bytestream.Read(request):
stream.write(response.data)
stream.flush()
assert digest.size_bytes == os.fstat(stream.fileno()).st_size
def _send_blob(self, digest, stream, u_uid=uuid.uuid4()):
resource_name = '/'.join(['uploads', str(u_uid), 'blobs',
digest.hash, str(digest.size_bytes)])
def request_stream(resname, instream):
offset = 0
finished = False
remaining = digest.size_bytes
while not finished:
chunk_size = min(remaining, _MAX_PAYLOAD_BYTES)
remaining -= chunk_size
request = bytestream_pb2.WriteRequest()
request.write_offset = offset
# max. _MAX_PAYLOAD_BYTES chunks
request.data = instream.read(chunk_size)
request.resource_name = resname
request.finish_write = remaining <= 0
yield request
offset += chunk_size
finished = request.finish_write
response = self.bytestream.Write(request_stream(resource_name, stream))
assert response.committed_size == digest.size_bytes
# Represents a batch of blobs queued for fetching.
#
class _CASBatchRead():
def __init__(self, remote):
self._remote = remote
self._max_total_size_bytes = remote.max_batch_total_size_bytes
self._request = remote_execution_pb2.BatchReadBlobsRequest()
self._size = 0
self._sent = False
def add(self, digest):
assert not self._sent
new_batch_size = self._size + digest.size_bytes
if new_batch_size > self._max_total_size_bytes:
# Not enough space left in current batch
return False
request_digest = self._request.digests.add()
request_digest.hash = digest.hash
request_digest.size_bytes = digest.size_bytes
self._size = new_batch_size
return True
def send(self):
assert not self._sent
self._sent = True
if not self._request.digests:
return
batch_response = self._remote.cas.BatchReadBlobs(self._request)
for response in batch_response.responses:
if response.status.code == code_pb2.NOT_FOUND:
raise BlobNotFound(response.digest.hash, "Failed to download blob {}: {}".format(
response.digest.hash, response.status.code))
if response.status.code != code_pb2.OK:
raise CASRemoteError("Failed to download blob {}: {}".format(
response.digest.hash, response.status.code))
if response.digest.size_bytes != len(response.data):
raise CASRemoteError("Failed to download blob {}: expected {} bytes, received {} bytes".format(
response.digest.hash, response.digest.size_bytes, len(response.data)))
yield (response.digest, response.data)
# Represents a batch of blobs queued for upload.
#
class _CASBatchUpdate():
def __init__(self, remote):
self._remote = remote
self._max_total_size_bytes = remote.max_batch_total_size_bytes
self._request = remote_execution_pb2.BatchUpdateBlobsRequest()
self._size = 0
self._sent = False
def add(self, digest, stream):
assert not self._sent
new_batch_size = self._size + digest.size_bytes
if new_batch_size > self._max_total_size_bytes:
# Not enough space left in current batch
return False
blob_request = self._request.requests.add()
blob_request.digest.hash = digest.hash
blob_request.digest.size_bytes = digest.size_bytes
blob_request.data = stream.read(digest.size_bytes)
self._size = new_batch_size
return True
def send(self):
assert not self._sent
self._sent = True
if not self._request.requests:
return
batch_response = self._remote.cas.BatchUpdateBlobs(self._request)
for response in batch_response.responses:
if response.status.code != code_pb2.OK:
raise CASRemoteError("Failed to upload blob {}: {}".format(
response.digest.hash, response.status.code))
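The intended calling pattern for these batch helpers, mirroring the cache code above, is roughly: add() blobs until the batch reports it is full, send() it, then start a new batch. A hedged sketch, where blobs_to_upload is a hypothetical iterable of (digest, stream) pairs and remote is an initialized CASRemote:
batch = _CASBatchUpdate(remote)
for digest, stream in blobs_to_upload:
    if digest.size_bytes > remote.max_batch_total_size_bytes:
        # Too large for a batch request; stream it individually instead.
        remote._send_blob(digest, stream)
        continue
    if not batch.add(digest, stream):
        # Not enough space left in the current batch request.
        batch.send()
        batch = _CASBatchUpdate(remote)
        batch.add(digest, stream)
batch.send()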
......@@ -27,8 +27,8 @@ import uuid
import errno
import threading
import click
import grpc
import click
from .._protos.build.bazel.remote.execution.v2 import remote_execution_pb2, remote_execution_pb2_grpc
from .._protos.google.bytestream import bytestream_pb2, bytestream_pb2_grpc
......
......@@ -31,9 +31,10 @@ from ._exceptions import LoadError, LoadErrorReason, BstError
from ._message import Message, MessageType
from ._profile import Topics, profile_start, profile_end
from ._artifactcache import ArtifactCache
from ._artifactcache.cascache import CASCache
from ._workspaces import Workspaces, WorkspaceProjectCache, WORKSPACE_PROJECT_FILE
from ._cas import CASCache
from ._workspaces import Workspaces, WorkspaceProjectCache
from .plugin import _plugin_lookup
from .sandbox import SandboxRemote
# Context()
......@@ -72,6 +73,9 @@ class Context():
# The locations from which to push and pull prebuilt artifacts
self.artifact_cache_specs = None
# The global remote execution configuration
self.remote_execution_specs = None
# The directory to store build logs
self.logdir = None
......@@ -117,10 +121,6 @@ class Context():
# Whether or not to attempt to pull build trees globally
self.pull_buildtrees = None
# Boolean, whether to offer to create a project for the user, if we are
# invoked outside of a directory where we can resolve the project.
self.prompt_auto_init = None
# Boolean, whether we double-check with the user that they meant to
# remove a workspace directory.
self.prompt_workspace_close_remove_dir = None
......@@ -191,7 +191,7 @@ class Context():
_yaml.node_validate(defaults, [
'sourcedir', 'builddir', 'artifactdir', 'logdir',
'scheduler', 'artifacts', 'logging', 'projects',
'cache', 'prompt', 'workspacedir',
'cache', 'prompt', 'workspacedir', 'remote-execution'
])
for directory in ['sourcedir', 'builddir', 'artifactdir', 'logdir', 'workspacedir']:
......@@ -216,6 +216,8 @@ class Context():
# Load artifact share configuration
self.artifact_cache_specs = ArtifactCache.specs_from_config_node(defaults)
self.remote_execution_specs = SandboxRemote.specs_from_config_node(defaults)
# Load pull build trees configuration
self.pull_buildtrees = _yaml.node_get(cache, bool, 'pull-buildtrees')
......@@ -258,12 +260,10 @@ class Context():
prompt = _yaml.node_get(
defaults, Mapping, 'prompt')
_yaml.node_validate(prompt, [
'auto-init', 'really-workspace-close-remove-dir',
'really-workspace-close-remove-dir',
'really-workspace-close-project-inaccessible',
'really-workspace-reset-hard',
])
self.prompt_auto_init = _node_get_option_str(
prompt, 'auto-init', ['ask', 'no']) == 'ask'
self.prompt_workspace_close_remove_dir = _node_get_option_str(
prompt, 'really-workspace-close-remove-dir', ['ask', 'yes']) == 'ask'
self.prompt_workspace_close_project_inaccessible = _node_get_option_str(
......@@ -277,7 +277,8 @@ class Context():
# Shallow validation of overrides, parts of buildstream which rely
# on the overrides are expected to validate elsewhere.
for _, overrides in _yaml.node_items(self._project_overrides):
_yaml.node_validate(overrides, ['artifacts', 'options', 'strict', 'default-mirror'])
_yaml.node_validate(overrides, ['artifacts', 'options', 'strict', 'default-mirror',
'remote-execution'])
profile_end(Topics.LOAD_CONTEXT, 'load')
......@@ -316,11 +317,18 @@ class Context():
# invoked with as opposed to a junctioned subproject.
#
# Returns:
# (list): The list of projects
# (Project): The Project object
#
def get_toplevel_project(self):
return self._projects[0]
# get_workspaces():
#
# Return a Workspaces object containing a list of workspaces.
#
# Returns:
# (Workspaces): The Workspaces object
#
def get_workspaces(self):
return self._workspaces
......@@ -649,20 +657,6 @@ class Context():
self._cascache = CASCache(self.artifactdir)
return self._cascache
# guess_element()
#
# Attempts to interpret which element the user intended to run commands on
#
# Returns:
# (str) The name of the element, or None if no element can be guessed
def guess_element(self):
workspace_project_dir, _ = utils._search_upward_for_files(self._directory, [WORKSPACE_PROJECT_FILE])
if workspace_project_dir:
workspace_project = self._workspace_project_cache.get(workspace_project_dir)
return workspace_project.get_default_element()
else:
return None
# _node_get_option_str()
#
......
......@@ -262,8 +262,8 @@ class PlatformError(BstError):
# Raised when errors are encountered by the sandbox implementation
#
class SandboxError(BstError):
def __init__(self, message, reason=None):
super().__init__(message, domain=ErrorDomain.SANDBOX, reason=reason)
def __init__(self, message, detail=None, reason=None):
super().__init__(message, detail=detail, domain=ErrorDomain.SANDBOX, reason=reason)
# ArtifactError
......@@ -284,6 +284,21 @@ class CASError(BstError):
super().__init__(message, detail=detail, domain=ErrorDomain.CAS, reason=reason, temporary=True)
# CASRemoteError
#
# Raised when errors are encountered in the remote CAS
class CASRemoteError(CASError):
pass
# CASCacheError
#
# Raised when errors are encountered in the local CASCache
#
class CASCacheError(CASError):
pass
# PipelineError
#
# Raised from pipeline operations
......
......@@ -38,7 +38,7 @@ from .._message import Message, MessageType, unconditional_messages
from .._stream import Stream
from .._versions import BST_FORMAT_VERSION
from .. import _yaml
from .._scheduler import ElementJob
from .._scheduler import ElementJob, JobStatus
# Import frontend assets
from . import Profile, LogLine, Status
......@@ -219,13 +219,13 @@ class App():
default_mirror=self._main_options.get('default_mirror'))
except LoadError as e:
# Let's automatically start a `bst init` session in this case
if e.reason == LoadErrorReason.MISSING_PROJECT_CONF and self.interactive:
click.echo("A project was not detected in the directory: {}".format(directory), err=True)
if self.context.prompt_auto_init:
# Help users that are new to BuildStream by suggesting 'init'.
# We don't want to slow down users that just made a mistake, so
# don't stop them with an offer to create a project for them.
if e.reason == LoadErrorReason.MISSING_PROJECT_CONF:
click.echo("No project found. You can create a new project like so:", err=True)
click.echo("", err=True)
if click.confirm("Would you like to create a new project here?"):
self.init_project(None)
click.echo(" bst init", err=True)
self._error_exit(e, "Error loading project")
......@@ -515,13 +515,13 @@ class App():
self._status.add_job(job)
self._maybe_render_status()
def _job_completed(self, job, success):
def _job_completed(self, job, status):
self._status.remove_job(job)
self._maybe_render_status()
# Dont attempt to handle a failure if the user has already opted to
# terminate
if not success and not self.stream.terminated:
if status == JobStatus.FAIL and not self.stream.terminated:
if isinstance(job, ElementJob):
element = job.element
......@@ -599,7 +599,7 @@ class App():
click.echo("\nDropping into an interactive shell in the failed build sandbox\n", err=True)
try:
prompt = self.shell_prompt(element)
self.stream.shell(element, Scope.BUILD, prompt, isolate=True)
self.stream.shell(element, Scope.BUILD, prompt, isolate=True, usebuildtree=True)
except BstError as e:
click.echo("Error while attempting to create interactive shell: {}".format(e), err=True)
elif choice == 'log':
......
import os
import sys
from contextlib import ExitStack
from fnmatch import fnmatch
from functools import partial
from tempfile import TemporaryDirectory
import click
from .. import _yaml
......@@ -46,7 +50,8 @@ def search_command(args, *, context=None):
def complete_commands(cmd, args, incomplete):
command_ctx = search_command(args[1:])
if command_ctx and command_ctx.command and isinstance(command_ctx.command, click.MultiCommand):
return [subcommand + " " for subcommand in command_ctx.command.list_commands(command_ctx)]
return [subcommand + " " for subcommand in command_ctx.command.list_commands(command_ctx)
if not command_ctx.command.get_command(command_ctx, subcommand).hidden]
return []
......@@ -107,8 +112,37 @@ def complete_target(args, incomplete):
return complete_list
def override_completions(cmd, cmd_param, args, incomplete):
def complete_artifact(orig_args, args, incomplete):
from .._context import Context
ctx = Context()
config = None
if orig_args:
for i, arg in enumerate(orig_args):
if arg in ('-c', '--config'):
try:
config = orig_args[i + 1]
except IndexError:
pass
if args:
for i, arg in enumerate(args):
if arg in ('-c', '--config'):
try:
config = args[i + 1]
except IndexError:
pass
ctx.load(config)
# element targets are valid artifact names
complete_list = complete_target(args, incomplete)
complete_list.extend(ref for ref in ctx.artifactcache.cas.list_refs() if ref.startswith(incomplete))
return complete_list
def override_completions(orig_args, cmd, cmd_param, args, incomplete):
"""
:param orig_args: original, non-completion args
:param cmd_param: command definition
:param args: full list of args typed before the incomplete arg
:param incomplete: the incomplete text to autocomplete
......@@ -121,13 +155,15 @@ def override_completions(cmd, cmd_param, args, incomplete):
# We can't easily extend click's data structures without
# modifying click itself, so just do some weak special casing
# right here and select which parameters we want to handle specially.
if isinstance(cmd_param.type, click.Path) and \
(cmd_param.name == 'elements' or
if isinstance(cmd_param.type, click.Path):
if (cmd_param.name == 'elements' or
cmd_param.name == 'element' or
cmd_param.name == 'except_' or
cmd_param.opts == ['--track'] or
cmd_param.opts == ['--track-except']):
return complete_target(args, incomplete)
if cmd_param.name == 'artifacts':
return complete_artifact(orig_args, args, incomplete)
raise CompleteUnhandled()
......@@ -138,7 +174,7 @@ def override_main(self, args=None, prog_name=None, complete_var=None,
# Hook for the Bash completion. This only activates if the Bash
# completion is actually enabled, otherwise this is quite a fast
# noop.
if main_bashcomplete(self, prog_name, override_completions):
if main_bashcomplete(self, prog_name, partial(override_completions, args)):
# If we're running tests we cant just go calling exit()
# from the main process.
......@@ -306,7 +342,11 @@ def init(app, project_name, format_version, element_path, force):
type=click.Path(readable=False))
@click.pass_obj
def build(app, elements, all_, track_, track_save, track_all, track_except, track_cross_junctions):
"""Build elements in a pipeline"""
"""Build elements in a pipeline
Declaring no elements will result in building a default element if one is declared in the project configuration.
If no default is declared, all elements in the project will be built"""
if (track_except or track_cross_junctions) and not (track_ or track_all):
click.echo("ERROR: The --track-except and --track-cross-junctions options "
......@@ -318,9 +358,7 @@ def build(app, elements, all_, track_, track_save, track_all, track_except, trac
with app.initialized(session_name="Build"):
if not all_ and not elements:
guessed_target = app.context.guess_element()
if guessed_target:
elements = (guessed_target,)
elements = app.project.get_default_elements()
if track_all:
track_ = elements
......@@ -332,106 +370,6 @@ def build(app, elements, all_, track_, track_save, track_all, track_except, trac
build_all=all_)
##################################################################
# Fetch Command #
##################################################################
@cli.command(short_help="Fetch sources in a pipeline")
@click.option('--except', 'except_', multiple=True,
type=click.Path(readable=False),
help="Except certain dependencies from fetching")
@click.option('--deps', '-d', default='plan',
type=click.Choice(['none', 'plan', 'all']),
help='The dependencies to fetch (default: plan)')
@click.option('--track', 'track_', default=False, is_flag=True,
help="Track new source references before fetching")
@click.option('--track-cross-junctions', '-J', default=False, is_flag=True,
help="Allow tracking to cross junction boundaries")
@click.argument('elements', nargs=-1,
type=click.Path(readable=False))
@click.pass_obj
def fetch(app, elements, deps, track_, except_, track_cross_junctions):
"""Fetch sources required to build the pipeline
By default this will only try to fetch sources which are
required for the build plan of the specified target element,
omitting sources for any elements which are already built
and available in the artifact cache.
Specify `--deps` to control which sources to fetch:
\b
none: No dependencies, just the element itself
plan: Only dependencies required for the build plan
all: All dependencies
"""
from .._pipeline import PipelineSelection
if track_cross_junctions and not track_:
click.echo("ERROR: The --track-cross-junctions option can only be used with --track", err=True)
sys.exit(-1)
if track_ and deps == PipelineSelection.PLAN:
click.echo("WARNING: --track specified for tracking of a build plan\n\n"
"Since tracking modifies the build plan, all elements will be tracked.", err=True)
deps = PipelineSelection.ALL
with app.initialized(session_name="Fetch"):
if not elements:
guessed_target = app.context.guess_element()
if guessed_target:
elements = (guessed_target,)
app.stream.fetch(elements,
selection=deps,
except_targets=except_,
track_targets=track_,
track_cross_junctions=track_cross_junctions)
##################################################################
# Track Command #
##################################################################
@cli.command(short_help="Track new source references")
@click.option('--except', 'except_', multiple=True,
type=click.Path(readable=False),
help="Except certain dependencies from tracking")
@click.option('--deps', '-d', default='none',
type=click.Choice(['none', 'all']),
help='The dependencies to track (default: none)')
@click.option('--cross-junctions', '-J', default=False, is_flag=True,
help="Allow crossing junction boundaries")
@click.argument('elements', nargs=-1,
type=click.Path(readable=False))
@click.pass_obj
def track(app, elements, deps, except_, cross_junctions):
"""Consults the specified tracking branches for new versions available
to build and updates the project with any newly available references.
By default this will track just the specified element, but you can also
update a whole tree of dependencies in one go.
Specify `--deps` to control which sources to track:
\b
none: No dependencies, just the specified elements
all: All dependencies of all specified elements
"""
with app.initialized(session_name="Track"):
if not elements:
guessed_target = app.context.guess_element()
if guessed_target:
elements = (guessed_target,)
# Substitute 'none' for 'redirect' so that element redirections
# will be done
if deps == 'none':
deps = 'redirect'
app.stream.track(elements,
selection=deps,
except_targets=except_,
cross_junctions=cross_junctions)
##################################################################
# Pull Command #
##################################################################
......@@ -447,6 +385,10 @@ def track(app, elements, deps, except_, cross_junctions):
def pull(app, elements, deps, remote):
"""Pull a built artifact from the configured remote artifact cache.
Declaring no elements will result in pulling a default element if one is declared in the project configuration.
If no default is declared, all elements in the project will be pulled.
By default the artifact will be pulled from one of the configured caches
if possible, following the usual priority order. If the `--remote` flag
is given, only the specified cache will be queried.
......@@ -460,9 +402,7 @@ def pull(app, elements, deps, remote):
with app.initialized(session_name="Pull"):
if not elements:
guessed_target = app.context.guess_element()
if guessed_target:
elements = (guessed_target,)
elements = app.project.get_default_elements()
app.stream.pull(elements, selection=deps, remote=remote)
......@@ -482,6 +422,10 @@ def pull(app, elements, deps, remote):
def push(app, elements, deps, remote):
"""Push a built artifact to a remote artifact cache.
Declaring no elements will result in pushing a default element if one is declared in the project configuration.
If no default is declared, all elements in the project will be pushed.
The default destination is the highest priority configured cache. You can
override this by passing a different cache URL with the `--remote` flag.
......@@ -497,9 +441,7 @@ def push(app, elements, deps, remote):
"""
with app.initialized(session_name="Push"):
if not elements:
guessed_target = app.context.guess_element()
if guessed_target:
elements = (guessed_target,)
elements = app.project.get_default_elements()
app.stream.push(elements, selection=deps, remote=remote)
......@@ -526,6 +468,10 @@ def push(app, elements, deps, remote):
def show(app, elements, deps, except_, order, format_):
"""Show elements in the pipeline
Declaring no elements will result in showing a default element if one is declared in the project configuration.
If no default is declared, all elements in the project will be shown.
By default this will show all of the dependencies of the
specified target element.
......@@ -570,11 +516,10 @@ def show(app, elements, deps, except_, order, format_):
bst show target.bst --format \\
$'---------- %{name} ----------\\n%{vars}'
"""
with app.initialized():
if not elements:
guessed_target = app.context.guess_element()
if guessed_target:
elements = (guessed_target,)
elements = app.project.get_default_elements()
dependencies = app.stream.load_selection(elements,
selection=deps,
......@@ -604,13 +549,18 @@ def show(app, elements, deps, except_, order, format_):
help="Mount a file or directory into the sandbox")
@click.option('--isolate', is_flag=True, default=False,
help='Create an isolated build sandbox')
@click.option('--use-buildtree', '-t', 'cli_buildtree', type=click.Choice(['ask', 'try', 'always', 'never']),
default='ask',
help="Defaults to 'ask'; if set to 'always', the command will fail when a build tree is not available")
@click.argument('element', required=False,
type=click.Path(readable=False))
@click.argument('command', type=click.STRING, nargs=-1)
@click.pass_obj
def shell(app, element, sysroot, mount, isolate, build_, command):
def shell(app, element, sysroot, mount, isolate, build_, cli_buildtree, command):
"""Run a command in the target element's sandbox environment
Declaring no element will result in opening a default element if one is declared in the project configuration.
This will stage a temporary sysroot for running the target
element, assuming it has already been built and all required
artifacts are in the local cache.
......@@ -634,11 +584,14 @@ def shell(app, element, sysroot, mount, isolate, build_, command):
else:
scope = Scope.RUN
use_buildtree = False
with app.initialized():
if not element:
element = app.context.guess_element()
if not element:
elements = app.project.get_default_elements(search_project=False)
if not elements:
raise AppError('Missing argument "ELEMENT".')
element = elements[0]
dependencies = app.stream.load_selection((element,), selection=PipelineSelection.NONE)
element = dependencies[0]
......@@ -647,12 +600,30 @@ def shell(app, element, sysroot, mount, isolate, build_, command):
HostMount(path, host_path)
for host_path, path in mount
]
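# Decide whether to stage the cached buildtree into the shell:
# 'always' errors out if no buildtree is cached, 'try' uses one when
# available, 'never' skips it, and the default 'ask' prompts in
# interactive sessions when a buildtree is cached.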
cached = element._cached_buildtree()
if cli_buildtree == "always":
if cached:
use_buildtree = True
else:
raise AppError("No buildtree is cached but the use buildtree option was specified")
elif cli_buildtree == "never":
pass
elif cli_buildtree == "try":
use_buildtree = cached
else:
if app.interactive and cached:
use_buildtree = bool(click.confirm('Do you want to use the cached buildtree?'))
if use_buildtree and not element._cached_success():
click.echo("Warning: using a buildtree from a failed build.")
try:
exitcode = app.stream.shell(element, scope, prompt,
directory=sysroot,
mounts=mounts,
isolate=isolate,
command=command)
command=command,
usebuildtree=use_buildtree)
except BstError as e:
raise AppError("Error launching shell: {}".format(e), detail=e.detail) from e
......@@ -683,6 +654,9 @@ def shell(app, element, sysroot, mount, isolate, build_, command):
@click.pass_obj
def checkout(app, element, location, force, deps, integrate, hardlinks, tar):
"""Checkout a built artifact to the specified location
Declaring no element will result in checking out a default element
if one is declared in the project configuration.
"""
from ..element import Scope
......@@ -708,9 +682,10 @@ def checkout(app, element, location, force, deps, integrate, hardlinks, tar):
with app.initialized():
if not element:
element = app.context.guess_element()
if not element:
elements = app.project.get_default_elements(search_project=False)
if not elements:
raise AppError('Missing argument "ELEMENT".')
element = elements[0]
app.stream.checkout(element,
location=location,
......@@ -721,10 +696,122 @@ def checkout(app, element, location, force, deps, integrate, hardlinks, tar):
tar=tar)
##################################################################
# Source Command #
##################################################################
@cli.group(short_help="Manipulate sources for an element")
def source():
"""Manipulate sources for an element"""
##################################################################
# Source Fetch Command #
##################################################################
@source.command(name="fetch", short_help="Fetch sources in a pipeline")
@click.option('--except', 'except_', multiple=True,
type=click.Path(readable=False),
help="Except certain dependencies from fetching")
@click.option('--deps', '-d', default='plan',
type=click.Choice(['none', 'plan', 'all']),
help='The dependencies to fetch (default: plan)')
@click.option('--track', 'track_', default=False, is_flag=True,
help="Track new source references before fetching")
@click.option('--track-cross-junctions', '-J', default=False, is_flag=True,
help="Allow tracking to cross junction boundaries")
@click.argument('elements', nargs=-1,
type=click.Path(readable=False))
@click.pass_obj
def source_fetch(app, elements, deps, track_, except_, track_cross_junctions):
"""Fetch sources required to build the pipeline
Declaring no elements will result in fetching a default element if one is declared in the project configuration.
If no default is declared, all elements in the project will be fetched.
By default this will only try to fetch sources which are
required for the build plan of the specified target element,
omitting sources for any elements which are already built
and available in the artifact cache.
Specify `--deps` to control which sources to fetch:
\b
none: No dependencies, just the element itself
plan: Only dependencies required for the build plan
all: All dependencies
"""
from .._pipeline import PipelineSelection
if track_cross_junctions and not track_:
click.echo("ERROR: The --track-cross-junctions option can only be used with --track", err=True)
sys.exit(-1)
if track_ and deps == PipelineSelection.PLAN:
click.echo("WARNING: --track specified for tracking of a build plan\n\n"
"Since tracking modifies the build plan, all elements will be tracked.", err=True)
deps = PipelineSelection.ALL
with app.initialized(session_name="Fetch"):
if not elements:
elements = app.project.get_default_elements()
app.stream.fetch(elements,
selection=deps,
except_targets=except_,
track_targets=track_,
track_cross_junctions=track_cross_junctions)
##################################################################
# Source Track Command #
##################################################################
@source.command(name="track", short_help="Track new source references")
@click.option('--except', 'except_', multiple=True,
type=click.Path(readable=False),
help="Except certain dependencies from tracking")
@click.option('--deps', '-d', default='none',
type=click.Choice(['none', 'all']),
help='The dependencies to track (default: none)')
@click.option('--cross-junctions', '-J', default=False, is_flag=True,
help="Allow crossing junction boundaries")
@click.argument('elements', nargs=-1,
type=click.Path(readable=False))
@click.pass_obj
def source_track(app, elements, deps, except_, cross_junctions):
"""Consults the specified tracking branches for new versions available
to build and updates the project with any newly available references.
Declaring no elements will result in tracking a default element if one is declared in the project configuration.
If no default is declared, all elements in the project will be tracked.
By default this will track just the specified element, but you can also
update a whole tree of dependencies in one go.
Specify `--deps` to control which sources to track:
\b
none: No dependencies, just the specified elements
all: All dependencies of all specified elements
"""
with app.initialized(session_name="Track"):
if not elements:
elements = app.project.get_default_elements()
# Substitute 'none' for 'redirect' so that element redirections
# will be done
if deps == 'none':
deps = 'redirect'
app.stream.track(elements,
selection=deps,
except_targets=except_,
cross_junctions=cross_junctions)
##################################################################
# Source Checkout Command #
##################################################################
@cli.command(name='source-checkout', short_help='Checkout sources for an element')
@source.command(name='checkout', short_help='Checkout sources for an element')
@click.option('--force', '-f', default=False, is_flag=True,
help="Allow files to be overwritten")
@click.option('--except', 'except_', multiple=True,
......@@ -745,6 +832,9 @@ def checkout(app, element, location, force, deps, integrate, hardlinks, tar):
def source_checkout(app, element, location, force, deps, fetch_, except_,
tar, build_scripts):
"""Checkout sources of an element to the specified location
Declaring no element will result in checking out the source of
a default element if one is declared in the project configuration.
"""
if not element and not location:
click.echo("ERROR: LOCATION is not specified", err=True)
......@@ -757,9 +847,10 @@ def source_checkout(app, element, location, force, deps, fetch_, except_,
with app.initialized():
if not element:
element = app.context.guess_element()
if not element:
elements = app.project.get_default_elements(search_project=False)
if not elements:
raise AppError('Missing argument "ELEMENT".')
element = elements[0]
app.stream.source_checkout(element,
location=location,
......@@ -777,7 +868,6 @@ def source_checkout(app, element, location, force, deps, fetch_, except_,
@cli.group(short_help="Manipulate developer workspaces")
def workspace():
"""Manipulate developer workspaces"""
pass
##################################################################
......@@ -824,10 +914,11 @@ def workspace_close(app, remove_dir, all_, elements):
if not (all_ or elements):
# NOTE: I may need to revisit this when implementing multiple projects
# opening one workspace.
element = app.context.guess_element()
if element:
elements = (element,)
else:
elements = app.project.get_default_elements(
search_project=False, use_project_defaults=False
)
if not elements:
raise AppError('No elements specified')
# Early exit if we specified `all` and there are no workspaces
......@@ -885,10 +976,11 @@ def workspace_reset(app, soft, track_, all_, elements):
with app.initialized():
if not (all_ or elements):
element = app.context.guess_element()
if element:
elements = (element,)
else:
elements = app.project.get_default_elements(
search_project=False, use_project_defaults=False
)
if not elements:
raise AppError('No elements specified to reset')
if all_ and not app.stream.workspace_exists():
......@@ -915,3 +1007,151 @@ def workspace_list(app):
with app.initialized():
app.stream.workspace_list()
#############################################################
# Artifact Commands #
#############################################################
def _classify_artifacts(names, cas, project_directory):
element_targets = []
artifact_refs = []
element_globs = []
artifact_globs = []
for name in names:
if name.endswith('.bst'):
if any(c in "*?[" for c in name):
element_globs.append(name)
else:
element_targets.append(name)
else:
if any(c in "*?[" for c in name):
artifact_globs.append(name)
else:
artifact_refs.append(name)
if element_globs:
for dirpath, _, filenames in os.walk(project_directory):
for filename in filenames:
element_path = os.path.relpath(os.path.join(dirpath, filename), project_directory)
if any(fnmatch(element_path, glob) for glob in element_globs):
element_targets.append(element_path)
if artifact_globs:
artifact_refs.extend(ref for ref in cas.list_refs()
if any(fnmatch(ref, glob) for glob in artifact_globs))
return element_targets, artifact_refs
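A small illustration of the classification (the CAS object is stubbed and all names, refs and paths are hypothetical):
class _StubCAS():
    def list_refs(self):
        return ['project/hello/abc123', 'project/world/def456']

targets, refs = _classify_artifacts(
    ['hello.bst', 'lib-*.bst', 'project/world/def456', 'project/hello/*'],
    _StubCAS(), '/path/to/project')
# 'hello.bst' (plus any element files matching 'lib-*.bst' found on disk) end
# up in targets; 'project/world/def456' and the refs matching 'project/hello/*'
# end up in refs.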
@cli.group(short_help="Manipulate cached artifacts")
def artifact():
"""Manipulate cached artifacts"""
################################################################
# Artifact Log Command #
################################################################
@artifact.command(name='log', short_help="Show logs of an artifact")
@click.argument('artifacts', type=click.Path(), nargs=-1)
@click.pass_obj
def artifact_log(app, artifacts):
"""Show logs of all artifacts"""
from .._exceptions import CASError
from .._message import MessageType
from .._pipeline import PipelineSelection
from ..storage._casbaseddirectory import CasBasedDirectory
with ExitStack() as stack:
stack.enter_context(app.initialized())
cache = app.context.artifactcache
elements, artifacts = _classify_artifacts(artifacts, cache.cas,
app.project.directory)
vdirs = []
extractdirs = []
if artifacts:
for ref in artifacts:
try:
cache_id = cache.cas.resolve_ref(ref, update_mtime=True)
vdir = CasBasedDirectory(cache.cas, cache_id)
vdirs.append(vdir)
except CASError as e:
app._message(MessageType.WARN, "Artifact {} is not cached".format(ref), detail=str(e))
continue
if elements:
elements = app.stream.load_selection(elements, selection=PipelineSelection.NONE)
for element in elements:
if not element._cached():
app._message(MessageType.WARN, "Element {} is not cached".format(element))
continue
ref = cache.get_artifact_fullname(element, element._get_cache_key())
cache_id = cache.cas.resolve_ref(ref, update_mtime=True)
vdir = CasBasedDirectory(cache.cas, cache_id)
vdirs.append(vdir)
for vdir in vdirs:
# NOTE: If reading the logs feels unresponsive, here would be a good place to provide progress information.
logsdir = vdir.descend(["logs"])
td = stack.enter_context(TemporaryDirectory())
logsdir.export_files(td, can_link=True)
extractdirs.append(td)
for extractdir in extractdirs:
for log in (os.path.join(extractdir, log) for log in os.listdir(extractdir)):
# NOTE: Should click gain the ability to pass files to the pager this can be optimised.
with open(log) as f:
data = f.read()
click.echo_via_pager(data)
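Typical invocations mix element names and raw artifact refs (the names below are hypothetical), for example:
bst artifact log hello.bst
bst artifact log "project/hello/*"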
##################################################################
# DEPRECATED Commands #
##################################################################
# XXX: The following commands are now obsolete, but they are kept
# here along with all the options so that we can provide nice error
# messages when they are called.
# Also, note that these commands are hidden from the top-level help.
##################################################################
# Fetch Command #
##################################################################
@cli.command(short_help="Fetch sources in a pipeline", hidden=True)
@click.option('--except', 'except_', multiple=True,
type=click.Path(readable=False),
help="Except certain dependencies from fetching")
@click.option('--deps', '-d', default='plan',
type=click.Choice(['none', 'plan', 'all']),
help='The dependencies to fetch (default: plan)')
@click.option('--track', 'track_', default=False, is_flag=True,
help="Track new source references before fetching")
@click.option('--track-cross-junctions', '-J', default=False, is_flag=True,
help="Allow tracking to cross junction boundaries")
@click.argument('elements', nargs=-1,
type=click.Path(readable=False))
@click.pass_obj
def fetch(app, elements, deps, track_, except_, track_cross_junctions):
click.echo("This command is now obsolete. Use `bst source fetch` instead.", err=True)
sys.exit(1)
##################################################################
# Track Command #
##################################################################
@cli.command(short_help="Track new source references", hidden=True)
@click.option('--except', 'except_', multiple=True,
type=click.Path(readable=False),
help="Except certain dependencies from tracking")
@click.option('--deps', '-d', default='none',
type=click.Choice(['none', 'all']),
help='The dependencies to track (default: none)')
@click.option('--cross-junctions', '-J', default=False, is_flag=True,
help="Allow crossing junction boundaries")
@click.argument('elements', nargs=-1,
type=click.Path(readable=False))
@click.pass_obj
def track(app, elements, deps, except_, cross_junctions):
click.echo("This command is now obsolete. Use `bst source track` instead.", err=True)
sys.exit(1)
......@@ -31,7 +31,7 @@
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#
import collections
import collections.abc
import copy
import os
......@@ -203,7 +203,7 @@ def is_incomplete_option(all_args, cmd_param):
if start_of_option(arg_str):
last_option = arg_str
return True if last_option and last_option in cmd_param.opts else False
return bool(last_option and last_option in cmd_param.opts)
def is_incomplete_argument(current_params, cmd_param):
......@@ -218,7 +218,7 @@ def is_incomplete_argument(current_params, cmd_param):
return True
if cmd_param.nargs == -1:
return True
if isinstance(current_param_values, collections.Iterable) \
if isinstance(current_param_values, collections.abc.Iterable) \
and cmd_param.nargs > 1 and len(current_param_values) < cmd_param.nargs:
return True
return False
......@@ -297,12 +297,15 @@ def get_choices(cli, prog_name, args, incomplete, override):
if not found_param and isinstance(ctx.command, MultiCommand):
# completion for any subcommands
choices.extend([cmd + " " for cmd in ctx.command.list_commands(ctx)])
choices.extend([cmd + " " for cmd in ctx.command.list_commands(ctx)
if not ctx.command.get_command(ctx, cmd).hidden])
if not start_of_option(incomplete) and ctx.parent is not None \
and isinstance(ctx.parent.command, MultiCommand) and ctx.parent.command.chain:
# completion for chained commands
remaining_commands = set(ctx.parent.command.list_commands(ctx.parent)) - set(ctx.parent.protected_args)
visible_commands = [cmd for cmd in ctx.parent.command.list_commands(ctx.parent)
if not ctx.parent.command.get_command(ctx.parent, cmd).hidden]
remaining_commands = set(visible_commands) - set(ctx.parent.protected_args)
choices.extend([cmd + " " for cmd in remaining_commands])
for item in choices:
......
......@@ -23,8 +23,8 @@ from contextlib import ExitStack
from mmap import mmap
import re
import textwrap
import click
from ruamel import yaml
import click
from . import Profile
from .. import Element, Consistency
......
#
# Copyright (C) 2016 Codethink Limited
# Copyright (C) 2018 Bloomberg Finance LP
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU Lesser General Public
# License as published by the Free Software Foundation; either
# version 2 of the License, or (at your option) any later version.
#
# This library is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public
# License along with this library. If not, see <http://www.gnu.org/licenses/>.
#
# Authors:
# Tristan Van Berkom <tristan.vanberkom@codethink.co.uk>
# Chandan Singh <csingh43@bloomberg.net>
"""Abstract base class for source implementations that work with a Git repository"""
import os
import re
import shutil
from collections.abc import Mapping
from io import StringIO
from tempfile import TemporaryFile
from configparser import RawConfigParser
from buildstream import Source, SourceError, Consistency, SourceFetcher, CoreWarnings
from buildstream import utils
from buildstream.utils import move_atomic, DirectoryExistsError
GIT_MODULES = '.gitmodules'
# Warnings
WARN_INCONSISTENT_SUBMODULE = "inconsistent-submodule"
WARN_UNLISTED_SUBMODULE = "unlisted-submodule"
WARN_INVALID_SUBMODULE = "invalid-submodule"
# Because of handling of submodules, we maintain a GitMirror
# for the primary git source and also for each submodule it
# might have at a given time
#
class GitMirror(SourceFetcher):
def __init__(self, source, path, url, ref, *, primary=False, tags=[]):
super().__init__()
self.source = source
self.path = path
self.url = url
self.ref = ref
self.tags = tags
self.primary = primary
self.mirror = os.path.join(source.get_mirror_directory(), utils.url_directory_name(url))
self.mark_download_url(url)
# Ensures that the mirror exists
def ensure(self, alias_override=None):
# Unfortunately, git does not know how to clone only a specific ref,
# so we have to download the whole repository even if we only need a
# couple of bytes.
if not os.path.exists(self.mirror):
# Do the initial clone in a tmpdir because we want to move it into place
# atomically once the long-running clone has succeeded. For now, create
# the tmpdir directly in our git directory, eliminating the chance that
# the system-configured tmpdir is on a different partition.
#
with self.source.tempdir() as tmpdir:
url = self.source.translate_url(self.url, alias_override=alias_override,
primary=self.primary)
self.source.call([self.source.host_git, 'clone', '--mirror', '-n', url, tmpdir],
fail="Failed to clone git repository {}".format(url),
fail_temporarily=True)
try:
move_atomic(tmpdir, self.mirror)
except DirectoryExistsError:
# Another process was quicker to download this repository.
# Let's discard our own
self.source.status("{}: Discarding duplicate clone of {}"
.format(self.source, url))
except OSError as e:
raise SourceError("{}: Failed to move cloned git repository {} from '{}' to '{}': {}"
.format(self.source, url, tmpdir, self.mirror, e)) from e
def _fetch(self, alias_override=None):
url = self.source.translate_url(self.url,
alias_override=alias_override,
primary=self.primary)
if alias_override:
remote_name = utils.url_directory_name(alias_override)
_, remotes = self.source.check_output(
[self.source.host_git, 'remote'],
fail="Failed to retrieve list of remotes in {}".format(self.mirror),
cwd=self.mirror
)
if remote_name not in remotes:
self.source.call(
[self.source.host_git, 'remote', 'add', remote_name, url],
fail="Failed to add remote {} with url {}".format(remote_name, url),
cwd=self.mirror
)
else:
remote_name = "origin"
self.source.call([self.source.host_git, 'fetch', remote_name, '--prune', '--force', '--tags'],
fail="Failed to fetch from remote git repository: {}".format(url),
fail_temporarily=True,
cwd=self.mirror)
def fetch(self, alias_override=None):
# Resolve the URL for the message
resolved_url = self.source.translate_url(self.url,
alias_override=alias_override,
primary=self.primary)
with self.source.timed_activity("Fetching from {}"
.format(resolved_url),
silent_nested=True):
self.ensure(alias_override)
if not self.has_ref():
self._fetch(alias_override)
self.assert_ref()
def has_ref(self):
if not self.ref:
return False
# If the mirror doesn't exist, we also don't have the ref
if not os.path.exists(self.mirror):
return False
# Check if the ref is really there
rc = self.source.call([self.source.host_git, 'cat-file', '-t', self.ref], cwd=self.mirror)
return rc == 0
def assert_ref(self):
if not self.has_ref():
raise SourceError("{}: expected ref '{}' was not found in git repository: '{}'"
.format(self.source, self.ref, self.url))
def latest_commit_with_tags(self, tracking, track_tags=False):
_, output = self.source.check_output(
[self.source.host_git, 'rev-parse', tracking],
fail="Unable to find commit for specified branch name '{}'".format(tracking),
cwd=self.mirror)
ref = output.rstrip('\n')
if self.source.ref_format == 'git-describe':
# Prefix the ref with the closest tag, if available,
# to make the ref human readable
exit_code, output = self.source.check_output(
[self.source.host_git, 'describe', '--tags', '--abbrev=40', '--long', ref],
cwd=self.mirror)
if exit_code == 0:
ref = output.rstrip('\n')
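# With --long and --abbrev=40 this produces a ref of the form
# "<closest-tag>-<number-of-commits-since-tag>-g<full-commit-sha>".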
if not track_tags:
return ref, []
tags = set()
for options in [[], ['--first-parent'], ['--tags'], ['--tags', '--first-parent']]:
exit_code, output = self.source.check_output(
[self.source.host_git, 'describe', '--abbrev=0', ref] + options,
cwd=self.mirror)
if exit_code == 0:
tag = output.strip()
_, commit_ref = self.source.check_output(
[self.source.host_git, 'rev-parse', tag + '^{commit}'],
fail="Unable to resolve tag '{}'".format(tag),
cwd=self.mirror)
exit_code = self.source.call(
[self.source.host_git, 'cat-file', 'tag', tag],
cwd=self.mirror)
annotated = (exit_code == 0)
tags.add((tag, commit_ref.strip(), annotated))
return ref, list(tags)
def stage(self, directory):
fullpath = os.path.join(directory, self.path)
# Using --shared here avoids copying the objects into the checkout; in any
# case we're just checking out a specific commit and then removing the .git/
# directory.
self.source.call([self.source.host_git, 'clone', '--no-checkout', '--shared', self.mirror, fullpath],
fail="Failed to create git mirror {} in directory: {}".format(self.mirror, fullpath),
fail_temporarily=True)
self.source.call([self.source.host_git, 'checkout', '--force', self.ref],
fail="Failed to checkout git ref {}".format(self.ref),
cwd=fullpath)
# Remove .git dir
shutil.rmtree(os.path.join(fullpath, ".git"))
self._rebuild_git(fullpath)
def init_workspace(self, directory):
fullpath = os.path.join(directory, self.path)
url = self.source.translate_url(self.url)
self.source.call([self.source.host_git, 'clone', '--no-checkout', self.mirror, fullpath],
fail="Failed to clone git mirror {} in directory: {}".format(self.mirror, fullpath),
fail_temporarily=True)
self.source.call([self.source.host_git, 'remote', 'set-url', 'origin', url],
fail='Failed to add remote origin "{}"'.format(url),
cwd=fullpath)
self.source.call([self.source.host_git, 'checkout', '--force', self.ref],
fail="Failed to checkout git ref {}".format(self.ref),
cwd=fullpath)
# List the submodules (path/url tuples) present at the given ref of this repo
def submodule_list(self):
modules = "{}:{}".format(self.ref, GIT_MODULES)
exit_code, output = self.source.check_output(
[self.source.host_git, 'show', modules], cwd=self.mirror)
# If git show reports error code 128 here, we take it to mean there is
# no .gitmodules file to display for the given revision.
if exit_code == 128:
return
elif exit_code != 0:
raise SourceError(
"{plugin}: Failed to show gitmodules at ref {ref}".format(
plugin=self, ref=self.ref))
content = '\n'.join([l.strip() for l in output.splitlines()])
io = StringIO(content)
parser = RawConfigParser()
parser.read_file(io)
for section in parser.sections():
# validate section name against the 'submodule "foo"' pattern
if re.match(r'submodule "(.*)"', section):
path = parser.get(section, 'path')
url = parser.get(section, 'url')
yield (path, url)
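For reference, a sketch of the kind of .gitmodules content this parses (the module name, path and URL are hypothetical):
example = StringIO(
    '[submodule "libfoo"]\n'
    'path = contrib/libfoo\n'
    'url = https://example.com/libfoo.git\n')
parser = RawConfigParser()
parser.read_file(example)
assert parser.get('submodule "libfoo"', 'path') == 'contrib/libfoo'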
# Fetch the ref which this mirror requires its submodule to have,
# at the given ref of this mirror.
def submodule_ref(self, submodule, ref=None):
if not ref:
ref = self.ref
# list objects in the parent repo tree to find the commit
# object that corresponds to the submodule
_, output = self.source.check_output([self.source.host_git, 'ls-tree', ref, submodule],
fail="ls-tree failed for commit {} and submodule: {}".format(
ref, submodule),
cwd=self.mirror)
# read the commit hash from the output
fields = output.split()
if len(fields) >= 2 and fields[1] == 'commit':
submodule_commit = fields[2]
# fail if the commit hash is invalid
if len(submodule_commit) != 40:
raise SourceError("{}: Error reading commit information for submodule '{}'"
.format(self.source, submodule))
return submodule_commit
else:
detail = "The submodule '{}' is defined either in the BuildStream source\n".format(submodule) + \
"definition, or in a .gitmodules file. But the submodule was never added to the\n" + \
"underlying git repository with `git submodule add`."
self.source.warn("{}: Ignoring inconsistent submodule '{}'"
.format(self.source, submodule), detail=detail,
warning_token=WARN_INCONSISTENT_SUBMODULE)
return None
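# Rebuild a minimal .git directory inside the staged checkout, containing
# just enough commit and tag objects for `git describe` to resolve the
# tracked tags; commits on the boundary of the history walk are recorded
# as shallow.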
def _rebuild_git(self, fullpath):
if not self.tags:
return
with self.source.tempdir() as tmpdir:
included = set()
shallow = set()
for _, commit_ref, _ in self.tags:
if commit_ref == self.ref:
# rev-list does not work when the two revs are the same
shallow.add(self.ref)
else:
_, out = self.source.check_output([self.source.host_git, 'rev-list',
'--ancestry-path', '--boundary',
'{}..{}'.format(commit_ref, self.ref)],
fail="Failed to get git history {}..{} in directory: {}"
.format(commit_ref, self.ref, fullpath),
fail_temporarily=True,
cwd=self.mirror)
self.source.warn("refs {}..{}: {}".format(commit_ref, self.ref, out.splitlines()))
for line in out.splitlines():
rev = line.lstrip('-')
if line[0] == '-':
shallow.add(rev)
else:
included.add(rev)
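# A commit which is a boundary for one tag may still have been included
# via another; only treat it as shallow if it was never included, and
# make sure every remaining shallow commit is also written out below.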
shallow -= included
included |= shallow
self.source.call([self.source.host_git, 'init'],
fail="Cannot initialize git repository: {}".format(fullpath),
cwd=fullpath)
for rev in included:
with TemporaryFile(dir=tmpdir) as commit_file:
self.source.call([self.source.host_git, 'cat-file', 'commit', rev],
stdout=commit_file,
fail="Failed to get commit {}".format(rev),
cwd=self.mirror)
commit_file.seek(0, 0)
self.source.call([self.source.host_git, 'hash-object', '-w', '-t', 'commit', '--stdin'],
stdin=commit_file,
fail="Failed to add commit object {}".format(rev),
cwd=fullpath)
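# Record the boundary commits as shallow so that git does not expect
# their parent objects to be present in this repository.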
with open(os.path.join(fullpath, '.git', 'shallow'), 'w') as shallow_file:
for rev in shallow:
shallow_file.write('{}\n'.format(rev))
for tag, commit_ref, annotated in self.tags:
if annotated:
with TemporaryFile(dir=tmpdir) as tag_file:
tag_data = 'object {}\ntype commit\ntag {}\n'.format(commit_ref, tag)
tag_file.write(tag_data.encode('ascii'))
tag_file.seek(0, 0)
_, tag_ref = self.source.check_output(
[self.source.host_git, 'hash-object', '-w', '-t',
'tag', '--stdin'],
stdin=tag_file,
fail="Failed to add tag object {}".format(tag),
cwd=fullpath)
self.source.call([self.source.host_git, 'tag', tag, tag_ref.strip()],
fail="Failed to tag: {}".format(tag),
cwd=fullpath)
else:
self.source.call([self.source.host_git, 'tag', tag, commit_ref],
fail="Failed to tag: {}".format(tag),
cwd=fullpath)
with open(os.path.join(fullpath, '.git', 'HEAD'), 'w') as head:
self.source.call([self.source.host_git, 'rev-parse', self.ref],
stdout=head,
fail="Failed to parse commit {}".format(self.ref),
cwd=self.mirror)
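# Base class for git sources, driving a primary GitMirror for the
# repository itself and one GitMirror per submodule.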
class _GitSourceBase(Source):
# pylint: disable=attribute-defined-outside-init
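# Validate and parse the source configuration: url, track, ref,
# ref-format, tags, and the per-submodule overrides.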
def configure(self, node):
ref = self.node_get_member(node, str, 'ref', None)
config_keys = ['url', 'track', 'ref', 'submodules',
'checkout-submodules', 'ref-format',
'track-tags', 'tags']
self.node_validate(node, config_keys + Source.COMMON_CONFIG_KEYS)
tags_node = self.node_get_member(node, list, 'tags', [])
for tag_node in tags_node:
self.node_validate(tag_node, ['tag', 'commit', 'annotated'])
tags = self._load_tags(node)
self.track_tags = self.node_get_member(node, bool, 'track-tags', False)
self.original_url = self.node_get_member(node, str, 'url')
self.mirror = GitMirror(self, '', self.original_url, ref, tags=tags, primary=True)
self.tracking = self.node_get_member(node, str, 'track', None)
self.ref_format = self.node_get_member(node, str, 'ref-format', 'sha1')
if self.ref_format not in ['sha1', 'git-describe']:
provenance = self.node_provenance(node, member_name='ref-format')
raise SourceError("{}: Unexpected value for ref-format: {}".format(provenance, self.ref_format))
# At this point we now know if the source has a ref and/or a track.
# If it is missing both then we will be unable to track or build.
if self.mirror.ref is None and self.tracking is None:
raise SourceError("{}: Git sources require a ref and/or track".format(self),
reason="missing-track-and-ref")
self.checkout_submodules = self.node_get_member(node, bool, 'checkout-submodules', True)
self.submodules = []
# Parse a dict of submodule overrides, stored in the submodule_overrides
# and submodule_checkout_overrides dictionaries.
self.submodule_overrides = {}
self.submodule_checkout_overrides = {}
modules = self.node_get_member(node, Mapping, 'submodules', {})
for path, _ in self.node_items(modules):
submodule = self.node_get_member(modules, Mapping, path)
url = self.node_get_member(submodule, str, 'url', None)
# Make sure to mark all URLs that are specified in the configuration
if url:
self.mark_download_url(url, primary=False)
self.submodule_overrides[path] = url
if 'checkout' in submodule:
checkout = self.node_get_member(submodule, bool, 'checkout')
self.submodule_checkout_overrides[path] = checkout
self.mark_download_url(self.original_url)
def preflight(self):
# Check if git is installed, get the binary at the same time
self.host_git = utils.get_host_tool('git')
def get_unique_key(self):
# Here we want to encode the local name of the repository and
# the ref; if the user changes the alias to fetch the same sources
# from another location, it should not affect the cache key.
key = [self.original_url, self.mirror.ref]
if self.mirror.tags:
tags = {tag: (commit, annotated) for tag, commit, annotated in self.mirror.tags}
key.append({'tags': tags})
# Only modify the cache key with checkout_submodules if it's something
# other than the default behaviour.
if self.checkout_submodules is False:
key.append({"checkout_submodules": self.checkout_submodules})
# We want the cache key to change if the source was
# configured differently, and submodule configuration counts.
if self.submodule_overrides:
key.append(self.submodule_overrides)
if self.submodule_checkout_overrides:
key.append({"submodule_checkout_overrides": self.submodule_checkout_overrides})
return key
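# CACHED when the mirror and all submodule mirrors hold their refs
# locally, RESOLVED when a ref is known but not yet fetched,
# INCONSISTENT otherwise.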
def get_consistency(self):
if self._have_all_refs():
return Consistency.CACHED
elif self.mirror.ref is not None:
return Consistency.RESOLVED
return Consistency.INCONSISTENT
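# Load the ref and tag list from the given node.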
def load_ref(self, node):
self.mirror.ref = self.node_get_member(node, str, 'ref', None)
self.mirror.tags = self._load_tags(node)
def get_ref(self):
return self.mirror.ref, self.mirror.tags
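# Store the new ref and tags both on the mirror and back into the
# source node, or clear them when ref_data is empty.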
def set_ref(self, ref_data, node):
if not ref_data:
self.mirror.ref = None
if 'ref' in node:
del node['ref']
self.mirror.tags = []
if 'tags' in node:
del node['tags']
else:
ref, tags = ref_data
node['ref'] = self.mirror.ref = ref
self.mirror.tags = tags
if tags:
node['tags'] = []
for tag, commit_ref, annotated in tags:
data = {'tag': tag,
'commit': commit_ref,
'annotated': annotated}
node['tags'].append(data)
else:
if 'tags' in node:
del node['tags']
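# Track the configured branch or tag and return the newly resolved
# ref (and tags, when track-tags is enabled).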
def track(self):
# If self.tracking is not specified it's not an error; just silently return
if not self.tracking:
# Is there a better way to check whether a ref is given?
if self.mirror.ref is None:
detail = 'Without a tracking branch the ref cannot be updated. Please ' + \
'provide a ref or a track.'
raise SourceError("{}: No track or ref".format(self),
detail=detail, reason="track-attempt-no-track")
return None
# Resolve the URL for the message
resolved_url = self.translate_url(self.mirror.url)
with self.timed_activity("Tracking {} from {}"
.format(self.tracking, resolved_url),
silent_nested=True):
self.mirror.ensure()
self.mirror._fetch()
# Update self.mirror.ref and node.ref from the self.tracking branch
ret = self.mirror.latest_commit_with_tags(self.tracking, self.track_tags)
return ret
def init_workspace(self, directory):
# XXX: may wish to refactor this, as it duplicates some code with stage()
self._refresh_submodules()
with self.timed_activity('Setting up workspace "{}"'.format(directory), silent_nested=True):
self.mirror.init_workspace(directory)
for mirror in self.submodules:
mirror.init_workspace(directory)
def stage(self, directory):
# Need to refresh the submodule list here again, because
# it's possible that we did not load in the main process
# with submodules present (the source needed fetching) and
# we may not know about the submodules yet come time to build.
#
self._refresh_submodules()
# Stage the main repo in the specified directory
#
with self.timed_activity("Staging {}".format(self.mirror.url), silent_nested=True):
self.mirror.stage(directory)
for mirror in self.submodules:
mirror.stage(directory)
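# Yield the primary mirror first, then one fetcher per submodule.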
def get_source_fetchers(self):
yield self.mirror
self._refresh_submodules()
for submodule in self.submodules:
yield submodule
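# Warn about submodules which are configured but do not exist in the
# repository, submodules which exist but are not configured, and a ref
# which is not contained in the tracked branch or tag.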
def validate_cache(self):
discovered_submodules = {}
unlisted_submodules = []
invalid_submodules = []
for path, url in self.mirror.submodule_list():
discovered_submodules[path] = url
if self._ignore_submodule(path):
continue
override_url = self.submodule_overrides.get(path)
if not override_url:
unlisted_submodules.append((path, url))
# Warn about submodules which are explicitly configured but do not exist
for path, url in self.submodule_overrides.items():
if path not in discovered_submodules:
invalid_submodules.append((path, url))
if invalid_submodules:
detail = []
for path, url in invalid_submodules:
detail.append(" Submodule URL '{}' at path '{}'".format(url, path))
self.warn("{}: Invalid submodules specified".format(self),
warning_token=WARN_INVALID_SUBMODULE,
detail="The following submodules are specified in the source "
"description but do not exist according to the repository\n\n" +
"\n".join(detail))
# Warn about submodules which exist but have not been explicitly configured
if unlisted_submodules:
detail = []
for path, url in unlisted_submodules:
detail.append(" Submodule URL '{}' at path '{}'".format(url, path))
self.warn("{}: Unlisted submodules exist".format(self),
warning_token=WARN_UNLISTED_SUBMODULE,
detail="The following submodules exist but are not specified " +
"in the source description\n\n" +
"\n".join(detail))
# Assert that the ref exists in the track tag/branch, if track has been specified.
ref_in_track = False
if self.tracking:
_, branch = self.check_output([self.host_git, 'branch', '--list', self.tracking,
'--contains', self.mirror.ref],
cwd=self.mirror.mirror)
if branch:
ref_in_track = True
else:
_, tag = self.check_output([self.host_git, 'tag', '--list', self.tracking,
'--contains', self.mirror.ref],
cwd=self.mirror.mirror)
if tag:
ref_in_track = True
if not ref_in_track:
detail = "The ref provided for the element does not exist locally " + \
"in the provided track branch / tag '{}'.\n".format(self.tracking) + \
"You may wish to track the element to update the ref from '{}' ".format(self.tracking) + \
"with `bst track`,\n" + \
"or examine the upstream at '{}' for the specific ref.".format(self.mirror.url)
self.warn("{}: expected ref '{}' was not found in given track '{}' for staged repository: '{}'\n"
.format(self, self.mirror.ref, self.tracking, self.mirror.url),
detail=detail, warning_token=CoreWarnings.REF_NOT_IN_TRACK)
###########################################################
# Local Functions #
###########################################################
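# Check that the mirror and every non-ignored submodule mirror exist
# locally and contain the refs they need.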
def _have_all_refs(self):
if not self.mirror.has_ref():
return False
self._refresh_submodules()
for mirror in self.submodules:
if not os.path.exists(mirror.mirror):
return False
if not mirror.has_ref():
return False
return True
# Refreshes the GitMirror objects for submodules
#
# Assumes that we have our mirror and we have the ref which we point to
#
def _refresh_submodules(self):
self.mirror.ensure()
submodules = []
for path, url in self.mirror.submodule_list():
# Completely ignore submodules which are disabled for checkout
if self._ignore_submodule(path):
continue
# Allow configuration to override the upstream
# location of the submodules.
override_url = self.submodule_overrides.get(path)
if override_url:
url = override_url
ref = self.mirror.submodule_ref(path)
if ref is not None:
mirror = GitMirror(self, path, url, ref)
submodules.append(mirror)
self.submodules = submodules
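# Parse the 'tags' list from the node into (tag, commit, annotated) tuples.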
def _load_tags(self, node):
tags = []
tags_node = self.node_get_member(node, list, 'tags', [])
for tag_node in tags_node:
tag = self.node_get_member(tag_node, str, 'tag')
commit_ref = self.node_get_member(tag_node, str, 'commit')
annotated = self.node_get_member(tag_node, bool, 'annotated')
tags.append((tag, commit_ref, annotated))
return tags
# Checks whether the plugin configuration has explicitly
# configured this submodule to be ignored
def _ignore_submodule(self, path):
try:
checkout = self.submodule_checkout_overrides[path]
except KeyError:
checkout = self.checkout_submodules
return not checkout