
Compare revisions

Changes are shown as if the source revision was being merged into the target revision.

Commits on Source (62)
Showing with 261 additions and 423 deletions
......@@ -15,15 +15,19 @@ before_script:
- df -h
# Store cache in the project directory
- mkdir -p "$(pwd)/cache"
- if [ -d "$(pwd)/cache" ]; then chmod -R a+rw "$(pwd)/cache"; fi
- export XDG_CACHE_HOME="$(pwd)/cache"
- adduser -m buildstream
- chown -R buildstream:buildstream .
# Run premerge commits
#
pytest:
stage: test
script:
- python3 setup.py test --index-url invalid://uri
# We run as a simple user to test for permission issues
- su buildstream -c 'python3 setup.py test --index-url invalid://uri'
- mkdir -p coverage-pytest/
- cp .coverage.* coverage-pytest/coverage.pytest
artifacts:
......@@ -38,7 +42,10 @@ integration_linux:
script:
- pip3 install --no-index .
- cd integration-tests
- ./run-test.sh --arg --colors --cov ../.coveragerc --sources ${XDG_CACHE_HOME}/buildstream/sources test
# We run as a simple user to test for permission issues
- su buildstream -c './run-test.sh --arg --colors --cov ../.coveragerc --sources ${XDG_CACHE_HOME}/buildstream/sources test'
- cd ..
- mkdir -p coverage-linux/
- cp integration-tests/.coverage coverage-linux/coverage.linux
......@@ -59,7 +66,10 @@ pytest_unix:
# disappear unless we mark it as user-installed.
- dnf mark install fuse-libs
- dnf erase -y bubblewrap ostree
# Since the unix platform is required to run as root, no user change required
- python3 setup.py test --index-url invalid://uri
- mkdir -p coverage-pytest-unix
- cp .coverage.* coverage-pytest-unix/coverage.pytest-unix
artifacts:
......@@ -73,7 +83,10 @@ integration_unix:
script:
- pip3 install --no-index .
- cd integration-tests
# Since the unix platform is required to run as root, no user change required
- ./run-test.sh --arg --colors --cov ../.coveragerc --sources ${XDG_CACHE_HOME}/buildstream/sources test
- cd ..
- mkdir -p coverage-unix/
- cp integration-tests/.coverage coverage-unix/coverage.unix
......@@ -118,7 +131,7 @@ pages:
- dnf install -y python2
- pip3 install sphinx
- pip3 install sphinx-click
- pip3 install --user -e --no-index .
- pip3 install --user .
- make -C doc
- mv doc/build/html public
artifacts:
......
......@@ -53,11 +53,19 @@ def buildref(element, key):
# Args:
# context (Context): The BuildStream context
# project (Project): The BuildStream project
# enable_push (bool): Whether pushing is allowed
#
# Pushing is explicitly disabled by the platform in some cases,
# like when we are falling back to functioning without using
# user namespaces.
#
class OSTreeCache(ArtifactCache):
def __init__(self, context, project):
def __init__(self, context, project, enable_push):
super().__init__(context, project)
self.enable_push = enable_push
ostreedir = os.path.join(context.artifactdir, 'ostree')
self.repo = _ostree.ensure(ostreedir, False)
......@@ -66,6 +74,11 @@ class OSTreeCache(ArtifactCache):
self._remote_refs = None
def can_push(self):
if self.enable_push:
return super().can_push()
return False
def preflight(self):
if self.can_push() and not self.artifact_push.startswith("/"):
try:
......@@ -88,7 +101,7 @@ class OSTreeCache(ArtifactCache):
#
def contains(self, element, strength=None):
if strength is None:
strength = _KeyStrength.STRONG if self.context.strict_build_plan else _KeyStrength.WEAK
strength = _KeyStrength.STRONG if element._get_strict() else _KeyStrength.WEAK
key = element._get_cache_key(strength)
if not key:
......@@ -128,7 +141,7 @@ class OSTreeCache(ArtifactCache):
#
def remote_contains(self, element, strength=None):
if strength is None:
strength = _KeyStrength.STRONG if self.context.strict_build_plan else _KeyStrength.WEAK
strength = _KeyStrength.STRONG if element._get_strict() else _KeyStrength.WEAK
key = element._get_cache_key(strength)
if not key:
......@@ -160,7 +173,7 @@ class OSTreeCache(ArtifactCache):
# resolve weak cache key, if artifact is missing for strong cache key
# and the context allows use of weak cache keys
if not rev and not self.context.strict_build_plan:
if not rev and not element._get_strict():
ref = buildref(element, element._get_cache_key(strength=_KeyStrength.WEAK))
rev = _ostree.checksum(self.repo, ref)
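The hunks above replace the global `context.strict_build_plan` check with a per-element `element._get_strict()` call when choosing cache-key strength. A minimal sketch of that selection logic, using a hypothetical `KeyStrength` enum standing in for BuildStream's `_KeyStrength`:

```python
from enum import Enum

class KeyStrength(Enum):
    STRONG = 0
    WEAK = 1

def resolve_strength(strength, element_is_strict):
    # When no strength is requested, fall back to the element's own
    # strictness: strict elements key on exact dependency versions
    # (STRONG), non-strict elements tolerate dependency changes (WEAK).
    if strength is None:
        return KeyStrength.STRONG if element_is_strict else KeyStrength.WEAK
    return strength

assert resolve_strength(None, True) is KeyStrength.STRONG
assert resolve_strength(None, False) is KeyStrength.WEAK
assert resolve_strength(KeyStrength.WEAK, True) is KeyStrength.WEAK
```

Moving the decision onto the element is what allows strictness to become a per-project setting (see the `Context._get_strict()` hunk later in this diff) rather than a single global flag.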
......
......@@ -251,7 +251,7 @@ class TarCache(ArtifactCache):
#
def contains(self, element, strength=None):
if strength is None:
strength = _KeyStrength.STRONG if self.context.strict_build_plan else _KeyStrength.WEAK
strength = _KeyStrength.STRONG if element._get_strict() else _KeyStrength.WEAK
key = element._get_cache_key(strength)
......
......@@ -203,7 +203,7 @@ def cli(context, **kwargs):
def build(app, target, all, track):
"""Build elements in a pipeline"""
app.initialize(target, rewritable=track, inconsistent=track)
app.initialize(target, rewritable=track, inconsistent=track, fetch_remote_refs=True)
app.print_heading()
try:
app.pipeline.build(app.scheduler, all, track)
......@@ -314,7 +314,7 @@ def pull(app, target, deps):
none: No dependencies, just the element itself
all: All dependencies
"""
app.initialize(target)
app.initialize(target, fetch_remote_refs=True)
try:
to_pull = app.pipeline.deps_elements(deps)
app.pipeline.pull(app.scheduler, to_pull)
......@@ -344,7 +344,7 @@ def push(app, target, deps):
none: No dependencies, just the element itself
all: All dependencies
"""
app.initialize(target)
app.initialize(target, fetch_remote_refs=True)
try:
to_push = app.pipeline.deps_elements(deps)
app.pipeline.push(app.scheduler, to_push)
......@@ -370,10 +370,12 @@ def push(app, target, deps):
@click.option('--format', '-f', metavar='FORMAT', default=None,
type=click.STRING,
help='Format string for each element')
@click.option('--downloadable', default=False, is_flag=True,
help="Refresh downloadable state")
@click.argument('target',
type=click.Path(dir_okay=False, readable=True))
@click.pass_obj
def show(app, target, deps, except_, order, format):
def show(app, target, deps, except_, order, format, downloadable):
"""Show elements in the pipeline
By default this will show all of the dependencies of the
......@@ -420,7 +422,7 @@ def show(app, target, deps, except_, order, format):
bst show target.bst --format \\
$'---------- %{name} ----------\\n%{vars}'
"""
app.initialize(target)
app.initialize(target, fetch_remote_refs=downloadable)
try:
dependencies = app.pipeline.deps_elements(deps, except_)
except PipelineError as e:
......@@ -548,7 +550,7 @@ def source_bundle(app, target, force, directory,
dependencies = app.pipeline.deps_elements('all', except_)
app.print_heading(dependencies)
app.pipeline.source_bundle(app.scheduler, dependencies, force, track,
compression, except_, directory)
compression, directory)
click.echo("")
except _BstError as e:
click.echo("")
......@@ -669,13 +671,13 @@ def workspace_list(app):
context.load(config)
except _BstError as e:
click.echo("Error loading user configuration: {}".format(e))
sys.exit(1)
sys.exit(-1)
try:
project = Project(directory, context)
except _BstError as e:
click.echo("Error loading project: {}".format(e))
sys.exit(1)
sys.exit(-1)
workspaces = []
for element_name, source_index, directory in project._list_workspaces():
......@@ -702,6 +704,7 @@ class App():
def __init__(self, main_options):
self.main_options = main_options
self.messaging_enabled = False
self.startup_messages = []
self.logger = None
self.status = None
self.target = None
......@@ -758,7 +761,7 @@ class App():
#
# Initialize the main pipeline
#
def initialize(self, target, rewritable=False, inconsistent=False):
def initialize(self, target, rewritable=False, inconsistent=False, fetch_remote_refs=False):
self.target = target
profile_start(Topics.LOAD_PIPELINE, target.replace(os.sep, '-') + '-' +
......@@ -772,7 +775,7 @@ class App():
self.context.load(config)
except _BstError as e:
click.echo("Error loading user configuration: %s" % str(e))
sys.exit(1)
sys.exit(-1)
# Override things in the context from our command line options,
# the command line when used, trumps the config files.
......@@ -830,19 +833,20 @@ class App():
self.project = Project(directory, self.context)
except _BstError as e:
click.echo("Error loading project: %s" % str(e))
sys.exit(1)
sys.exit(-1)
try:
self.pipeline = Pipeline(self.context, self.project, target,
inconsistent=inconsistent,
rewritable=rewritable,
fetch_remote_refs=fetch_remote_refs,
load_ticker=self.load_ticker,
resolve_ticker=self.resolve_ticker,
remote_ticker=self.remote_ticker,
cache_ticker=self.cache_ticker)
except _BstError as e:
click.echo("Error loading pipeline: %s" % str(e))
sys.exit(1)
sys.exit(-1)
# Create our status printer, only available in interactive
self.status = Status(self.content_profile, self.format_profile,
......@@ -1020,6 +1024,11 @@ class App():
styling=self.colors,
deps=deps)
# Print any held messages from startup after printing the heading
for message in self.startup_messages:
self.message_handler(message, self.context)
self.startup_messages = []
#
# Print a summary of the queues
#
......@@ -1036,6 +1045,8 @@ class App():
# Drop messages by default in the beginning while
# loading the pipeline, unless debug is specified.
if not self.messaging_enabled:
if message.message_type in unconditional_messages:
self.startup_messages.append(message)
return
# Drop status messages from the UI if not verbose, we'll still see
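The two hunks above introduce a hold-and-flush pattern: unconditional messages that arrive before the UI is ready are buffered in `startup_messages`, then replayed after the heading is printed. A simplified standalone sketch of that pattern (names here are illustrative, not BuildStream's actual API):

```python
class MessageBuffer:
    """Hold early messages until the UI is ready, then flush them in order."""

    def __init__(self):
        self.enabled = False
        self.held = []
        self.printed = []

    def handle(self, message, unconditional=False):
        if not self.enabled:
            # Drop ordinary messages during startup, but keep the
            # unconditional ones so they can be replayed later.
            if unconditional:
                self.held.append(message)
            return
        self.printed.append(message)

    def flush(self):
        # Called once the heading has been printed.
        self.enabled = True
        for message in self.held:
            self.printed.append(message)
        self.held = []

buf = MessageBuffer()
buf.handle("warn: no user namespaces", unconditional=True)
buf.handle("status: loading")          # dropped during startup
buf.flush()
assert buf.printed == ["warn: no user namespaces"]
```

This matters because the platform probe (later in this diff) emits its user-namespace warning before the pipeline heading exists; without the buffer, that warning would be silently dropped.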
......
......@@ -179,11 +179,11 @@ class ElementName(Widget):
def render(self, message):
element_id = message.task_id or message.unique_id
if element_id is not None:
plugin = _plugin_lookup(element_id)
name = plugin.name
else:
name = ''
if element_id is None:
return ""
plugin = _plugin_lookup(element_id)
name = plugin.name
# Sneak the action name in with the element name
action_name = message.action_name
......@@ -226,10 +226,12 @@ class CacheKey(Widget):
missing = False
key = ' ' * self.key_length
element_id = message.task_id or message.unique_id
if element_id is not None:
plugin = _plugin_lookup(element_id)
if isinstance(plugin, Element):
_, key, missing = plugin._get_full_display_key()
if element_id is None:
return ""
plugin = _plugin_lookup(element_id)
if isinstance(plugin, Element):
_, key, missing = plugin._get_full_display_key()
if message.message_type in [MessageType.FAIL, MessageType.BUG]:
text = self.err_profile.fmt(key)
......
......@@ -50,7 +50,7 @@ class OptionPool():
self.element_path = element_path
# jinja2 environment, with default globals cleared out of the way
self.environment = jinja2.Environment()
self.environment = jinja2.Environment(undefined=jinja2.StrictUndefined)
self.environment.globals = []
# load()
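The option pool now constructs its jinja2 environment with `StrictUndefined`, so referencing an undefined variable in a conditional raises instead of silently rendering as empty. This is standard jinja2 behavior (jinja2 is a third-party dependency):

```python
import jinja2

# Default behaviour: an undefined name renders as an empty string,
# which can silently hide a typo in an option name.
lenient = jinja2.Environment()
assert lenient.from_string("{{ missing }}").render() == ""

# With StrictUndefined, the same template raises UndefinedError.
strict = jinja2.Environment(undefined=jinja2.StrictUndefined)
try:
    strict.from_string("{{ missing }}").render()
except jinja2.UndefinedError:
    pass  # the typo is surfaced instead of swallowed
```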
......@@ -226,8 +226,12 @@ class OptionPool():
"{}: Conditional statement has more than one key".format(p))
expression, value = tuples[0]
if self.evaluate(expression):
_yaml.composite(node, value)
try:
if self.evaluate(expression):
_yaml.composite(node, value)
except LoadError as e:
# Prepend the provenance of the error
raise LoadError(e.reason, "{}: {}".format(p, e)) from e
return True
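The hunk above wraps the composite call so a `LoadError` raised deeper in the YAML machinery is re-raised with the conditional's provenance prepended. The general pattern, sketched with a hypothetical `LoadError` class:

```python
class LoadError(Exception):
    def __init__(self, reason, message):
        super().__init__(message)
        self.reason = reason
        self.message = message

def apply_conditional(provenance, apply):
    try:
        apply()
    except LoadError as e:
        # Re-raise with the location of the conditional prepended,
        # keeping the original error as the cause for debugging.
        raise LoadError(e.reason, "{}: {}".format(provenance, e.message)) from e

def bad():
    raise LoadError("illegal-composite", "cannot compose list onto dict")

try:
    apply_conditional("project.conf [line 10]", bad)
except LoadError as e:
    assert e.message.startswith("project.conf [line 10]: ")
```

The `from e` chaining preserves the original traceback, so the user sees both where the conditional was written and what actually went wrong inside it.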
......
......@@ -105,6 +105,7 @@ class Planner():
# current source refs will not be the effective refs.
# rewritable (bool): Whether the loaded files should be rewritable
# this is a bit more expensive due to deep copies
# fetch_remote_refs (bool): Whether to attempt to check remote artifact server for new refs
# load_ticker (callable): A function which will be called for each loaded element
# resolve_ticker (callable): A function which will be called for each resolved element
# cache_ticker (callable): A function which will be called for each element
......@@ -126,6 +127,7 @@ class Pipeline():
def __init__(self, context, project, target,
inconsistent=False,
rewritable=False,
fetch_remote_refs=False,
load_ticker=None,
resolve_ticker=None,
remote_ticker=None,
......@@ -144,7 +146,8 @@ class Pipeline():
load_ticker(None)
# Load selected platform
self.platform = Platform.get_platform(context, project)
Platform._create_instance(context, project)
self.platform = Platform.get_platform()
self.artifacts = self.platform.artifactcache
# Create the factories after resolving the project
......@@ -173,13 +176,13 @@ class Pipeline():
self.project._set_workspace(element, source, workspace)
if self.artifacts.can_fetch():
if fetch_remote_refs and self.artifacts.can_fetch():
try:
if remote_ticker:
remote_ticker(context.artifact_pull)
remote_ticker(self.artifacts.artifact_pull)
self.artifacts.fetch_remote_refs()
except _ArtifactError:
self.message(self.target, MessageType.WARN, "Failed to fetch remote refs")
self.message(MessageType.WARN, "Failed to fetch remote refs")
self.artifacts.set_offline()
for element in self.dependencies(Scope.ALL):
......@@ -225,7 +228,7 @@ class Pipeline():
"Try tracking these elements first with `bst track`\n\n"
for element in inconsistent:
detail += " " + element.name + "\n"
self.message(self.target, MessageType.ERROR, "Inconsistent pipeline", detail=detail)
self.message(MessageType.ERROR, "Inconsistent pipeline", detail=detail)
raise PipelineError()
# Generator function to iterate over only the elements
......@@ -239,13 +242,10 @@ class Pipeline():
# Local message propagator
#
def message(self, plugin, message_type, message, **kwargs):
def message(self, message_type, message, **kwargs):
args = dict(kwargs)
self.context._message(
Message(plugin._get_unique_id(),
message_type,
message,
**args))
Message(None, message_type, message, **args))
# Internal: Instantiates plugin-provided Element and Source instances
# from MetaElement and MetaSource objects
......@@ -290,14 +290,14 @@ class Pipeline():
def can_push_remote_artifact_cache(self):
if self.artifacts.can_push():
starttime = datetime.datetime.now()
self.message(self.target, MessageType.START, "Checking connectivity to remote artifact cache")
self.message(MessageType.START, "Checking connectivity to remote artifact cache")
try:
self.artifacts.preflight()
except _ArtifactError as e:
self.message(self.target, MessageType.WARN, str(e),
self.message(MessageType.WARN, str(e),
elapsed=datetime.datetime.now() - starttime)
return False
self.message(self.target, MessageType.SUCCESS, "Connectivity OK",
self.message(MessageType.SUCCESS, "Connectivity OK",
elapsed=datetime.datetime.now() - starttime)
return True
else:
......@@ -315,7 +315,6 @@ class Pipeline():
# Args:
# scheduler (Scheduler): The scheduler to run this pipeline on
# dependencies (list): List of elements to track
# except_ (list): List of elements to except from tracking
#
# If no error is encountered while tracking, then the project files
# are rewritten inline.
......@@ -327,20 +326,20 @@ class Pipeline():
track.enqueue(dependencies)
self.session_elements = len(dependencies)
self.message(self.target, MessageType.START, "Starting track")
self.message(MessageType.START, "Starting track")
elapsed, status = scheduler.run([track])
changed = len(track.processed_elements)
if status == SchedStatus.ERROR:
self.message(self.target, MessageType.FAIL, "Track failed", elapsed=elapsed)
self.message(MessageType.FAIL, "Track failed", elapsed=elapsed)
raise PipelineError()
elif status == SchedStatus.TERMINATED:
self.message(self.target, MessageType.WARN,
self.message(MessageType.WARN,
"Terminated after updating {} source references".format(changed),
elapsed=elapsed)
raise PipelineError()
else:
self.message(self.target, MessageType.SUCCESS,
self.message(MessageType.SUCCESS,
"Updated {} source references".format(changed),
elapsed=elapsed)
......@@ -352,7 +351,6 @@ class Pipeline():
# scheduler (Scheduler): The scheduler to run this pipeline on
# dependencies (list): List of elements to fetch
# track_first (bool): Track new source references before fetching
# except_ (list): List of elements to except from fetching
#
def fetch(self, scheduler, dependencies, track_first):
......@@ -379,20 +377,20 @@ class Pipeline():
fetch.enqueue(plan)
queues = [fetch]
self.message(self.target, MessageType.START, "Fetching {} elements".format(len(plan)))
self.message(MessageType.START, "Fetching {} elements".format(len(plan)))
elapsed, status = scheduler.run(queues)
fetched = len(fetch.processed_elements)
if status == SchedStatus.ERROR:
self.message(self.target, MessageType.FAIL, "Fetch failed", elapsed=elapsed)
self.message(MessageType.FAIL, "Fetch failed", elapsed=elapsed)
raise PipelineError()
elif status == SchedStatus.TERMINATED:
self.message(self.target, MessageType.WARN,
self.message(MessageType.WARN,
"Terminated after fetching {} elements".format(fetched),
elapsed=elapsed)
raise PipelineError()
else:
self.message(self.target, MessageType.SUCCESS,
self.message(MessageType.SUCCESS,
"Fetched {} elements".format(fetched),
elapsed=elapsed)
......@@ -408,7 +406,7 @@ class Pipeline():
#
def build(self, scheduler, build_all, track_first):
if len(self.unused_workspaces) > 0:
self.message(self.target, MessageType.WARN, "Unused workspaces",
self.message(MessageType.WARN, "Unused workspaces",
detail="\n".join([el + "-" + str(src) for el, src, _
in self.unused_workspaces]))
......@@ -443,18 +441,18 @@ class Pipeline():
self.session_elements = len(plan)
self.message(self.target, MessageType.START, "Starting build")
self.message(MessageType.START, "Starting build")
elapsed, status = scheduler.run(queues)
built = len(build.processed_elements)
if status == SchedStatus.ERROR:
self.message(self.target, MessageType.FAIL, "Build failed", elapsed=elapsed)
self.message(MessageType.FAIL, "Build failed", elapsed=elapsed)
raise PipelineError()
elif status == SchedStatus.TERMINATED:
self.message(self.target, MessageType.WARN, "Terminated", elapsed=elapsed)
self.message(MessageType.WARN, "Terminated", elapsed=elapsed)
raise PipelineError()
else:
self.message(self.target, MessageType.SUCCESS, "Build Complete", elapsed=elapsed)
self.message(MessageType.SUCCESS, "Build Complete", elapsed=elapsed)
# checkout()
#
......@@ -482,7 +480,7 @@ class Pipeline():
# commands for cross-build artifacts.
can_integrate = (self.context.host_arch == self.context.target_arch)
if not can_integrate:
self.message(self.target, MessageType.WARN,
self.message(MessageType.WARN,
"Host-incompatible checkout -- no integration commands can be run")
# Stage deps into a temporary sandbox first
......@@ -546,15 +544,15 @@ class Pipeline():
fetched = len(fetch.processed_elements)
if status == SchedStatus.ERROR:
self.message(self.target, MessageType.FAIL, "Tracking failed", elapsed=elapsed)
self.message(MessageType.FAIL, "Tracking failed", elapsed=elapsed)
raise PipelineError()
elif status == SchedStatus.TERMINATED:
self.message(self.target, MessageType.WARN,
self.message(MessageType.WARN,
"Terminated after fetching {} elements".format(fetched),
elapsed=elapsed)
raise PipelineError()
else:
self.message(self.target, MessageType.SUCCESS,
self.message(MessageType.SUCCESS,
"Fetched {} elements".format(fetched), elapsed=elapsed)
if not no_checkout:
......@@ -655,20 +653,20 @@ class Pipeline():
pull.enqueue(plan)
queues = [pull]
self.message(self.target, MessageType.START, "Pulling {} artifacts".format(len(plan)))
self.message(MessageType.START, "Pulling {} artifacts".format(len(plan)))
elapsed, status = scheduler.run(queues)
pulled = len(pull.processed_elements)
if status == SchedStatus.ERROR:
self.message(self.target, MessageType.FAIL, "Pull failed", elapsed=elapsed)
self.message(MessageType.FAIL, "Pull failed", elapsed=elapsed)
raise PipelineError()
elif status == SchedStatus.TERMINATED:
self.message(self.target, MessageType.WARN,
self.message(MessageType.WARN,
"Terminated after pulling {} elements".format(pulled),
elapsed=elapsed)
raise PipelineError()
else:
self.message(self.target, MessageType.SUCCESS,
self.message(MessageType.SUCCESS,
"Pulled {} complete".format(pulled),
elapsed=elapsed)
......@@ -693,20 +691,20 @@ class Pipeline():
push.enqueue(plan)
queues = [push]
self.message(self.target, MessageType.START, "Pushing {} artifacts".format(len(plan)))
self.message(MessageType.START, "Pushing {} artifacts".format(len(plan)))
elapsed, status = scheduler.run(queues)
pushed = len(push.processed_elements)
if status == SchedStatus.ERROR:
self.message(self.target, MessageType.FAIL, "Push failed", elapsed=elapsed)
self.message(MessageType.FAIL, "Push failed", elapsed=elapsed)
raise PipelineError()
elif status == SchedStatus.TERMINATED:
self.message(self.target, MessageType.WARN,
self.message(MessageType.WARN,
"Terminated after pushing {} elements".format(pushed),
elapsed=elapsed)
raise PipelineError()
else:
self.message(self.target, MessageType.SUCCESS,
self.message(MessageType.SUCCESS,
"Pushed {} complete".format(pushed),
elapsed=elapsed)
......@@ -739,7 +737,7 @@ class Pipeline():
for element_name in removed:
element = search_tree(element_name)
if element is None:
raise PipelineError("No element named {}".format(element_name))
raise PipelineError("No element named {} in the loaded pipeline".format(element_name))
to_remove.update(element.dependencies(Scope.ALL))
......@@ -801,7 +799,7 @@ class Pipeline():
# directory (str): The directory to checkout the artifact to
#
def source_bundle(self, scheduler, dependencies, force,
track_first, compression, except_, directory):
track_first, compression, directory):
# Find the correct filename for the compression algorithm
tar_location = os.path.join(directory, self.target.normal_name + ".tar")
......
......@@ -20,8 +20,11 @@
import os
import sys
import subprocess
from .. import utils
from .. import PlatformError
from .._message import Message, MessageType
from ..sandbox import SandboxBwrap
from .._artifactcache.ostreecache import OSTreeCache
......@@ -33,11 +36,47 @@ class Linux(Platform):
def __init__(self, context, project):
super().__init__(context, project)
self._artifact_cache = OSTreeCache(context, project)
self._user_ns_available = False
self.check_user_ns_available(context)
self._artifact_cache = OSTreeCache(context, project, self._user_ns_available)
def check_user_ns_available(self, context):
# Here, lets check if bwrap is able to create user namespaces,
# issue a warning if it's not available, and save the state
# locally so that we can inform the sandbox to not try it
# later on.
bwrap = utils.get_host_tool('bwrap')
whoami = utils.get_host_tool('whoami')
try:
output = subprocess.check_output([
bwrap,
'--ro-bind', '/', '/',
'--unshare-user',
'--uid', '0', '--gid', '0',
whoami,
])
output = output.decode('UTF-8').strip()
except subprocess.CalledProcessError:
output = ''
if output == 'root':
self._user_ns_available = True
# Issue a warning
if not self._user_ns_available:
context._message(
Message(None, MessageType.WARN,
"Unable to create user namespaces with bubblewrap, resorting to fallback",
detail="Some builds may not function due to lack of uid / gid 0, " +
"artifacts created will not be trusted for push purposes."))
@property
def artifactcache(self):
return self._artifact_cache
def create_sandbox(self, *args, **kwargs):
# Inform the bubblewrap sandbox as to whether it can use user namespaces or not
kwargs['user_ns_available'] = self._user_ns_available
return SandboxBwrap(*args, **kwargs)
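The probe above spawns bubblewrap with `--unshare-user --uid 0 --gid 0` and checks whether `whoami` inside the sandbox reports `root`; if it does, unprivileged user namespaces work, and otherwise the platform disables push (`enable_push=False` in the `OSTreeCache` hunk earlier) and warns. A standalone sketch of the same probe, returning `False` when bwrap is not on the host:

```python
import shutil
import subprocess

def user_ns_available():
    """Return True if bubblewrap can create a user namespace mapping
    this process to uid/gid 0 (i.e. `whoami` in the sandbox prints root)."""
    bwrap = shutil.which('bwrap')
    whoami = shutil.which('whoami')
    if not bwrap or not whoami:
        return False
    try:
        output = subprocess.check_output([
            bwrap,
            '--ro-bind', '/', '/',
            '--unshare-user',
            '--uid', '0', '--gid', '0',
            whoami,
        ])
    except (subprocess.CalledProcessError, OSError):
        # bwrap exits nonzero (or fails to start) when the kernel
        # forbids unprivileged user namespaces.
        return False
    return output.decode('utf-8').strip() == 'root'
```

Probing once at startup and caching the result avoids paying the subprocess cost per sandbox, and lets the warning be issued exactly once.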
......@@ -27,6 +27,7 @@ from .. import PlatformError, ProgramNotFoundError, ImplError
class Platform():
_instance = None
# Platform()
#
......@@ -42,8 +43,7 @@ class Platform():
self.project = project
@classmethod
def get_platform(cls, *args, **kwargs):
def _create_instance(cls, *args, **kwargs):
if sys.platform.startswith('linux'):
backend = 'linux'
else:
......@@ -62,7 +62,13 @@ class Platform():
else:
raise PlatformError("No such platform: '{}'".format(backend))
return PlatformImpl(*args, **kwargs)
cls._instance = PlatformImpl(*args, **kwargs)
@classmethod
def get_platform(cls):
if not cls._instance:
raise PlatformError("Platform needs to be initialized first")
return cls._instance
##################################################################
# Platform properties #
......
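The `Platform` change above converts the factory classmethod into a two-step singleton: `_create_instance()` picks and constructs the backend once, and `get_platform()` returns it with no arguments, raising if called too early. The shape of that pattern, reduced to its essentials:

```python
class PlatformError(Exception):
    pass

class Platform:
    _instance = None

    @classmethod
    def _create_instance(cls, *args, **kwargs):
        # Select and construct the backend exactly once, up front;
        # later callers need no constructor arguments.
        cls._instance = cls(*args, **kwargs)

    @classmethod
    def get_platform(cls):
        if not cls._instance:
            raise PlatformError("Platform needs to be initialized first")
        return cls._instance

# Accessing before initialization is an error:
try:
    Platform.get_platform()
except PlatformError:
    pass

Platform._create_instance()
assert Platform.get_platform() is Platform._instance
```

This is what lets the pipeline hand the artifact cache to queues and jobs via `Platform.get_platform().artifactcache` without threading `context` and `project` through every call site.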
......@@ -114,7 +114,6 @@ class Job():
# Wait for it to complete
self.watcher = asyncio.get_child_watcher()
self.watcher.attach_loop(self.scheduler.loop)
self.watcher.add_child_handler(self.pid, self.child_complete, self.element)
# shutdown()
......@@ -345,12 +344,15 @@ class Job():
output.flush()
def child_message_handler(self, message, context):
plugin = _plugin_lookup(message.unique_id)
# Tag them on the way out the door...
message.action_name = self.action_name
message.task_id = self.element._get_unique_id()
# Use the plugin for the task for the output, not a plugin
# which might be acting on behalf of the task
plugin = _plugin_lookup(message.task_id)
# Log first
self.child_log(plugin, message, context)
......
......@@ -112,7 +112,10 @@ class Scheduler():
for queue in queues:
queue.attach(self)
# Ensure that we have a fresh new event loop, in case we want
# to run another test in this thread.
self.loop = asyncio.new_event_loop()
asyncio.set_event_loop(self.loop)
# Add timeouts
if self.ticker_callback:
......
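The scheduler hunk above creates a fresh event loop per run instead of reusing the thread's default loop, so a second `run()` in the same thread (as happens in the test suite) does not trip over an already-closed loop. A minimal sketch of the per-session loop pattern:

```python
import asyncio

def run_session(coro_fn):
    # A fresh loop per session: re-running on a closed default loop
    # raises RuntimeError, so each run installs its own loop.
    loop = asyncio.new_event_loop()
    asyncio.set_event_loop(loop)
    try:
        return loop.run_until_complete(coro_fn())
    finally:
        loop.close()

async def work():
    await asyncio.sleep(0)
    return "done"

# Two back-to-back sessions in one thread both succeed:
assert run_session(work) == "done"
assert run_session(work) == "done"
```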
......@@ -743,13 +743,17 @@ def composite_dict(target, source, path=None):
# Like composite_dict(), but raises an all purpose LoadError for convenience
#
def composite(target, source):
provenance = node_get_provenance(source)
if not hasattr(source, 'get'):
raise LoadError(LoadErrorReason.ILLEGAL_COMPOSITE,
"Only values of type 'dict' can be composed.")
source_provenance = node_get_provenance(source)
try:
composite_dict(target, source)
except CompositeTypeError as e:
error_prefix = ""
if provenance:
error_prefix = "[%s]: " % str(provenance)
if source_provenance:
error_prefix = "[%s]: " % str(source_provenance)
raise LoadError(LoadErrorReason.ILLEGAL_COMPOSITE,
"%sExpected '%s' type for configuration '%s', instead received '%s'" %
(error_prefix,
......
......@@ -199,6 +199,19 @@ class BuildElement(Element):
if exitcode != 0:
raise ElementError("Command '{}' failed with exitcode {}".format(cmd, exitcode))
# %{install-root}/%{build-root} should normally not be written
# to - if an element later attempts to stage to a location
# that is not empty, we abort the build - in this case this
# will almost certainly happen.
staged_build = os.path.join(self.get_variable('install-root'),
self.get_variable('build-root'))
if os.path.isdir(staged_build) and os.listdir(staged_build):
self.warn("Writing to %{install-root}/%{build-root}.",
detail="Writing to this directory will almost " +
"certainly cause an error, since later elements " +
"will not be allowed to stage to %{build-root}.")
# Return the payload, this is configurable but is generally
# always the /buildstream/install directory
return self.get_variable('install-root')
......
......@@ -59,7 +59,7 @@ class Context():
self.target_arch = target_arch or host_arch
"""The machine on which the results of the build should execute"""
self.strict_build_plan = True
self.strict_build_plan = None
"""Whether elements must be rebuilt when their dependencies have changed"""
self.sourcedir = None
......@@ -161,15 +161,10 @@ class Context():
_yaml.composite(defaults, user_config)
_yaml.node_validate(defaults, [
'strict', 'sourcedir',
'builddir', 'artifactdir',
'logdir', 'scheduler',
'artifacts', 'logging',
'projects',
'sourcedir', 'builddir', 'artifactdir', 'logdir',
'scheduler', 'artifacts', 'logging', 'projects',
])
self.strict_build_plan = _yaml.node_get(defaults, bool, 'strict')
for dir in ['sourcedir', 'builddir', 'artifactdir', 'logdir']:
# Allow tilde (~) expansion and environment variables in
# path specifications in the config files.
......@@ -218,7 +213,7 @@ class Context():
# Shallow validation of overrides, parts of buildstream which rely
# on the overrides are expected to validate elsewhere.
for project_name, overrides in _yaml.node_items(self._project_overrides):
_yaml.node_validate(overrides, ['artifacts', 'options'])
_yaml.node_validate(overrides, ['artifacts', 'options', 'strict'])
profile_end(Topics.LOAD_CONTEXT, 'load')
......@@ -248,6 +243,25 @@ class Context():
def _get_overrides(self, project_name):
return _yaml.node_get(self._project_overrides, Mapping, project_name, default_value={})
# _get_strict():
#
# Fetch whether we are strict or not
#
# Args:
# project_name (str): The project name
#
# Returns:
# (bool): Whether or not to use strict build plan
#
def _get_strict(self, project_name):
# If it was set by the CLI, it overrides any config
if self.strict_build_plan is not None:
return self.strict_build_plan
overrides = self._get_overrides(project_name)
return _yaml.node_get(overrides, bool, 'strict', default_value=True)
# _get_cache_key():
#
# Returns the cache key, calculating it if necessary
......
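The `_get_strict()` lookup above establishes a precedence order: a CLI setting (stored in `Context.strict_build_plan`, now defaulting to `None` for "unset") beats the per-project `strict` override, which in turn defaults to `True`. A condensed sketch with a simplified `Context` and plain dicts in place of the YAML override store:

```python
class Context:
    """Sketch only: simplified stand-in for BuildStream's Context."""

    def __init__(self, cli_strict=None, overrides=None):
        # None means "not set on the CLI"; True/False means the user
        # forced strict mode on or off for this invocation.
        self.strict_build_plan = cli_strict
        self._project_overrides = overrides or {}

    def _get_strict(self, project_name):
        # If it was set by the CLI, it overrides any config
        if self.strict_build_plan is not None:
            return self.strict_build_plan
        overrides = self._project_overrides.get(project_name, {})
        return overrides.get('strict', True)

assert Context()._get_strict('proj') is True
assert Context(overrides={'proj': {'strict': False}})._get_strict('proj') is False
assert Context(cli_strict=True,
               overrides={'proj': {'strict': False}})._get_strict('proj') is True
```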
......@@ -39,8 +39,17 @@ variables:
# Indicates the build installation directory in the sandbox
install-root: /buildstream/install
# Define some patterns which might be used in multiple
# elements
# Arguments for tooling used when stripping debug symbols
objcopy-link-args: --add-gnu-debuglink
objcopy-extract-args: |
--only-keep-debug --compress-debug-sections
strip-args: |
--remove-section=.comment --remove-section=.note --strip-unneeded
# Generic implementation for stripping debugging symbols
strip-binaries: |
find "%{install-root}" -type f \
......@@ -53,15 +62,16 @@ variables:
fi
debugfile="%{install-root}%{debugdir}/$(basename "$1")"
mkdir -p "$(dirname "$debugfile")"
objcopy --only-keep-debug "$1" "$debugfile"
objcopy %{objcopy-extract-args} "$1" "$debugfile"
chmod 644 "$debugfile"
strip --remove-section=.comment --remove-section=.note --strip-unneeded "$1"
objcopy --add-gnu-debuglink "$debugfile" "$1"' - {} ';'
strip %{strip-args} "$1"
objcopy %{objcopy-link-args} "$debugfile" "$1"' - {} ';'
# Generic implementation for reproducible python builds
fix-pyc-timestamps: |
find "%{install-root}" -name '*.pyc' \
-exec dd if=/dev/zero of={} bs=1 count=4 seek=4 conv=notrunc ';'
find "%{install-root}" -name '*.pyc' -exec \
dd if=/dev/zero of={} bs=1 count=4 seek=4 conv=notrunc ';'
# Base sandbox environment, can be overridden by plugins
......@@ -76,6 +86,9 @@ environment:
HOME: /tmp
TZ: UTC
# For reproducible builds we use 2011-11-11 as a constant
SOURCE_DATE_EPOCH: 1320937200
# List of environment variables which should not be taken into
# account when calculating a cache key for a given element.
#
......
......@@ -10,9 +10,6 @@
# paths.
#
# Whether elements must be rebuilt when their dependencies have changed
strict: True
# Location to store sources
sourcedir: ${XDG_CACHE_HOME}/buildstream/sources
......@@ -97,19 +94,3 @@ logging:
%{state: >12} %{key} %{name} %{workspace-dirs}
#
# Per project overrides
#
# Some settings defined in the project configuration can be overridden by
# the user configuration.
#
# projects:
#
# project1:
# artifacts:
# pull-url: https://artifacts.com
# push-url: artifacts@artifacts.com:artifacts
# push-port: 443
#
# project2:
# ...
......@@ -718,7 +718,7 @@ class Element(Plugin):
self.__strong_cached = None
if strength is None:
strength = _KeyStrength.STRONG if self.get_context().strict_build_plan else _KeyStrength.WEAK
strength = _KeyStrength.STRONG if self._get_strict() else _KeyStrength.WEAK
if recalculate is not False:
if self.__cached is None and self._get_cache_key() is not None:
......@@ -766,7 +766,7 @@ class Element(Plugin):
self.__remotely_strong_cached = None
if strength is None:
strength = _KeyStrength.STRONG if self.get_context().strict_build_plan else _KeyStrength.WEAK
strength = _KeyStrength.STRONG if self._get_strict() else _KeyStrength.WEAK
if recalculate is not False:
if self.__remotely_cached is None and self._get_cache_key() is not None:
......@@ -967,7 +967,7 @@ class Element(Plugin):
if self._consistency() == Consistency.INCONSISTENT:
cache_key = None
elif context.strict_build_plan or self._cached(strength=_KeyStrength.STRONG):
elif self._get_strict() or self._cached(strength=_KeyStrength.STRONG):
cache_key = self._get_cache_key()
elif self._cached():
cache_key = self._get_cache_key_from_artifact()
......@@ -1382,6 +1382,10 @@ class Element(Plugin):
#
def _stage_sources_at(self, directory):
with self.timed_activity("Staging sources", silent_nested=True):
if os.path.isdir(directory) and os.listdir(directory):
raise ElementError("Staging directory '{}' is not empty".format(directory))
for source in self.__sources:
source._stage(directory)
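The new check in `_stage_sources_at()` fails fast when the target directory already has contents, since staging over existing files risks silent clobbering. A standalone sketch, with `RuntimeError` standing in for BuildStream's `ElementError`:

```python
import os

def assert_staging_dir_empty(directory):
    """Raise if `directory` exists and is non-empty, mirroring the
    guard added to Element._stage_sources_at(). RuntimeError is a
    stand-in for ElementError."""
    if os.path.isdir(directory) and os.listdir(directory):
        raise RuntimeError(
            "Staging directory '{}' is not empty".format(directory))
```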
......@@ -1390,6 +1394,19 @@ class Element(Plugin):
# Ensure deterministic owners of sources at build time
utils._set_deterministic_user(directory)
# _get_strict()
#
# Convenience method to check strict build plan, since
# the element carries its project reference
#
# Returns:
# (bool): Whether the build plan is strict for this element
#
def _get_strict(self):
project = self.get_project()
context = self.get_context()
return context._get_strict(project.name)
#############################################################
# Private Local Methods #
#############################################################
......@@ -1397,7 +1414,7 @@ class Element(Plugin):
def __sandbox(self, directory, stdout=None, stderr=None):
context = self.get_context()
project = self.get_project()
platform = Platform.get_platform(context, project)
platform = Platform.get_platform()
if directory is not None and os.path.exists(directory):
sandbox = platform.create_sandbox(context, project,
......
......@@ -30,7 +30,11 @@ _last_exception = None
def _get_last_exception():
return _last_exception
global _last_exception
le = _last_exception
_last_exception = None
return le
# BstError is an internal base exception class for BuildStream
......
#!/usr/bin/env python3
#
# Copyright (C) 2017 Codethink Limited
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU Lesser General Public
# License as published by the Free Software Foundation; either
# version 2 of the License, or (at your option) any later version.
#
# This library is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public
# License along with this library. If not, see <http://www.gnu.org/licenses/>.
#
# Authors:
# Jonathan Maw <jonathan.maw@codethink.co.uk>
"""Dpkg build element
A :mod:`BuildElement <buildstream.buildelement>` implementation for using
dpkg elements
Default Configuration
~~~~~~~~~~~~~~~~~~~~~
The dpkg default configuration:
.. literalinclude:: ../../../buildstream/plugins/elements/dpkg_build.yaml
:language: yaml
Public data
~~~~~~~~~~~
This plugin writes to an element's public data.
split-rules
-----------
This plugin overwrites the element's split-rules with a list of its own
creation, creating a split domain for every package it detects.
e.g.
.. code:: yaml
public:
split-rules:
foo:
- /sbin/foo
- /usr/bin/bar
bar:
- /etc/quux
dpkg-data
---------
control
'''''''
The control file will be written as raw text into the control field.
e.g.
.. code:: yaml
public:
dpkg-data:
foo:
control: |
Source: foo
Section: blah
Build-depends: bar (>= 1337), baz
...
name
''''
The name of the plugin will be written to the name field.
e.g.
.. code:: yaml
public:
dpkg-data:
foo:
name: foobar
package-scripts
---------------
preinst, postinst, prerm and postrm scripts may be written to the
package if they are detected. They are written as raw text. e.g.
.. code:: yaml
public:
package-scripts:
foo:
preinst: |
#!/usr/bin/bash
/sbin/ldconfig
bar:
postinst: |
#!/usr/bin/bash
/usr/share/fonts/generate_fonts.sh
"""
import filecmp
import os
import re
from buildstream import BuildElement, utils
# Element implementation for the 'dpkg' kind.
class DpkgElement(BuildElement):
def _get_packages(self, sandbox):
controlfile = os.path.join("debian", "control")
controlpath = os.path.join(
sandbox.get_directory(),
self.get_variable('build-root').lstrip(os.sep),
controlfile
)
with open(controlpath) as f:
return re.findall(r"Package:\s*(.+)\n", f.read())
def configure(self, node):
# __original_commands is needed for cache-key generation,
# as commands can be altered during builds and invalidate the key
super().configure(node)
self.__original_commands = dict(self.commands)
def get_unique_key(self):
key = super().get_unique_key()
# Overriding because we change self.commands mid-build, making it
# unsuitable to be included in the cache key.
for domain, cmds in self.__original_commands.items():
key[domain] = cmds
return key
def assemble(self, sandbox):
# Replace <PACKAGES> if no variable was set
packages = self._get_packages(sandbox)
self.commands = dict([
(group, [
c.replace("<PACKAGES>", " ".join(packages)) for c in commands
])
for group, commands in self.commands.items()
])
collectdir = super().assemble(sandbox)
bad_overlaps = set()
new_split_rules = {}
new_dpkg_data = {}
new_package_scripts = {}
for package in packages:
package_path = os.path.join(sandbox.get_directory(),
self.get_variable('build-root').lstrip(os.sep),
'debian', package)
# Exclude DEBIAN files because they're pulled in as public metadata
contents = [x for x in utils.list_relative_paths(package_path)
if x != "." and not x.startswith("DEBIAN")]
new_split_rules[package] = contents
# Check for any overlapping files that are different.
# Since we're storing all these files together, we need to warn
# because clobbering is bad!
for content_file in contents:
for split_package, split_contents in new_split_rules.items():
for split_file in split_contents:
content_file_path = os.path.join(package_path,
content_file.lstrip(os.sep))
split_file_path = os.path.join(os.path.dirname(package_path),
split_package,
split_file.lstrip(os.sep))
if (content_file == split_file and
os.path.isfile(content_file_path) and
not filecmp.cmp(content_file_path, split_file_path)):
bad_overlaps.add(content_file)
# Store /DEBIAN metadata for each package.
# DEBIAN/control goes into bst.dpkg-data.<package>.control
controlpath = os.path.join(package_path, "DEBIAN", "control")
if not os.path.exists(controlpath):
self.error("{}: package {} doesn't have a DEBIAN/control in {}!"
.format(self.name, package, package_path))
with open(controlpath, "r") as f:
controldata = f.read()
new_dpkg_data[package] = {"control": controldata, "name": package}
# DEBIAN/{pre,post}{inst,rm} scripts go into bst.package-scripts.<package>.<script>
scriptfiles = ["preinst", "postinst", "prerm", "postrm"]
for s in scriptfiles:
path = os.path.join(package_path, "DEBIAN", s)
if os.path.exists(path):
if package not in new_package_scripts:
new_package_scripts[package] = {}
with open(path, "r") as f:
data = f.read()
new_package_scripts[package][s] = data
bstdata = self.get_public_data("bst")
bstdata["split-rules"] = new_split_rules
bstdata["dpkg-data"] = new_dpkg_data
if new_package_scripts:
bstdata["package-scripts"] = new_package_scripts
self.set_public_data("bst", bstdata)
if bad_overlaps:
self.warn("Destructive overlaps found in some files!", "\n".join(bad_overlaps))
return collectdir
# Plugin entry point
def setup():
return DpkgElement
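The package discovery in `DpkgElement._get_packages()` relies on a single regex over `debian/control`: every binary package stanza contributes one `Package:` line. A minimal illustration with an inline control file:

```python
import re

# Each "Package:" line in debian/control names one binary package;
# the same pattern as in _get_packages() above.
control = """\
Source: foo
Section: misc

Package: foo
Architecture: any

Package: foo-dev
Architecture: all
"""

packages = re.findall(r"Package:\s*(.+)\n", control)
assert packages == ["foo", "foo-dev"]
```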
# Dpkg default configurations
variables:
rulesfile: "debian/rules"
build: "%{rulesfile} build"
binary: "env DH_OPTIONS='--destdir=.' %{rulesfile} binary"
# packages' default value will be automatically replaced with
# defaults calculated from debian/control. Replace this with a
# space-separated list of packages to have more control over
# what gets generated.
#
# e.g.
# packages: "foo foo-dev foo-doc"
#
packages: <PACKAGES>
install-packages: |
for pkg in %{packages}; do
cp -a debian/${pkg}/* %{install-root}
done
clear-debian: |
rm -r %{install-root}/DEBIAN
patch: |
if grep -q "3.0 (quilt)" debian/source/format; then
quilt push -a
fi
# Set this if the sources cannot handle parallelization.
#
# notparallel: True
config:
# Commands for configuring the software
#
configure-commands:
- |
%{patch}
# Commands for building the software
#
build-commands:
- |
%{build}
- |
%{binary}
# Commands for installing the software into a
# destination folder
#
install-commands:
- |
%{install-packages}
- |
%{clear-debian}
# Commands for stripping debugging information out of
# installed binaries
#
strip-commands:
- |
%{strip-binaries}
# Use max-jobs CPUs for building and enable verbosity
environment:
MAKEFLAGS: -j%{max-jobs}
V: 1
DH_VERBOSE: 1
QUILT_PATCHES: debian/patches
# And don't consider MAKEFLAGS or V as something which may
# affect build output.
environment-nocache:
- MAKEFLAGS
- V
- DH_VERBOSE