Compare revisions

Changes are shown as if the source revision were being merged into the target revision.
Commits on Source (50)
Showing with 964 additions and 296 deletions
......@@ -79,32 +79,46 @@ source_dist:
- cd ../..
- mkdir -p coverage-linux/
- cp dist/buildstream/.coverage coverage-linux/coverage."${CI_JOB_NAME}"
except:
- schedules
artifacts:
paths:
- coverage-linux/
tests-debian-9:
image: buildstream/testsuite-debian:9-master-119-552f5fc6
image: buildstream/testsuite-debian:9-master-123-7ce6581b
<<: *linux-tests
except:
- schedules
tests-fedora-27:
image: buildstream/testsuite-fedora:27-master-119-552f5fc6
image: buildstream/testsuite-fedora:27-master-123-7ce6581b
<<: *linux-tests
except:
- schedules
tests-fedora-28:
image: buildstream/testsuite-fedora:28-master-119-552f5fc6
image: buildstream/testsuite-fedora:28-master-123-7ce6581b
<<: *linux-tests
except:
- schedules
tests-ubuntu-18.04:
image: buildstream/testsuite-ubuntu:18.04-master-119-552f5fc6
image: buildstream/testsuite-ubuntu:18.04-master-123-7ce6581b
<<: *linux-tests
except:
- schedules
overnight-fedora-28-aarch64:
image: buildstream/testsuite-fedora:aarch64-28-master-123-7ce6581b
tags:
- aarch64
<<: *linux-tests
only:
- schedules
tests-unix:
# Use fedora here, to a) run a test on fedora and b) ensure that we
# can get rid of ostree - this is not possible with debian-8
image: buildstream/testsuite-fedora:27-master-119-552f5fc6
image: buildstream/testsuite-fedora:27-master-123-7ce6581b
stage: test
variables:
BST_FORCE_BACKEND: "unix"
......
......@@ -1547,6 +1547,24 @@ Tests that run a sandbox should be decorated with::
and use the integration cli helper.
You should first aim to write tests that exercise your changes from the cli.
This is so that the testing is end-to-end, and the changes are guaranteed to
work for the end-user. The cli is considered stable, and so tests written in
terms of it are unlikely to require updating as the internals of the software
change over time.
It may be impractical to sufficiently examine some changes this way. For
example, the number of cases to test and the running time of each test may be
too high. It may also be difficult to contrive circumstances to cover every
line of the change. If this is the case, next you can consider also writing
unit tests that work more directly on the changes.
It is important to write unit tests in such a way that they do not break due to
changes unrelated to what they are meant to test. For example, if the test
relies on a lot of BuildStream internals, a large refactoring will likely
require the test to be rewritten. Pure functions that only rely on the Python
Standard Library are excellent candidates for unit testing.
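As a concrete illustration, a minimal sketch of such a unit test (the
`format_size` helper is hypothetical, standing in for any pure function that
only uses the Python Standard Library)::

    # test_format_size.py: unit-testing a pure function needs no
    # BuildStream internals, so refactoring will not break it.
    def format_size(num_bytes):
        # Hypothetical helper: render a byte count as a human-readable string.
        for unit in ('B', 'K', 'M', 'G', 'T'):
            if num_bytes < 1024:
                return '{:.1f}{}'.format(num_bytes, unit)
            num_bytes /= 1024
        return '{:.1f}P'.format(num_bytes)

    def test_format_size():
        assert format_size(0) == '0.0B'
        assert format_size(2048) == '2.0K'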
Measuring performance
---------------------
......
......@@ -38,13 +38,23 @@ buildstream 1.3.1
a bug fix to workspaces so they can be built in workspaces too.
o Creating a build shell through the interactive mode or `bst shell --build`
will now use the cached build tree. It is now easier to debug local build
failures.
will now use the cached build tree if available locally. It is now easier to
debug local build failures.
o `bst shell --sysroot` now takes any directory that contains a sysroot,
instead of just a specially-formatted build-root with a `root` and `scratch`
subdirectory.
o Because elements' build trees are now cached in their respective artifacts,
  their size has in some cases increased significantly. In *most* cases the
  build trees are not used when building targets, so by default `bst pull` and
  `bst build` will not fetch build trees from remotes. This behaviour can be
  overridden with the cli main option '--pull-buildtrees', or the user
  configuration cache group option 'pull-buildtrees: True'. The override will
  also add the build tree to already cached artifacts. When attempting to
  populate an artifactcache server with cached artifacts, only 'complete'
  elements can be pushed; if an element is expected to have a populated build
  tree, it must be cached before pushing.
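For example, a hedged sketch of the two override mechanisms described above
(the element name is illustrative):

    # In the user configuration file, cache group:
    cache:
      pull-buildtrees: True

    # Or per invocation, with the cli main option:
    bst --pull-buildtrees pull hello.bst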
=================
buildstream 1.1.5
......
......@@ -476,6 +476,22 @@ class ArtifactCache():
return self.cas.contains(ref)
# contains_subdir_artifact():
#
# Check whether an artifact element contains a digest for a subdir
# which is populated in the cache, i.e. non-dangling.
#
# Args:
# element (Element): The Element to check
# key (str): The cache key to use
# subdir (str): The subdir to check
#
# Returns: True if the subdir exists & is populated in the cache, False otherwise
#
def contains_subdir_artifact(self, element, key, subdir):
ref = self.get_artifact_fullname(element, key)
return self.cas.contains_subdir_artifact(ref, subdir)
# list_artifacts():
#
# List artifacts in this cache in LRU order.
......@@ -533,6 +549,7 @@ class ArtifactCache():
# Args:
# element (Element): The Element to extract
# key (str): The cache key to use
# subdir (str): Optional specific subdir to extract
#
# Raises:
# ArtifactError: In cases there was an OSError, or if the artifact
......@@ -540,12 +557,12 @@ class ArtifactCache():
#
# Returns: path to extracted artifact
#
def extract(self, element, key):
def extract(self, element, key, subdir=None):
ref = self.get_artifact_fullname(element, key)
path = os.path.join(self.extractdir, element._get_project().name, element.normal_name)
return self.cas.extract(ref, path)
return self.cas.extract(ref, path, subdir=subdir)
# commit():
#
......@@ -666,11 +683,13 @@ class ArtifactCache():
# element (Element): The Element whose artifact is to be fetched
# key (str): The cache key to use
# progress (callable): The progress callback, if any
# subdir (str): The optional specific subdir to pull
# excluded_subdirs (list): The optional list of subdirs to not pull
#
# Returns:
# (bool): True if pull was successful, False if artifact was not available
#
def pull(self, element, key, *, progress=None):
def pull(self, element, key, *, progress=None, subdir=None, excluded_subdirs=None):
ref = self.get_artifact_fullname(element, key)
project = element._get_project()
......@@ -680,8 +699,13 @@ class ArtifactCache():
display_key = element._get_brief_display_key()
element.status("Pulling artifact {} <- {}".format(display_key, remote.spec.url))
if self.cas.pull(ref, remote, progress=progress):
if self.cas.pull(ref, remote, progress=progress, subdir=subdir, excluded_subdirs=excluded_subdirs):
element.info("Pulled artifact {} <- {}".format(display_key, remote.spec.url))
if subdir:
# Attempt to extract the subdir into the artifact extract dir if the dir
# already exists without containing the subdir. If the artifact extract dir
# does not exist, a full extraction will be performed.
self.extract(element, key, subdir)
# no need to pull from additional remotes
return True
else:
......
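Taken together, a hedged sketch of how a caller could drive the new
partial-pull path (the method names are those added above; `artifacts`,
`element` and `key` are assumed to be in scope):

    # Sketch: complete a partial artifact by pulling only its build tree.
    if not artifacts.contains_subdir_artifact(element, key, 'buildtree'):
        # With subdir=, pull() also extracts the subdir into an existing
        # artifact extract dir, completing a previously partial extraction.
        artifacts.pull(element, key, subdir='buildtree')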
......@@ -24,7 +24,6 @@ import os
import stat
import tempfile
import uuid
import errno
from urllib.parse import urlparse
import grpc
......@@ -82,6 +81,27 @@ class CASCache():
# This assumes that the repository doesn't have any dangling pointers
return os.path.exists(refpath)
# contains_subdir_artifact():
#
# Check whether the specified artifact element tree has a digest for a subdir
# which is populated in the cache, i.e. non-dangling.
#
# Args:
# ref (str): The ref to check
# subdir (str): The subdir to check
#
# Returns: True if the subdir exists & is populated in the cache, False otherwise
#
def contains_subdir_artifact(self, ref, subdir):
tree = self.resolve_ref(ref)
# This assumes that the subdir digest is present in the element tree
subdirdigest = self._get_subdir(tree, subdir)
objpath = self.objpath(subdirdigest)
# True if subdir content is cached or if empty as expected
return os.path.exists(objpath)
# extract():
#
# Extract cached directory for the specified ref if it hasn't
......@@ -90,37 +110,44 @@ class CASCache():
# Args:
# ref (str): The ref whose directory to extract
# path (str): The destination path
# subdir (str): Optional specific dir to extract
#
# Raises:
# CASError: In cases there was an OSError, or if the ref did not exist.
#
# Returns: path to extracted directory
#
def extract(self, ref, path):
def extract(self, ref, path, subdir=None):
tree = self.resolve_ref(ref, update_mtime=True)
dest = os.path.join(path, tree.hash)
originaldest = dest = os.path.join(path, tree.hash)
# If artifact is already extracted, check if the optional subdir
# has also been extracted. If the artifact has not been extracted
# a full extraction would include the optional subdir
if os.path.isdir(dest):
# directory has already been extracted
return dest
if subdir:
if not os.path.isdir(os.path.join(dest, subdir)):
dest = os.path.join(dest, subdir)
tree = self._get_subdir(tree, subdir)
else:
return dest
else:
return dest
with tempfile.TemporaryDirectory(prefix='tmp', dir=self.tmpdir) as tmpdir:
checkoutdir = os.path.join(tmpdir, ref)
self._checkout(checkoutdir, tree)
os.makedirs(os.path.dirname(dest), exist_ok=True)
try:
os.rename(checkoutdir, dest)
utils.move_atomic(checkoutdir, dest)
except utils.DirectoryExistsError:
# Another process beat us to the rename; ignore it
pass
except OSError as e:
# With rename it's possible to get either ENOTEMPTY or EEXIST
# in the case that the destination path is a not empty directory.
#
# If rename fails with these errors, another process beat
# us to it so just ignore.
if e.errno not in [errno.ENOTEMPTY, errno.EEXIST]:
raise CASError("Failed to extract directory for ref '{}': {}".format(ref, e)) from e
raise CASError("Failed to extract directory for ref '{}': {}".format(ref, e)) from e
return dest
return originaldest
# commit():
#
......@@ -193,11 +220,13 @@ class CASCache():
# ref (str): The ref to pull
# remote (CASRemote): The remote repository to pull from
# progress (callable): The progress callback, if any
# subdir (str): The optional specific subdir to pull
# excluded_subdirs (list): The optional list of subdirs to not pull
#
# Returns:
# (bool): True if pull was successful, False if ref was not available
#
def pull(self, ref, remote, *, progress=None):
def pull(self, ref, remote, *, progress=None, subdir=None, excluded_subdirs=None):
try:
remote.init()
......@@ -209,7 +238,12 @@ class CASCache():
tree.hash = response.digest.hash
tree.size_bytes = response.digest.size_bytes
self._fetch_directory(remote, tree)
# Check if the element artifact is present, if so just fetch the subdir.
if subdir and os.path.exists(self.objpath(tree)):
self._fetch_subdir(remote, tree, subdir)
else:
# Fetch artifact, excluded_subdirs determined in pullqueue
self._fetch_directory(remote, tree, excluded_subdirs=excluded_subdirs)
self.set_ref(ref, tree)
......@@ -607,8 +641,10 @@ class CASCache():
stat.S_IRGRP | stat.S_IXGRP | stat.S_IROTH | stat.S_IXOTH)
for dirnode in directory.directories:
fullpath = os.path.join(dest, dirnode.name)
self._checkout(fullpath, dirnode.digest)
# Don't try to checkout a dangling ref
if os.path.exists(self.objpath(dirnode.digest)):
fullpath = os.path.join(dest, dirnode.name)
self._checkout(fullpath, dirnode.digest)
for symlinknode in directory.symlinks:
# symlink
......@@ -863,11 +899,14 @@ class CASCache():
# Args:
# remote (Remote): The remote to use.
# dir_digest (Digest): Digest object for the directory to fetch.
# excluded_subdirs (list): The optional list of subdirs to not fetch
#
def _fetch_directory(self, remote, dir_digest):
def _fetch_directory(self, remote, dir_digest, *, excluded_subdirs=None):
fetch_queue = [dir_digest]
fetch_next_queue = []
batch = _CASBatchRead(remote)
if not excluded_subdirs:
excluded_subdirs = []
while len(fetch_queue) + len(fetch_next_queue) > 0:
if not fetch_queue:
......@@ -882,8 +921,9 @@ class CASCache():
directory.ParseFromString(f.read())
for dirnode in directory.directories:
batch = self._fetch_directory_node(remote, dirnode.digest, batch,
fetch_queue, fetch_next_queue, recursive=True)
if dirnode.name not in excluded_subdirs:
batch = self._fetch_directory_node(remote, dirnode.digest, batch,
fetch_queue, fetch_next_queue, recursive=True)
for filenode in directory.files:
batch = self._fetch_directory_node(remote, filenode.digest, batch,
......@@ -892,6 +932,10 @@ class CASCache():
# Fetch final batch
self._fetch_directory_batch(remote, batch, fetch_queue, fetch_next_queue)
def _fetch_subdir(self, remote, tree, subdir):
subdirdigest = self._get_subdir(tree, subdir)
self._fetch_directory(remote, subdirdigest)
def _fetch_tree(self, remote, digest):
# download but do not store the Tree object
with tempfile.NamedTemporaryFile(dir=self.tmpdir) as out:
......
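A hedged sketch of how the pull queue is expected to use these options, per
the "excluded_subdirs determined in pullqueue" comment above (the `ref` and
`remote` values are assumed to be in scope):

    # Default pull: fetch everything except the build tree.
    cascache.pull(ref, remote, excluded_subdirs=['buildtree'])

    # Opt-in pull (--pull-buildtrees): name the build tree explicitly, so a
    # partially cached artifact is completed with just that subdir.
    cascache.pull(ref, remote, subdir='buildtree')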
......@@ -31,7 +31,7 @@ from ._exceptions import LoadError, LoadErrorReason, BstError
from ._message import Message, MessageType
from ._profile import Topics, profile_start, profile_end
from ._artifactcache import ArtifactCache
from ._workspaces import Workspaces
from ._workspaces import Workspaces, WorkspaceLocals
from .plugin import _plugin_lookup
......@@ -48,7 +48,7 @@ from .plugin import _plugin_lookup
#
class Context():
def __init__(self):
def __init__(self, workspace_locals=None):
# Filename indicating which configuration file was used, or None for the defaults
self.config_origin = None
......@@ -104,6 +104,9 @@ class Context():
# What to do when a build fails in non interactive mode
self.sched_error_action = 'continue'
# Whether or not to attempt to pull build trees globally
self.pull_buildtrees = None
# Whether elements must be rebuilt when their dependencies have changed
self._strict_build_plan = None
......@@ -118,6 +121,7 @@ class Context():
self._projects = []
self._project_overrides = {}
self._workspaces = None
self._workspace_locals = workspace_locals or WorkspaceLocals()
self._log_handle = None
self._log_filename = None
self.config_cache_quota = 'infinity'
......@@ -178,13 +182,16 @@ class Context():
# our artifactdir - the artifactdir may not have been created
# yet.
cache = _yaml.node_get(defaults, Mapping, 'cache')
_yaml.node_validate(cache, ['quota'])
_yaml.node_validate(cache, ['quota', 'pull-buildtrees'])
self.config_cache_quota = _yaml.node_get(cache, str, 'quota', default_value='infinity')
# Load artifact share configuration
self.artifact_cache_specs = ArtifactCache.specs_from_config_node(defaults)
# Load pull build trees configuration
self.pull_buildtrees = _yaml.node_get(cache, bool, 'pull-buildtrees')
# Load logging config
logging = _yaml.node_get(defaults, Mapping, 'logging')
_yaml.node_validate(logging, [
......@@ -272,6 +279,9 @@ class Context():
def get_workspaces(self):
return self._workspaces
def get_workspace_locals(self):
return self._workspace_locals
# get_overrides():
#
# Fetch the override dictionary for the active project. This returns
......
......@@ -39,6 +39,7 @@ from .._stream import Stream
from .._versions import BST_FORMAT_VERSION
from .. import _yaml
from .._scheduler import ElementJob
from .._workspaces import WorkspaceLocals
# Import frontend assets
from . import Profile, LogLine, Status
......@@ -79,6 +80,7 @@ class App():
self._fail_messages = {} # Failure messages by unique plugin id
self._interactive_failures = None # Whether to handle failures interactively
self._started = False # Whether a session has started
self._workspace_locals = WorkspaceLocals() # A collection of workspace local data
# UI Colors Profiles
self._content_profile = Profile(fg='yellow')
......@@ -164,7 +166,7 @@ class App():
# Load the Context
#
try:
self.context = Context()
self.context = Context(self._workspace_locals)
self.context.load(config)
except BstError as e:
self._error_exit(e, "Error loading user configuration")
......@@ -182,7 +184,8 @@ class App():
'fetchers': 'sched_fetchers',
'builders': 'sched_builders',
'pushers': 'sched_pushers',
'network_retries': 'sched_network_retries'
'network_retries': 'sched_network_retries',
'pull_buildtrees': 'pull_buildtrees'
}
for cli_option, context_attr in override_map.items():
option_value = self._main_options.get(cli_option)
......@@ -400,6 +403,21 @@ class App():
if self.stream:
self.stream.cleanup()
# guess_element()
#
# Attempts to interpret which element the user intended to run commands on
#
# Returns:
# (str) The name of the element, or an empty string
def guess_element(self):
directory = self._main_options['directory']
local = self._workspace_locals.get(directory)
if local.has_projects():
return local.get_default_element()
else:
return ""
############################################################
# Abstract Class Methods #
############################################################
......
......@@ -59,18 +59,9 @@ def complete_target(args, incomplete):
:return: all the possible user-specified completions for the param
"""
from .. import utils
project_conf = 'project.conf'
def ensure_project_dir(directory):
directory = os.path.abspath(directory)
while not os.path.isfile(os.path.join(directory, project_conf)):
parent_dir = os.path.dirname(directory)
if directory == parent_dir:
break
directory = parent_dir
return directory
# First resolve the directory, in case there is an
# active --directory/-C option
#
......@@ -89,7 +80,7 @@ def complete_target(args, incomplete):
else:
# Check if this directory or any of its parent directories
# contain a project config file
base_directory = ensure_project_dir(base_directory)
base_directory = utils._search_upward_for_file(base_directory, project_conf)
# Now parse the project.conf just to find the element path,
# this is unfortunately a bit heavy.
......@@ -219,6 +210,8 @@ def print_version(ctx, param, value):
help="Specify a project option")
@click.option('--default-mirror', default=None,
help="The mirror to fetch from first, before attempting other mirrors")
@click.option('--pull-buildtrees', is_flag=True, default=None,
help="Include an element's build tree when pulling remote element artifacts")
@click.pass_context
def cli(context, **kwargs):
"""Build and manipulate BuildStream projects
......@@ -319,6 +312,12 @@ def build(app, elements, all_, track_, track_save, track_all, track_except, trac
if track_save:
click.echo("WARNING: --track-save is deprecated, saving is now unconditional", err=True)
if not all_ and not elements:
# Attempt to divine the element from the workspace you're in
guessed_target = app.guess_element()
if guessed_target:
elements = (guessed_target,)
if track_all:
track_ = elements
......@@ -373,6 +372,12 @@ def fetch(app, elements, deps, track_, except_, track_cross_junctions):
"Since tracking modifies the build plan, all elements will be tracked.", err=True)
deps = PipelineSelection.ALL
if not elements:
# Attempt to divine the element from the workspace you're in
guessed_target = app.guess_element()
if guessed_target:
elements = (guessed_target,)
with app.initialized(session_name="Fetch"):
app.stream.fetch(elements,
selection=deps,
......@@ -409,6 +414,12 @@ def track(app, elements, deps, except_, cross_junctions):
none: No dependencies, just the specified elements
all: All dependencies of all specified elements
"""
if not elements:
# Attempt to divine the element from the workspace you're in
guessed_target = app.guess_element()
if guessed_target:
elements = (guessed_target,)
with app.initialized(session_name="Track"):
# Substitute 'none' for 'redirect' so that element redirections
# will be done
......@@ -445,6 +456,12 @@ def pull(app, elements, deps, remote):
none: No dependencies, just the element itself
all: All dependencies
"""
if not elements:
# Attempt to divine the element from the workspace you're in
guessed_target = app.guess_element()
if guessed_target:
elements = (guessed_target,)
with app.initialized(session_name="Pull"):
app.stream.pull(elements, selection=deps, remote=remote)
......@@ -473,6 +490,11 @@ def push(app, elements, deps, remote):
none: No dependencies, just the element itself
all: All dependencies
"""
if not elements:
# Attempt to divine the element from the workspace you're in
guessed_target = app.guess_element()
if guessed_target:
elements = (guessed_target,)
with app.initialized(session_name="Push"):
app.stream.push(elements, selection=deps, remote=remote)
......@@ -543,6 +565,12 @@ def show(app, elements, deps, except_, order, format_):
bst show target.bst --format \\
$'---------- %{name} ----------\\n%{vars}'
"""
if not elements:
# Attempt to divine the element from the workspace you're in
guessed_target = app.guess_element()
if guessed_target:
elements = (guessed_target,)
with app.initialized():
dependencies = app.stream.load_selection(elements,
selection=deps,
......@@ -572,7 +600,7 @@ def show(app, elements, deps, except_, order, format_):
help="Mount a file or directory into the sandbox")
@click.option('--isolate', is_flag=True, default=False,
help='Create an isolated build sandbox')
@click.argument('element',
@click.argument('element', required=False,
type=click.Path(readable=False))
@click.argument('command', type=click.STRING, nargs=-1)
@click.pass_obj
......@@ -603,6 +631,14 @@ def shell(app, element, sysroot, mount, isolate, build_, command):
scope = Scope.RUN
with app.initialized():
if not element:
# Attempt to divine the element from the workspace you're in
guessed_target = app.guess_element()
if guessed_target:
element = guessed_target
else:
raise AppError('Error: Missing argument "ELEMENT".')
dependencies = app.stream.load_selection((element,), selection=PipelineSelection.NONE)
element = dependencies[0]
prompt = app.shell_prompt(element)
......@@ -640,14 +676,27 @@ def shell(app, element, sysroot, mount, isolate, build_, command):
help="Create a tarball from the artifact contents instead "
"of a file tree. If LOCATION is '-', the tarball "
"will be dumped to the standard output.")
@click.argument('element',
@click.argument('element', required=False,
type=click.Path(readable=False))
@click.argument('location', type=click.Path())
@click.argument('location', type=click.Path(), required=False)
@click.pass_obj
def checkout(app, element, location, force, deps, integrate, hardlinks, tar):
"""Checkout a built artifact to the specified location
"""
if not element and not location:
click.echo("ERROR: LOCATION is not specified", err=True)
sys.exit(-1)
if element and not location:
# Nasty hack to get around click's optional args
location = element
element = app.guess_element()
if not element:
click.echo("ERROR: ELEMENT is not specified", err=True)
sys.exit(-1)
if hardlinks and tar:
click.echo("ERROR: options --hardlinks and --tar conflict", err=True)
sys.exit(-1)
......@@ -662,6 +711,33 @@ def checkout(app, element, location, force, deps, integrate, hardlinks, tar):
tar=tar)
##################################################################
# Source Checkout Command #
##################################################################
@cli.command(name='source-checkout', short_help='Checkout sources for an element')
@click.option('--except', 'except_', multiple=True,
type=click.Path(readable=False),
help="Except certain dependencies")
@click.option('--deps', '-d', default='none',
type=click.Choice(['build', 'none', 'run', 'all']),
help='The dependencies whose sources to checkout (default: none)')
@click.option('--fetch', 'fetch_', default=False, is_flag=True,
help='Fetch elements if they are not fetched')
@click.argument('element',
type=click.Path(readable=False))
@click.argument('location', type=click.Path())
@click.pass_obj
def source_checkout(app, element, location, deps, fetch_, except_):
"""Checkout sources of an element to the specified location
"""
with app.initialized():
app.stream.source_checkout(element,
location=location,
deps=deps,
fetch=fetch_,
except_targets=except_)
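A hedged usage sketch of the new command (the element name and location are
illustrative):

    # Check out the sources of hello.bst and its build dependencies,
    # fetching any sources that are not already cached:
    bst source-checkout --deps build --fetch hello.bst ./sources/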
##################################################################
# Workspace Command #
##################################################################
......@@ -713,15 +789,23 @@ def workspace_open(app, no_checkout, force, track_, element, directory):
help="Remove the path that contains the closed workspace")
@click.option('--all', '-a', 'all_', default=False, is_flag=True,
help="Close all open workspaces")
@click.option('--force', '-f', default=False, is_flag=True,
help="Always close the workspace and/or delete your changes")
@click.argument('elements', nargs=-1,
type=click.Path(readable=False))
@click.pass_obj
def workspace_close(app, remove_dir, all_, elements):
def workspace_close(app, remove_dir, all_, force, elements):
"""Close a workspace"""
if not (all_ or elements):
click.echo('ERROR: no elements specified', err=True)
sys.exit(-1)
# NOTE: I may need to revisit this when implementing multiple projects
# opening one workspace.
element = app.guess_element()
if element:
elements = (element,)
else:
click.echo('ERROR: no elements specified', err=True)
sys.exit(-1)
with app.initialized():
......@@ -735,15 +819,25 @@ def workspace_close(app, remove_dir, all_, elements):
elements = app.stream.redirect_element_names(elements)
# Check that the workspaces in question exist
# Check that the workspaces in question exist, and that it's safe to
# remove them.
nonexisting = []
for element_name in elements:
if not app.stream.workspace_exists(element_name):
nonexisting.append(element_name)
if app.stream.workspace_is_required(element_name):
if app.interactive:
click.echo("Removing '{}' will prevent you from running buildstream commands".format(element_name))
if not click.confirm('Are you sure you want to close this workspace?'):
click.echo('Aborting', err=True)
sys.exit(-1)
elif not force:
raise AppError("Cannot close workspaces. Workspace {} is being used to load the project"
.format(element_name), reason='closing-required-workspace')
if nonexisting:
raise AppError("Workspace does not exist", detail="\n".join(nonexisting))
if app.interactive and remove_dir:
if app.interactive and remove_dir and not force:
if not click.confirm('This will remove all your changes, are you sure?'):
click.echo('Aborting', err=True)
sys.exit(-1)
......@@ -772,7 +866,11 @@ def workspace_reset(app, soft, track_, all_, elements):
with app.initialized():
if not (all_ or elements):
raise AppError('No elements specified to reset')
element = app.guess_element()
if element:
elements = (element,)
else:
raise AppError('No elements specified to reset')
if all_ and not app.stream.workspace_exists():
raise AppError("No open workspaces to reset")
......@@ -816,13 +914,18 @@ def workspace_list(app):
help="Overwrite an existing tarball")
@click.option('--directory', default=os.getcwd(),
help="The directory to write the tarball to")
@click.argument('element',
@click.argument('element', required=False,
type=click.Path(readable=False))
@click.pass_obj
def source_bundle(app, element, force, directory,
track_, compression, except_):
"""Produce a source bundle to be manually executed
"""
if not element:
element = app.guess_element()
if not element:
click.echo("ERROR: ELEMENT is not specified", err=True)
sys.exit(-1)
with app.initialized():
app.stream.source_bundle(element, directory,
track_first=track_,
......
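Putting the element-guessing behaviour together, a hedged end-to-end sketch
(paths and element names are illustrative):

    # From inside an open workspace, commands no longer need an
    # explicit element argument; it is divined from .bstproject.yaml.
    cd ~/workspaces/hello
    bst build
    bst shell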
......@@ -370,7 +370,7 @@ class Pipeline():
detail += " Element: {} is inconsistent\n".format(element._get_full_name())
for source in element.sources():
if source._get_consistency() == Consistency.INCONSISTENT:
detail += " Source {} is missing ref\n".format(source)
detail += " {} is missing ref\n".format(source)
detail += '\n'
detail += "Try tracking these elements first with `bst track`\n"
......@@ -383,6 +383,33 @@ class Pipeline():
detail += " " + element._get_full_name() + "\n"
raise PipelineError("Inconsistent pipeline", detail=detail, reason="inconsistent-pipeline-workspaced")
# assert_sources_cached()
#
# Asserts that sources for the given list of elements are cached.
#
# Args:
# elements (list): The list of elements
#
def assert_sources_cached(self, elements):
uncached = []
with self._context.timed_activity("Checking sources"):
for element in elements:
if element._get_consistency() != Consistency.CACHED:
uncached.append(element)
if uncached:
detail = "Sources are not cached for the following elements:\n\n"
for element in uncached:
detail += " Following sources for element: {} are not cached:\n".format(element._get_full_name())
for source in element.sources():
if source._get_consistency() != Consistency.CACHED:
detail += " {}\n".format(source)
detail += '\n'
detail += "Try fetching these elements first with `bst fetch`,\n" + \
"or run this command with `--fetch` option\n"
raise PipelineError("Uncached sources", detail=detail, reason="uncached-sources")
#############################################################
# Private Methods #
#############################################################
......
......@@ -18,9 +18,9 @@
# Tristan Maat <tristan.maat@codethink.co.uk>
import os
import shutil
import subprocess
from .. import _site
from .. import utils
from ..sandbox import SandboxDummy
......@@ -38,16 +38,18 @@ class Linux(Platform):
self._have_fuse = os.path.exists("/dev/fuse")
bwrap_version = self._get_bwrap_version()
bwrap_version = _site.get_bwrap_version()
if bwrap_version is None:
self._bwrap_exists = False
self._have_good_bwrap = False
self._die_with_parent_available = False
self._json_status_available = False
else:
self._bwrap_exists = True
self._have_good_bwrap = (0, 1, 2) <= bwrap_version
self._die_with_parent_available = (0, 1, 8) <= bwrap_version
self._json_status_available = (0, 3, 2) <= bwrap_version
self._local_sandbox_available = self._have_fuse and self._have_good_bwrap
......@@ -97,6 +99,7 @@ class Linux(Platform):
# Inform the bubblewrap sandbox as to whether it can use user namespaces or not
kwargs['user_ns_available'] = self._user_ns_available
kwargs['die_with_parent_available'] = self._die_with_parent_available
kwargs['json_status_available'] = self._json_status_available
return SandboxBwrap(*args, **kwargs)
def _check_user_ns_available(self):
......@@ -119,21 +122,3 @@ class Linux(Platform):
output = ''
return output == 'root'
def _get_bwrap_version(self):
# Get the current bwrap version
#
# returns None if no bwrap was found
# otherwise returns a tuple of 3 int: major, minor, patch
bwrap_path = shutil.which('bwrap')
if not bwrap_path:
return None
cmd = [bwrap_path, "--version"]
try:
version = str(subprocess.check_output(cmd).split()[1], "utf-8")
except subprocess.CalledProcessError:
return None
return tuple(int(x) for x in version.split("."))
......@@ -94,8 +94,10 @@ class Project():
# The project name
self.name = None
# The project directory
self.directory = self._ensure_project_dir(directory)
self._context = context # The invocation Context, a private member
# The project directory, and whether the project was found from an external workspace
self.directory, self._required_workspace_element = self._find_project_dir(directory)
# Absolute path to where elements are loaded from within the project
self.element_path = None
......@@ -116,7 +118,6 @@ class Project():
#
# Private Members
#
self._context = context # The invocation Context
self._default_mirror = default_mirror # The name of the preferred mirror.
......@@ -370,6 +371,14 @@ class Project():
self._load_second_pass()
# required_workspace_element()
#
# Returns the element whose workspace is required to load this project,
# if any.
#
def required_workspace_element(self):
return self._required_workspace_element
# cleanup()
#
# Cleans up resources used loading elements
......@@ -651,7 +660,7 @@ class Project():
# Source url aliases
output._aliases = _yaml.node_get(config, Mapping, 'aliases', default_value={})
# _ensure_project_dir()
# _find_project_dir()
#
# Returns path of the project directory, if a configuration file is found
# in given directory or any of its parent directories.
......@@ -662,18 +671,26 @@ class Project():
# Raises:
# LoadError if project.conf is not found
#
def _ensure_project_dir(self, directory):
directory = os.path.abspath(directory)
while not os.path.isfile(os.path.join(directory, _PROJECT_CONF_FILE)):
parent_dir = os.path.dirname(directory)
if directory == parent_dir:
# Returns:
# (str) - the directory that contains the project, and
# (str) - the name of the element required to find the project, or an empty string
#
def _find_project_dir(self, directory):
workspace_element = ""
project_directory = utils._search_upward_for_file(directory, _PROJECT_CONF_FILE)
if not project_directory:
workspace_locals = self._context.get_workspace_locals()
workspace_local = workspace_locals.get(directory)
if workspace_local.has_projects():
project_directory = workspace_local.get_default_path()
workspace_element = workspace_local.get_default_element()
else:
raise LoadError(
LoadErrorReason.MISSING_PROJECT_CONF,
'{} not found in current directory or any of its parent directories'
.format(_PROJECT_CONF_FILE))
directory = parent_dir
return directory
return project_directory, workspace_element
def _load_plugin_factories(self, config, output):
plugin_source_origins = [] # Origins of custom sources
......
......@@ -18,6 +18,8 @@
# Tristan Van Berkom <tristan.vanberkom@codethink.co.uk>
import os
import shutil
import subprocess
#
# Private module declaring some info about where the buildstream
......@@ -44,3 +46,22 @@ build_all_template = os.path.join(root, 'data', 'build-all.sh.in')
# Module building script template
build_module_template = os.path.join(root, 'data', 'build-module.sh.in')
def get_bwrap_version():
# Get the current bwrap version
#
# returns None if no bwrap was found
# otherwise returns a tuple of 3 int: major, minor, patch
bwrap_path = shutil.which('bwrap')
if not bwrap_path:
return None
cmd = [bwrap_path, "--version"]
try:
version = str(subprocess.check_output(cmd).split()[1], "utf-8")
except subprocess.CalledProcessError:
return None
return tuple(int(x) for x in version.split("."))
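A minimal usage sketch, mirroring how the platform code above compares the
returned tuple:

    # Sketch: feature-gate on the bwrap version tuple; None means
    # bwrap was not found on PATH.
    version = get_bwrap_version()
    json_status_available = version is not None and (0, 3, 2) <= version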
......@@ -379,27 +379,7 @@ class Stream():
elements, _ = self._load((target,), (), fetch_subprojects=True)
target = elements[0]
if not tar:
try:
os.makedirs(location, exist_ok=True)
except OSError as e:
raise StreamError("Failed to create checkout directory: '{}'"
.format(e)) from e
if not tar:
if not os.access(location, os.W_OK):
raise StreamError("Checkout directory '{}' not writable"
.format(location))
if not force and os.listdir(location):
raise StreamError("Checkout directory '{}' not empty"
.format(location))
elif os.path.exists(location) and location != '-':
if not os.access(location, os.W_OK):
raise StreamError("Output file '{}' not writable"
.format(location))
if not force and os.path.exists(location):
raise StreamError("Output file '{}' already exists"
.format(location))
self._check_location_writable(location, force=force, tar=tar)
# Stage deps into a temporary sandbox first
try:
......@@ -443,6 +423,42 @@ class Stream():
raise StreamError("Error while staging dependencies into a sandbox"
": '{}'".format(e), detail=e.detail, reason=e.reason) from e
# source_checkout()
#
# Checkout sources of the target element to the specified location
#
# Args:
# target (str): The target element whose sources to checkout
# location (str): Location to checkout the sources to
# deps (str): The dependencies to checkout
# fetch (bool): Whether to fetch missing sources
# except_targets (list): List of targets to except from staging
#
def source_checkout(self, target, *,
location=None,
deps='none',
fetch=False,
except_targets=()):
self._check_location_writable(location)
elements, _ = self._load((target,), (),
selection=deps,
except_targets=except_targets,
fetch_subprojects=True)
# Assert all sources are cached
if fetch:
self._fetch(elements)
self._pipeline.assert_sources_cached(elements)
# Stage all sources determined by scope
try:
self._write_element_sources(location, elements)
except BstError as e:
raise StreamError("Error while writing sources"
": '{}'".format(e), detail=e.detail, reason=e.reason) from e
# workspace_open
#
# Open a project workspace
......@@ -516,6 +532,11 @@ class Stream():
with target.timed_activity("Staging sources to {}".format(directory)):
target._open_workspace()
workspace_locals = self._context.get_workspace_locals()
project = self._context.get_toplevel_project()
workspace_local = workspace_locals.add(directory, project.directory, target._get_full_name())
workspace_local.write()
workspaces.save_config()
self._message(MessageType.INFO, "Saved workspace configuration")
......@@ -540,6 +561,9 @@ class Stream():
except OSError as e:
raise StreamError("Could not remove '{}': {}"
.format(workspace.get_absolute_path(), e)) from e
else:
workspace_locals = self._context.get_workspace_locals()
workspace_locals.remove(workspace.get_absolute_path())
# Delete the workspace and save the configuration
workspaces.delete_workspace(element_name)
......@@ -583,6 +607,8 @@ class Stream():
for element in elements:
workspace = workspaces.get_workspace(element._get_full_name())
workspace_path = workspace.get_absolute_path()
workspace_locals = self._context.get_workspace_locals()
workspace_local = workspace_locals.get(workspace_path)
if soft:
workspace.prepared = False
self._message(MessageType.INFO, "Reset workspace state for {} at: {}"
......@@ -603,6 +629,8 @@ class Stream():
with element.timed_activity("Staging sources to {}".format(workspace_path)):
element._open_workspace()
workspace_local.write()
self._message(MessageType.INFO,
"Reset workspace for {} at: {}".format(element.name,
workspace_path))
......@@ -633,6 +661,20 @@ class Stream():
return False
# workspace_is_required()
#
# Checks whether the workspace belonging to element_name is required to
# load the project
#
# Args:
# element_name (str): The element whose workspace may be required
#
# Returns:
# (bool): True if the workspace is required
def workspace_is_required(self, element_name):
required_elm = self._project.required_workspace_element()
return required_elm == element_name
# workspace_list
#
# Serializes the workspaces and dumps them in YAML to stdout.
......@@ -726,7 +768,7 @@ class Stream():
if self._write_element_script(source_directory, element)
]
self._write_element_sources(tempdir, elements)
self._write_element_sources(os.path.join(tempdir, "source"), elements)
self._write_build_script(tempdir, elements)
self._collect_sources(tempdir, tar_location,
target.normal_name, compression)
......@@ -1068,6 +1110,39 @@ class Stream():
self._enqueue_plan(fetch_plan)
self._run()
# _check_location_writable()
#
# Check if given location is writable.
#
# Args:
# location (str): Destination path
# force (bool): Allow files to be overwritten
# tar (bool): Whether destination is a tarball
#
# Raises:
# (StreamError): If the destination is not writable
#
def _check_location_writable(self, location, force=False, tar=False):
if not tar:
try:
os.makedirs(location, exist_ok=True)
except OSError as e:
raise StreamError("Failed to create destination directory: '{}'"
.format(e)) from e
if not os.access(location, os.W_OK):
raise StreamError("Destination directory '{}' not writable"
.format(location))
if not force and os.listdir(location):
raise StreamError("Destination directory '{}' not empty"
.format(location))
elif os.path.exists(location) and location != '-':
if not os.access(location, os.W_OK):
raise StreamError("Output file '{}' not writable"
.format(location))
if not force and os.path.exists(location):
raise StreamError("Output file '{}' already exists"
.format(location))
# Helper function for checkout()
#
def _checkout_hardlinks(self, sandbox_vroot, directory):
......@@ -1089,11 +1164,10 @@ class Stream():
# Write all source elements to the given directory
def _write_element_sources(self, directory, elements):
for element in elements:
source_dir = os.path.join(directory, "source")
element_source_dir = os.path.join(source_dir, element.normal_name)
os.makedirs(element_source_dir)
element._stage_sources_at(element_source_dir)
element_source_dir = self._get_element_dirname(directory, element)
if list(element.sources()):
os.makedirs(element_source_dir)
element._stage_sources_at(element_source_dir)
# Write a master build script to the sandbox
def _write_build_script(self, directory, elements):
......@@ -1122,3 +1196,25 @@ class Stream():
with tarfile.open(tar_name, permissions) as tar:
tar.add(directory, arcname=element_name)
# _get_element_dirname()
#
# Get path to directory for an element based on its normal name.
#
# For cross-junction elements, the path will be prefixed with the name
# of the junction element.
#
# Args:
# directory (str): path to base directory
# element (Element): the element
#
# Returns:
# (str): Path to directory for this element
#
def _get_element_dirname(self, directory, element):
parts = [element.normal_name]
while element._get_project() != self._project:
element = element._get_project().junction
parts.append(element.normal_name)
return os.path.join(directory, *reversed(parts))
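For instance, a hedged illustration with made-up names: for an element
`app.bst` reached through a junction element `subproject.bst`, the loop
collects `parts == ['app', 'subproject']`, so the result is:

    # Sketch of the reverse-join for a cross-junction element:
    os.path.join(directory, 'subproject', 'app')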
......@@ -25,6 +25,165 @@ from ._exceptions import LoadError, LoadErrorReason
BST_WORKSPACE_FORMAT_VERSION = 3
BST_WORKSPACE_LOCAL_FORMAT_VERSION = 1
WORKSPACE_LOCAL_FILE = ".bstproject.yaml"
# WorkspaceLocal()
#
# An object to contain various helper functions and data required for
# referring from a workspace back to buildstream.
#
# Args:
# directory (str): The directory that the workspace exists in
# project_path (str): The project path used to refer back
# to buildstream projects.
# element_name (str): The name of the element used to create this workspace.
class WorkspaceLocal():
def __init__(self, directory, project_path="", element_name=""):
self._projects = []
self._directory = directory
assert (project_path and element_name) or (not project_path and not element_name)
if project_path:
self._add_project(project_path, element_name)
def has_projects(self):
return True if self._projects else False
# get_default_path()
#
# Retrieves the default path to a project.
#
# Returns:
# (str): The path to a project
def get_default_path(self):
return self._projects[0]['project-path']
# get_default_element()
#
# Retrieves the name of the element that owns this workspace.
#
# Returns:
# (str): The name of an element
def get_default_element(self):
return self._projects[0]['element-name']
# to_dict()
#
# Turn the members data into a dict for serialization purposes
#
# Returns:
# (dict): A dict representation of the WorkspaceLocal
#
def to_dict(self):
ret = {
'projects': self._projects,
'format-version': BST_WORKSPACE_LOCAL_FORMAT_VERSION,
}
return ret
# from_dict()
#
# Loads a new WorkspaceLocal from a simple dictionary
#
# Args:
# directory (str): The directory that the workspace exists in
# dictionary (dict): The dict to generate a WorkspaceLocal from
#
# Returns:
# (WorkspaceLocal): A newly instantiated WorkspaceLocal
@classmethod
def from_dict(cls, directory, dictionary):
# Only know how to handle one format-version at the moment.
format_version = int(dictionary['format-version'])
assert format_version == BST_WORKSPACE_LOCAL_FORMAT_VERSION, \
"Format version {} not found in {}".format(BST_WORKSPACE_LOCAL_FORMAT_VERSION, dictionary)
workspace_local = cls(directory)
for item in dictionary['projects']:
workspace_local._add_project(item['project-path'], item['element-name'])
return workspace_local
# load()
#
# Loads the WorkspaceLocal for a given directory. This directory may be a
# subdirectory of the workspace's directory.
#
# Args:
# directory (str): The directory
# Returns:
# (WorkspaceLocal): The created WorkspaceLocal, if in a workspace, or
# (NoneType): None, if the directory is not inside a workspace.
@classmethod
def load(cls, directory):
local_dir = cls.search_for_dir(directory)
if local_dir:
workspace_file = os.path.join(local_dir, WORKSPACE_LOCAL_FILE)
data_dict = _yaml.load(workspace_file)
return cls.from_dict(local_dir, data_dict)
else:
return None
# write()
#
# Writes the WorkspaceLocal to disk
def write(self):
os.makedirs(self._directory, exist_ok=True)
_yaml.dump(self.to_dict(), self._get_filename())
# search_for_dir()
#
# Returns the directory that contains the workspace local file,
# searching upwards from search_dir.
@staticmethod
def search_for_dir(search_dir):
return utils._search_upward_for_file(search_dir, WORKSPACE_LOCAL_FILE)
def _get_filename(self):
return os.path.join(self._directory, WORKSPACE_LOCAL_FILE)
def _add_project(self, project_path, element_name):
assert (project_path and element_name)
self._projects.append({'project-path': project_path, 'element-name': element_name})
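Based on to_dict() above, the `.bstproject.yaml` written into a workspace
would look roughly like this (the project path and element name are
illustrative):

    # .bstproject.yaml (sketch)
    format-version: 1
    projects:
    - project-path: /home/user/projects/hello
      element-name: hello.bst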
# WorkspaceLocals()
#
# A class to manage workspace local data for multiple workspaces.
#
class WorkspaceLocals():
def __init__(self):
self._locals = {} # Mapping of a workspace directory to its WorkspaceLocal
def get(self, directory):
# NOTE: Later, this will load any WorkspaceLocal found from directory
try:
local = self._locals[directory]
except KeyError:
local = WorkspaceLocal.load(directory)
if not local:
local = WorkspaceLocal(directory)
self._locals[directory] = local
return local
def add(self, directory, project_path='', element_name=''):
local = self.get(directory)
if project_path:
local._add_project(project_path, element_name)
return local
def remove(self, directory, project_path='', element_name=''):
# NOTE: project_path and element_name will only be used when I implement
# multiple owners of a workspace
local = self.get(directory)
path = local._get_filename()
try:
os.unlink(path)
except FileNotFoundError:
pass
# Workspace()
......@@ -174,10 +333,15 @@ class Workspace():
if recalculate or self._key is None:
fullpath = self.get_absolute_path()
excluded_files = [WORKSPACE_LOCAL_FILE]
# Get a list of tuples of the project-relative paths and full paths
if os.path.isdir(fullpath):
filelist = utils.list_relative_paths(fullpath)
filelist = [(relpath, os.path.join(fullpath, relpath)) for relpath in filelist]
filelist = [
(relpath, os.path.join(fullpath, relpath)) for relpath in filelist
if relpath not in excluded_files
]
else:
filelist = [(self.get_absolute_path(), fullpath)]
......
......@@ -35,6 +35,9 @@ cache:
# to the size of the file system containing the cache.
quota: infinity
# Whether to pull build trees when downloading element artifacts
pull-buildtrees: False
#
# Scheduler
#
......
......@@ -85,7 +85,8 @@ import shutil
from . import _yaml
from ._variables import Variables
from ._versions import BST_CORE_ARTIFACT_VERSION
from ._exceptions import BstError, LoadError, LoadErrorReason, ImplError, ErrorDomain
from ._exceptions import BstError, LoadError, LoadErrorReason, ImplError, \
ErrorDomain
from .utils import UtilError
from . import Plugin, Consistency, Scope
from . import SandboxFlags
......@@ -1397,12 +1398,12 @@ class Element(Plugin):
with self.timed_activity("Staging local files at {}"
.format(workspace.get_absolute_path())):
workspace.stage(temp_staging_directory)
elif self._cached():
# We have a cached buildtree to use, instead
# Check if we have a cached buildtree to use
elif self.__cached_buildtree():
artifact_base, _ = self.__extract()
import_dir = os.path.join(artifact_base, 'buildtree')
else:
# No workspace, stage directly
# No workspace or cached buildtree, stage source directly
for source in self.sources():
source._stage(temp_staging_directory)
......@@ -1553,7 +1554,6 @@ class Element(Plugin):
self.__dynamic_public = _yaml.node_copy(self.__public)
# Call the abstract plugin methods
collect = None
try:
# Step 1 - Configure
self.configure_sandbox(sandbox)
......@@ -1564,7 +1564,7 @@ class Element(Plugin):
# Step 4 - Assemble
collect = self.assemble(sandbox) # pylint: disable=assignment-from-no-return
self.__set_build_result(success=True, description="succeeded")
except BstError as e:
except ElementError as e:
# Shelling into a sandbox is useful to debug this error
e.sandbox = True
......@@ -1586,104 +1586,105 @@ class Element(Plugin):
self.warn("Failed to preserve workspace state for failed build sysroot: {}"
.format(e))
if isinstance(e, ElementError):
collect = e.collect # pylint: disable=no-member
self.__set_build_result(success=False, description=str(e), detail=e.detail)
self._cache_artifact(rootdir, sandbox, e.collect)
raise
else:
return self._cache_artifact(rootdir, sandbox, collect)
finally:
if collect is not None:
try:
sandbox_vroot = sandbox.get_virtual_directory()
collectvdir = sandbox_vroot.descend(collect.lstrip(os.sep).split(os.sep))
except VirtualDirectoryError:
# No collect directory existed
collectvdir = None
# Create artifact directory structure
assembledir = os.path.join(rootdir, 'artifact')
filesdir = os.path.join(assembledir, 'files')
logsdir = os.path.join(assembledir, 'logs')
metadir = os.path.join(assembledir, 'meta')
buildtreedir = os.path.join(assembledir, 'buildtree')
os.mkdir(assembledir)
if collect is not None and collectvdir is not None:
os.mkdir(filesdir)
os.mkdir(logsdir)
os.mkdir(metadir)
os.mkdir(buildtreedir)
# Hard link files from collect dir to files directory
if collect is not None and collectvdir is not None:
collectvdir.export_files(filesdir, can_link=True)
try:
sandbox_vroot = sandbox.get_virtual_directory()
sandbox_build_dir = sandbox_vroot.descend(
self.get_variable('build-root').lstrip(os.sep).split(os.sep))
# Hard link files from build-root dir to buildtreedir directory
sandbox_build_dir.export_files(buildtreedir)
except VirtualDirectoryError:
# Directory could not be found. Pre-virtual
# directory behaviour was to continue silently
# if the directory could not be found.
pass
# Copy build log
log_filename = context.get_log_filename()
self._build_log_path = os.path.join(logsdir, 'build.log')
if log_filename:
shutil.copyfile(log_filename, self._build_log_path)
# Store public data
_yaml.dump(_yaml.node_sanitize(self.__dynamic_public), os.path.join(metadir, 'public.yaml'))
# Store result
build_result_dict = {"success": self.__build_result[0], "description": self.__build_result[1]}
if self.__build_result[2] is not None:
build_result_dict["detail"] = self.__build_result[2]
_yaml.dump(build_result_dict, os.path.join(metadir, 'build-result.yaml'))
# ensure we have cache keys
self._assemble_done()
# Store keys.yaml
_yaml.dump(_yaml.node_sanitize({
'strong': self._get_cache_key(),
'weak': self._get_cache_key(_KeyStrength.WEAK),
}), os.path.join(metadir, 'keys.yaml'))
# Store dependencies.yaml
_yaml.dump(_yaml.node_sanitize({
e.name: e._get_cache_key() for e in self.dependencies(Scope.BUILD)
}), os.path.join(metadir, 'dependencies.yaml'))
# Store workspaced.yaml
_yaml.dump(_yaml.node_sanitize({
'workspaced': True if self._get_workspace() else False
}), os.path.join(metadir, 'workspaced.yaml'))
# Store workspaced-dependencies.yaml
_yaml.dump(_yaml.node_sanitize({
'workspaced-dependencies': [
e.name for e in self.dependencies(Scope.BUILD)
if e._get_workspace()
]
}), os.path.join(metadir, 'workspaced-dependencies.yaml'))
with self.timed_activity("Caching artifact"):
artifact_size = utils._get_dir_size(assembledir)
self.__artifacts.commit(self, assembledir, self.__get_cache_keys_for_commit())
if collect is not None and collectvdir is None:
raise ElementError(
"Directory '{}' was not found inside the sandbox, "
"unable to collect artifact contents"
.format(collect))
# Finally cleanup the build dir
cleanup_rootdir()
def _cache_artifact(self, rootdir, sandbox, collect):
if collect is not None:
try:
sandbox_vroot = sandbox.get_virtual_directory()
collectvdir = sandbox_vroot.descend(collect.lstrip(os.sep).split(os.sep))
except VirtualDirectoryError:
# No collect directory existed
collectvdir = None
# Create artifact directory structure
assembledir = os.path.join(rootdir, 'artifact')
filesdir = os.path.join(assembledir, 'files')
logsdir = os.path.join(assembledir, 'logs')
metadir = os.path.join(assembledir, 'meta')
buildtreedir = os.path.join(assembledir, 'buildtree')
os.mkdir(assembledir)
if collect is not None and collectvdir is not None:
os.mkdir(filesdir)
os.mkdir(logsdir)
os.mkdir(metadir)
os.mkdir(buildtreedir)
# Hard link files from collect dir to files directory
if collect is not None and collectvdir is not None:
collectvdir.export_files(filesdir, can_link=True)
try:
sandbox_vroot = sandbox.get_virtual_directory()
sandbox_build_dir = sandbox_vroot.descend(
self.get_variable('build-root').lstrip(os.sep).split(os.sep))
# Hard link files from build-root dir to buildtreedir directory
sandbox_build_dir.export_files(buildtreedir)
except VirtualDirectoryError:
# Directory could not be found. Pre-virtual
# directory behaviour was to continue silently
# if the directory could not be found.
pass
# Copy build log
log_filename = self._get_context().get_log_filename()
self._build_log_path = os.path.join(logsdir, 'build.log')
if log_filename:
shutil.copyfile(log_filename, self._build_log_path)
# Store public data
_yaml.dump(_yaml.node_sanitize(self.__dynamic_public), os.path.join(metadir, 'public.yaml'))
# Store result
build_result_dict = {"success": self.__build_result[0], "description": self.__build_result[1]}
if self.__build_result[2] is not None:
build_result_dict["detail"] = self.__build_result[2]
_yaml.dump(build_result_dict, os.path.join(metadir, 'build-result.yaml'))
# ensure we have cache keys
self._assemble_done()
# Store keys.yaml
_yaml.dump(_yaml.node_sanitize({
'strong': self._get_cache_key(),
'weak': self._get_cache_key(_KeyStrength.WEAK),
}), os.path.join(metadir, 'keys.yaml'))
# Store dependencies.yaml
_yaml.dump(_yaml.node_sanitize({
e.name: e._get_cache_key() for e in self.dependencies(Scope.BUILD)
}), os.path.join(metadir, 'dependencies.yaml'))
# Store workspaced.yaml
_yaml.dump(_yaml.node_sanitize({
'workspaced': True if self._get_workspace() else False
}), os.path.join(metadir, 'workspaced.yaml'))
# Store workspaced-dependencies.yaml
_yaml.dump(_yaml.node_sanitize({
'workspaced-dependencies': [
e.name for e in self.dependencies(Scope.BUILD)
if e._get_workspace()
]
}), os.path.join(metadir, 'workspaced-dependencies.yaml'))
with self.timed_activity("Caching artifact"):
artifact_size = utils._get_dir_size(assembledir)
self.__artifacts.commit(self, assembledir, self.__get_cache_keys_for_commit())
if collect is not None and collectvdir is None:
raise ElementError(
"Directory '{}' was not found inside the sandbox, "
"unable to collect artifact contents"
.format(collect))
return artifact_size
def _get_build_log(self):
......@@ -1691,7 +1692,9 @@ class Element(Plugin):
# _pull_pending()
#
# Check whether the artifact will be pulled.
# Check whether the artifact will be pulled. If the pull operation is to
# include a specific subdir of the element artifact (from cli or user conf)
# then the local cache is queried for the subdir's existence.
#
# Returns:
# (bool): Whether a pull operation is pending
......@@ -1701,8 +1704,15 @@ class Element(Plugin):
# Workspace builds are never pushed to artifact servers
return False
if self.__strong_cached:
# Artifact already in local cache
# Check whether the pull has been invoked with a specific subdir requested
# in the user context, so as to complete a partial artifact
subdir, _ = self.__pull_directories()
if self.__strong_cached and subdir:
# If we've specified a subdir, check if the subdir is cached locally
if self.__artifacts.contains_subdir_artifact(self, self.__strict_cache_key, subdir):
return False
elif self.__strong_cached:
return False
# Pull is pending if artifact remote server available
......@@ -1724,33 +1734,6 @@ class Element(Plugin):
self._update_state()
def _pull_strong(self, *, progress=None):
weak_key = self._get_cache_key(strength=_KeyStrength.WEAK)
key = self.__strict_cache_key
if not self.__artifacts.pull(self, key, progress=progress):
return False
# update weak ref by pointing it to this newly fetched artifact
self.__artifacts.link_key(self, key, weak_key)
return True
def _pull_weak(self, *, progress=None):
weak_key = self._get_cache_key(strength=_KeyStrength.WEAK)
if not self.__artifacts.pull(self, weak_key, progress=progress):
return False
# extract strong cache key from this newly fetched artifact
self._pull_done()
# create tag for strong cache key
key = self._get_cache_key(strength=_KeyStrength.STRONG)
self.__artifacts.link_key(self, weak_key, key)
return True
# _pull():
#
# Pull artifact from remote artifact repository into local artifact cache.
......@@ -1763,11 +1746,15 @@ class Element(Plugin):
def progress(percent, message):
self.status(message)
# Get the optional specific subdir to pull and the optional list of
# subdirs not to pull, based on the user context
subdir, excluded_subdirs = self.__pull_directories()
# Attempt to pull artifact without knowing whether it's available
pulled = self._pull_strong(progress=progress)
pulled = self.__pull_strong(progress=progress, subdir=subdir, excluded_subdirs=excluded_subdirs)
if not pulled and not self._cached() and not context.get_strict():
pulled = self._pull_weak(progress=progress)
pulled = self.__pull_weak(progress=progress, subdir=subdir, excluded_subdirs=excluded_subdirs)
if not pulled:
return False
......@@ -1787,10 +1774,12 @@ class Element(Plugin):
# No push remotes for this element's project
return True
if not self._cached():
# Do not push elements that aren't cached, or that are cached with a dangling buildtree
# artifact, unless the element type is expected to have an empty buildtree directory
if not self.__cached_buildtree():
return True
# Do not push tained artifact
# Do not push tainted artifact
if self.__get_tainted():
return True
......@@ -2674,6 +2663,106 @@ class Element(Plugin):
return utils._deduplicate(keys)
# __pull_strong():
#
# Attempt to pull the given element from the configured artifact caches
# using the strict cache key
#
# Args:
# progress (callable): The progress callback, if any
# subdir (str): The optional specific subdir to pull
# excluded_subdirs (list): The optional list of subdirs to not pull
#
# Returns:
# (bool): Whether or not the pull was successful
#
def __pull_strong(self, *, progress=None, subdir=None, excluded_subdirs=None):
weak_key = self._get_cache_key(strength=_KeyStrength.WEAK)
key = self.__strict_cache_key
if not self.__artifacts.pull(self, key, progress=progress, subdir=subdir,
excluded_subdirs=excluded_subdirs):
return False
# update weak ref by pointing it to this newly fetched artifact
self.__artifacts.link_key(self, key, weak_key)
return True
# __pull_weak():
#
# Attempt to pull the given element from the configured artifact caches
# using the weak cache key
#
# Args:
# progress (callable): The progress callback, if any
# subdir (str): The optional specific subdir to pull
# excluded_subdirs (list): The optional list of subdirs to not pull
#
# Returns:
# (bool): Whether or not the pull was successful
#
def __pull_weak(self, *, progress=None, subdir=None, excluded_subdirs=None):
weak_key = self._get_cache_key(strength=_KeyStrength.WEAK)
if not self.__artifacts.pull(self, weak_key, progress=progress, subdir=subdir,
excluded_subdirs=excluded_subdirs):
return False
# extract strong cache key from this newly fetched artifact
self._pull_done()
# create tag for strong cache key
key = self._get_cache_key(strength=_KeyStrength.STRONG)
self.__artifacts.link_key(self, weak_key, key)
return True
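# Note: after a weak pull the strong cache key only becomes known once
# the fetched artifact's metadata has been read (hence the _pull_done()
# call above); link_key() then tags the artifact with that strong key so
# that later strict lookups resolve to the same artifact.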
# __cached_buildtree():
#
# Check if the cached element artifact contains the expected buildtree
#
# Returns:
# (bool): True if the artifact is cached with its buildtree, False if
# the element is not cached or the expected buildtree is missing
#
def __cached_buildtree(self):
context = self._get_context()
if not self._cached():
return False
elif context.get_strict():
if not self.__artifacts.contains_subdir_artifact(self, self.__strict_cache_key, 'buildtree'):
return False
elif not self.__artifacts.contains_subdir_artifact(self, self.__weak_cache_key, 'buildtree'):
return False
return True
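# Hypothetical usage sketch: an element pulled with the default
# exclusions (i.e. without its buildtree subdir) fails this check, so
# _push() above skips it rather than uploading an artifact with a
# dangling buildtree.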
# __pull_directories():
#
# Determine which directories to include or exclude, given the
# current context
#
# Returns:
# subdir (str): The optional specific subdir to include, based
# on the user context
# excluded_subdirs (list): The optional list of subdirs not to
# pull, adjusted against the subdir value
#
def __pull_directories(self):
context = self._get_context()
# Current default exclusions on pull
excluded_subdirs = ["buildtree"]
subdir = ''
# If buildtrees are to be pulled, remove the value from the exclusion
# list and set the specific subdir
if context.pull_buildtrees:
subdir = "buildtree"
excluded_subdirs.remove(subdir)
return (subdir, excluded_subdirs)
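# Illustration of the two possible return values, depending on whether
# pull_buildtrees is set in the user configuration:
#
#     pull_buildtrees: False  ->  ('', ["buildtree"])
#     pull_buildtrees: True   ->  ("buildtree", [])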
def _overlap_error_detail(f, forbidden_overlap_elements, elements):
if forbidden_overlap_elements:
......
......@@ -19,7 +19,7 @@ variables:
cmake-args: |
-DCMAKE_INSTALL_PREFIX:PATH="%{prefix}" \
-DCMAKE_INSTALL_LIBDIR=%{lib} %{cmake-extra} %{cmake-global} %{cmake-local}
-DCMAKE_INSTALL_LIBDIR:PATH="%{lib}" %{cmake-extra} %{cmake-global} %{cmake-local}
cmake: |
......
......@@ -86,7 +86,6 @@ This plugin also utilises the following configurable core plugin warnings:
"""
import os
import errno
import re
import shutil
from collections.abc import Mapping
......@@ -97,6 +96,7 @@ from configparser import RawConfigParser
from buildstream import Source, SourceError, Consistency, SourceFetcher
from buildstream import utils
from buildstream.plugin import CoreWarnings
from buildstream.utils import move_atomic, DirectoryExistsError
GIT_MODULES = '.gitmodules'
......@@ -141,21 +141,16 @@ class GitMirror(SourceFetcher):
fail="Failed to clone git repository {}".format(url),
fail_temporarily=True)
# Attempt atomic rename into destination, this will fail if
# another process beat us to the punch
try:
os.rename(tmpdir, self.mirror)
move_atomic(tmpdir, self.mirror)
except DirectoryExistsError:
# Another process was quicker to download this repository.
# Let's discard our own
self.source.status("{}: Discarding duplicate clone of {}"
.format(self.source, url))
except OSError as e:
# When renaming and the destination repo already exists, os.rename()
# will fail with ENOTEMPTY, since an empty directory will be silently
# replaced
if e.errno == errno.ENOTEMPTY:
self.source.status("{}: Discarding duplicate clone of {}"
.format(self.source, url))
else:
raise SourceError("{}: Failed to move cloned git repository {} from '{}' to '{}': {}"
.format(self.source, url, tmpdir, self.mirror, e)) from e
raise SourceError("{}: Failed to move cloned git repository {} from '{}' to '{}': {}"
.format(self.source, url, tmpdir, self.mirror, e)) from e
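# A rough sketch of what move_atomic() is assumed to do under the hood:
# a plain os.rename(), with the errno values that signal an existing
# destination translated into DirectoryExistsError, so that callers can
# distinguish "lost the race" from genuine failures:
#
#     try:
#         os.rename(src, dest)
#     except OSError as e:
#         if e.errno in (errno.ENOTEMPTY, errno.EEXIST):
#             raise DirectoryExistsError(...) from e
#         raise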
def _fetch(self, alias_override=None):
url = self.source.translate_url(self.url,
......
......@@ -68,7 +68,6 @@ details on common configuration options for sources.
The ``pip`` plugin is available since :ref:`format version 16 <project_format_version>`
"""
import errno
import hashlib
import os
import re
......@@ -80,6 +79,7 @@ _PYPI_INDEX_URL = 'https://pypi.org/simple/'
# Used only for finding pip command
_PYTHON_VERSIONS = [
'python', # when running in a venv, we might not have the exact version
'python2.7',
'python3.0',
'python3.1',
......@@ -192,13 +192,14 @@ class PipSource(Source):
# process has fetched the sources before us and ensure that we do
# not raise an error in that case.
try:
os.makedirs(self._mirror)
os.rename(package_dir, self._mirror)
except FileExistsError:
return
utils.move_atomic(package_dir, self._mirror)
except utils.DirectoryExistsError:
# Another process beat us to it and has already fetched
# the sources.
pass
except OSError as e:
if e.errno != errno.ENOTEMPTY:
raise
raise SourceError("{}: Failed to move downloaded pip packages from '{}' to '{}': {}"
.format(self, package_dir, self._mirror, e)) from e
def stage(self, directory):
with self.timed_activity("Staging Python packages", silent_nested=True):
......
......@@ -17,6 +17,8 @@
# Authors:
# Andrew Leeming <andrew.leeming@codethink.co.uk>
# Tristan Van Berkom <tristan.vanberkom@codethink.co.uk>
import collections
import json
import os
import sys
import time
......@@ -24,7 +26,8 @@ import errno
import signal
import subprocess
import shutil
from contextlib import ExitStack
from contextlib import ExitStack, suppress
from tempfile import TemporaryFile
import psutil
......@@ -53,6 +56,7 @@ class SandboxBwrap(Sandbox):
super().__init__(*args, **kwargs)
self.user_ns_available = kwargs['user_ns_available']
self.die_with_parent_available = kwargs['die_with_parent_available']
self.json_status_available = kwargs['json_status_available']
def run(self, command, flags, *, cwd=None, env=None):
stdout, stderr = self._get_output()
......@@ -160,24 +164,31 @@ class SandboxBwrap(Sandbox):
gid = self._get_config().build_gid
bwrap_command += ['--uid', str(uid), '--gid', str(gid)]
# Add the command
bwrap_command += command
# bwrap might create some directories while being suid
# and may give them to root gid, if it does, we'll want
# to clean them up after, so record what we already had
# there just in case so that we can safely cleanup the debris.
#
existing_basedirs = {
directory: os.path.exists(os.path.join(root_directory, directory))
for directory in ['tmp', 'dev', 'proc']
}
# Use the MountMap context manager to ensure that any redirected
# mounts through fuse layers are in context and ready for bwrap
# to mount them from.
#
with ExitStack() as stack:
pass_fds = ()
# Improve error reporting with json-status if available
if self.json_status_available:
json_status_file = stack.enter_context(TemporaryFile())
pass_fds = (json_status_file.fileno(),)
bwrap_command += ['--json-status-fd', str(json_status_file.fileno())]
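# The TemporaryFile's descriptor must survive into the bwrap child;
# it is forwarded explicitly via pass_fds in run_bwrap() below, since
# Popen() is otherwise invoked with close_fds=True.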
# Add the command
bwrap_command += command
# bwrap might create some directories while running suid
# and may assign them to the root gid. If it does, we'll want
# to clean them up afterwards, so record what already existed
# there so that we can safely clean up the debris.
#
existing_basedirs = {
directory: os.path.exists(os.path.join(root_directory, directory))
for directory in ['tmp', 'dev', 'proc']
}
# Use the MountMap context manager to ensure that any redirected
# mounts through fuse layers are in context and ready for bwrap
# to mount them from.
#
stack.enter_context(mount_map.mounted(self))
# If we're interactive, we want to inherit our stdin,
......@@ -190,7 +201,7 @@ class SandboxBwrap(Sandbox):
# Run bubblewrap !
exit_code = self.run_bwrap(bwrap_command, stdin, stdout, stderr,
(flags & SandboxFlags.INTERACTIVE))
(flags & SandboxFlags.INTERACTIVE), pass_fds)
# Cleanup things which bwrap might have left behind, while
# everything is still mounted because bwrap can be creating
......@@ -238,10 +249,27 @@ class SandboxBwrap(Sandbox):
# a bug, bwrap mounted a tmpfs here and when it exits, that had better be empty.
pass
if self.json_status_available:
json_status_file.seek(0, 0)
child_exit_code = None
# The JSON status file contains one JSON object per line; the keys
# present in each object identify the type of message. The only
# message relevant to us here is the exit code of the subprocess.
for line in json_status_file:
with suppress(json.decoder.JSONDecodeError):
o = json.loads(line)
if isinstance(o, collections.abc.Mapping) and 'exit-code' in o:
child_exit_code = o['exit-code']
break
if child_exit_code is None:
raise SandboxError("`bwrap' terminated during sandbox setup with exitcode {}".format(exit_code),
reason="bwrap-sandbox-fail")
exit_code = child_exit_code
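# For reference, the status stream is assumed to look something like
# this (one object per line, values hypothetical):
#
#     {"child-pid": 12345}
#     {"exit-code": 0}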
self._vdir._mark_changed()
return exit_code
def run_bwrap(self, argv, stdin, stdout, stderr, interactive):
def run_bwrap(self, argv, stdin, stdout, stderr, interactive, pass_fds):
# Wrapper around subprocess.Popen() with common settings.
#
# This function blocks until the subprocess has terminated.
......@@ -317,6 +345,7 @@ class SandboxBwrap(Sandbox):
# The default is to share file descriptors from the parent process
# to the subprocess, which is rarely good for sandboxing.
close_fds=True,
pass_fds=pass_fds,
stdin=stdin,
stdout=stdout,
stderr=stderr,
......