Compare revisions

Changes are shown as if the source revision was being merged into the target revision.

Commits on Source (43)
Showing with 329 additions and 199 deletions
......@@ -294,7 +294,7 @@ committed with that.
To do this, first ensure you have ``click_man`` installed, possibly
with::
pip install --user click_man
pip3 install --user click_man
Then, in the toplevel directory of buildstream, run the following::
......@@ -450,7 +450,7 @@ To run the tests, just type::
At the toplevel.
When debugging a test, it can be desirable to see the stdout
and stderr generated by a test, to do this use the --addopts
and stderr generated by a test, to do this use the ``--addopts``
option to feed arguments to pytest as such::
./setup.py test --addopts -s
......@@ -530,7 +530,7 @@ tool.
Python provides `cProfile <https://docs.python.org/3/library/profile.html>`_
which gives you a list of all functions called during execution and how much
time was spent in each function. Here is an example of running `bst --help`
time was spent in each function. Here is an example of running ``bst --help``
under cProfile:
python3 -m cProfile -o bst.cprofile -- $(which bst) --help
......
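To dig into the resulting profile, the standard library's ``pstats`` module can read the file written above. A minimal sketch, assuming the ``bst.cprofile`` output file from the command above:

.. code:: python

    import pstats

    # Load the profile written by the cProfile invocation above
    stats = pstats.Stats('bst.cprofile')

    # Print the ten functions with the largest cumulative time
    stats.sort_stats('cumulative').print_stats(10)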
......@@ -25,7 +25,7 @@ BuildStream offers the following advantages:
* **Declarative build instructions/definitions**
BuildStream provides a a flexible and extensible framework for the modelling
BuildStream provides a flexible and extensible framework for the modelling
of software build pipelines in a declarative YAML format, which allows you to
manipulate filesystem data in a controlled, reproducible sandboxed environment.
......@@ -61,25 +61,29 @@ How does BuildStream work?
==========================
BuildStream operates on a set of YAML files (.bst files), as follows:
* loads the YAML files which describe the target(s) and all dependencies
* evaluates the version information and build instructions to calculate a build
* Loads the YAML files which describe the target(s) and all dependencies.
* Evaluates the version information and build instructions to calculate a build
graph for the target(s) and all dependencies and unique cache-keys for each
element
* retrieves elements from cache if they are already built, or builds them in a
sandboxed environment using the instructions declared in the .bst files
* transforms/configures and/or deploys the resulting target(s) based on the
element.
* Retrieves previously built elements (artifacts) from a local/remote cache, or
builds the elements in a sandboxed environment using the instructions declared
in the .bst files.
* Transforms/configures and/or deploys the resulting target(s) based on the
instructions declared in the .bst files.
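As a simplified illustration of the cache-key idea (not BuildStream's actual implementation), an element's key can be derived from its own definition together with the keys of its dependencies, so a change anywhere in the graph invalidates everything that depends on it:

.. code:: python

    import hashlib
    import json

    # Hypothetical helper: derive a stable key from an element's
    # configuration and the cache keys of its dependencies.
    def cache_key(element_config, dependency_keys):
        payload = json.dumps({'config': element_config,
                              'deps': sorted(dependency_keys)},
                             sort_keys=True)
        return hashlib.sha256(payload.encode('utf-8')).hexdigest()

    base = cache_key({'kind': 'import', 'url': 'upstream:foo'}, [])
    app = cache_key({'kind': 'autotools'}, [base])  # changes if base changes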
How can I get started?
======================
The easiest way to get started is to explore some existing .bst files, for example:
To start using BuildStream, first,
`install <https://buildstream.gitlab.io/buildstream/main_install.html>`_
BuildStream onto your machine and then follow our
`tutorial <https://buildstream.gitlab.io/buildstream/using_tutorial.html>`_.
We also recommend exploring some existing BuildStream projects:
* https://gitlab.gnome.org/GNOME/gnome-build-meta/
* https://gitlab.com/freedesktop-sdk/freedesktop-sdk
* https://gitlab.com/baserock/definitions
* https://gitlab.com/BuildStream/buildstream-examples/tree/master/build-x86image
* https://gitlab.com/BuildStream/buildstream-examples/tree/master/netsurf-flatpak
If you have any questions please ask on our `#buildstream <irc://irc.gnome.org/buildstream>`_ channel on `irc.gnome.org <irc://irc.gnome.org>`_.
......@@ -240,8 +240,8 @@ class CASCache(ArtifactCache):
except grpc.RpcError as e:
if e.code() != grpc.StatusCode.NOT_FOUND:
element.info("{} not found at remote {}".format(element._get_brief_display_key(), remote.spec.url))
raise
raise ArtifactError("Failed to pull artifact {}: {}".format(
element._get_brief_display_key(), e)) from e
return False
......@@ -286,6 +286,7 @@ class CASCache(ArtifactCache):
except grpc.RpcError as e:
if e.code() != grpc.StatusCode.NOT_FOUND:
# Intentionally re-raise RpcError for outer except block.
raise
missing_blobs = {}
......@@ -341,7 +342,7 @@ class CASCache(ArtifactCache):
except grpc.RpcError as e:
if e.code() != grpc.StatusCode.RESOURCE_EXHAUSTED:
raise ArtifactError("Failed to push artifact {}: {}".format(refs, e)) from e
raise ArtifactError("Failed to push artifact {}: {}".format(refs, e), temporary=True) from e
return pushed
......
......@@ -88,6 +88,7 @@ class ErrorDomain(Enum):
ELEMENT = 11
APP = 12
STREAM = 13
VIRTUAL_FS = 14
# BstError is an internal base exception class for BuildStream
......@@ -99,7 +100,7 @@ class ErrorDomain(Enum):
#
class BstError(Exception):
def __init__(self, message, *, detail=None, domain=None, reason=None):
def __init__(self, message, *, detail=None, domain=None, reason=None, temporary=False):
global _last_exception
super().__init__(message)
......@@ -114,6 +115,11 @@ class BstError(Exception):
#
self.sandbox = None
# When this exception occurred during the handling of a job, indicate
# whether or not there is any point retrying the job.
#
self.temporary = temporary
# Error domain and reason
#
self.domain = domain
......@@ -131,8 +137,8 @@ class BstError(Exception):
# or by the base :class:`.Plugin` element itself.
#
class PluginError(BstError):
def __init__(self, message, reason=None):
super().__init__(message, domain=ErrorDomain.PLUGIN, reason=reason)
def __init__(self, message, reason=None, temporary=False):
super().__init__(message, domain=ErrorDomain.PLUGIN, reason=reason, temporary=temporary)
# LoadErrorReason
......@@ -249,8 +255,8 @@ class SandboxError(BstError):
# Raised when errors are encountered in the artifact caches
#
class ArtifactError(BstError):
def __init__(self, message, *, detail=None, reason=None):
super().__init__(message, detail=detail, domain=ErrorDomain.ARTIFACT, reason=reason)
def __init__(self, message, *, detail=None, reason=None, temporary=False):
super().__init__(message, detail=detail, domain=ErrorDomain.ARTIFACT, reason=reason, temporary=temporary)
# PipelineError
......
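With the ``temporary`` keyword threaded through the error classes, a raiser can declare whether retrying might help. A hedged sketch (the function and message are hypothetical; the keyword matches the signatures above):

.. code:: python

    from buildstream import ElementError

    def report_network_failure():
        # temporary=True tells the job scheduler that retrying may
        # succeed; the default of False marks a permanent failure.
        raise ElementError("Connection to the remote timed out",
                           temporary=True)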
......@@ -270,6 +270,10 @@ class App():
# Exit with the error
self._error_exit(e)
except RecursionError:
click.echo("RecursionError: Depency depth is too large. Maximum recursion depth exceeded.",
err=True)
sys.exit(-1)
else:
# No exceptions occurred, print session time and summary
......
......@@ -35,6 +35,12 @@ from ..._exceptions import ImplError, BstError, set_last_task_error
from ..._message import Message, MessageType, unconditional_messages
from ... import _signals, utils
# Return codes used when job handling child processes shut down
#
RC_OK = 0
RC_FAIL = 1
RC_PERM_FAIL = 2
# Used to distinguish between status messages and return values
class Envelope():
......@@ -111,6 +117,10 @@ class Job():
self._max_retries = max_retries # Maximum number of automatic retries
self._result = None # Return value of child action in the parent
self._tries = 0 # Try count, for retryable jobs
# If False, a retry will not be attempted regardless of whether _tries is less than _max_retries.
#
self._retry_flag = True
self._logfile = logfile
self._task_id = None
......@@ -388,8 +398,9 @@ class Job():
result = self.child_process()
except BstError as e:
elapsed = datetime.datetime.now() - starttime
self._retry_flag = e.temporary
if self._tries <= self._max_retries:
if self._retry_flag and (self._tries <= self._max_retries):
self.message(MessageType.FAIL,
"Try #{} failed, retrying".format(self._tries),
elapsed=elapsed)
......@@ -402,7 +413,10 @@ class Job():
# Report the exception to the parent (for internal testing purposes)
self._child_send_error(e)
self._child_shutdown(1)
# Set return code based on whether or not the error was temporary.
#
self._child_shutdown(RC_FAIL if self._retry_flag else RC_PERM_FAIL)
except Exception as e: # pylint: disable=broad-except
......@@ -416,7 +430,7 @@ class Job():
self.message(MessageType.BUG, self.action_name,
elapsed=elapsed, detail=detail,
logfile=filename)
self._child_shutdown(1)
self._child_shutdown(RC_FAIL)
else:
# No exception occurred in the action
......@@ -430,7 +444,7 @@ class Job():
# Shutdown needs to stay outside of the above context manager,
# make sure we don't try to handle SIGTERM while the process
# is already busy in sys.exit()
self._child_shutdown(0)
self._child_shutdown(RC_OK)
# _child_send_error()
#
......@@ -495,7 +509,8 @@ class Job():
message.action_name = self.action_name
message.task_id = self._task_id
if message.message_type == MessageType.FAIL and self._tries <= self._max_retries:
if (message.message_type == MessageType.FAIL and
self._tries <= self._max_retries and self._retry_flag):
# Job will be retried, display failures as warnings in the frontend
message.message_type = MessageType.WARN
......@@ -529,12 +544,17 @@ class Job():
def _parent_child_completed(self, pid, returncode):
self._parent_shutdown()
if returncode != 0 and self._tries <= self._max_retries:
# We don't want to retry if we got OK or a permanent fail.
# This is set in _child_action but must also be set for the parent.
#
self._retry_flag = returncode not in (RC_OK, RC_PERM_FAIL)
if self._retry_flag and (self._tries <= self._max_retries):
self.spawn()
return
self.parent_complete(returncode == 0, self._result)
self._scheduler.job_completed(self, returncode == 0)
self.parent_complete(returncode == RC_OK, self._result)
self._scheduler.job_completed(self, returncode == RC_OK)
# _parent_process_envelope()
#
......
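The parent now decides whether to respawn the child purely from the return code: ``RC_FAIL`` is a temporary failure worth retrying, ``RC_PERM_FAIL`` is not. A condensed sketch of the decision, mirroring the logic above with the class machinery omitted:

.. code:: python

    RC_OK = 0
    RC_FAIL = 1
    RC_PERM_FAIL = 2

    def should_retry(returncode, tries, max_retries):
        # Only RC_FAIL is retryable; RC_OK succeeded and
        # RC_PERM_FAIL marks a permanent error.
        retry_flag = returncode not in (RC_OK, RC_PERM_FAIL)
        return retry_flag and tries <= max_retries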
......@@ -407,15 +407,16 @@ class Stream():
integrate=integrate) as sandbox:
# Copy or move the sandbox to the target directory
sandbox_root = sandbox.get_directory()
sandbox_vroot = sandbox.get_virtual_directory()
if not tar:
with target.timed_activity("Checking out files in '{}'"
.format(location)):
try:
if hardlinks:
self._checkout_hardlinks(sandbox_root, location)
self._checkout_hardlinks(sandbox_vroot, location)
else:
utils.copy_files(sandbox_root, location)
sandbox_vroot.export_files(location)
except OSError as e:
raise StreamError("Failed to checkout files: '{}'"
.format(e)) from e
......@@ -424,14 +425,12 @@ class Stream():
with target.timed_activity("Creating tarball"):
with os.fdopen(sys.stdout.fileno(), 'wb') as fo:
with tarfile.open(fileobj=fo, mode="w|") as tf:
Stream._add_directory_to_tarfile(
tf, sandbox_root, '.')
sandbox_vroot.export_to_tar(tf, '.')
else:
with target.timed_activity("Creating tarball '{}'"
.format(location)):
with tarfile.open(location, "w:") as tf:
Stream._add_directory_to_tarfile(
tf, sandbox_root, '.')
sandbox_vroot.export_to_tar(tf, '.')
except BstError as e:
raise StreamError("Error while staging dependencies into a sandbox"
......@@ -476,7 +475,7 @@ class Stream():
# Check for workspace config
workspace = workspaces.get_workspace(target._get_full_name())
if workspace:
if workspace and not force:
raise StreamError("Workspace '{}' is already defined at: {}"
.format(target.name, workspace.path))
......@@ -495,6 +494,10 @@ class Stream():
"fetch the latest version of the " +
"source.")
if workspace:
workspaces.delete_workspace(target._get_full_name())
workspaces.save_config()
shutil.rmtree(directory)
try:
os.makedirs(directory, exist_ok=True)
except OSError as e:
......@@ -1046,46 +1049,13 @@ class Stream():
# Helper function for checkout()
#
def _checkout_hardlinks(self, sandbox_root, directory):
def _checkout_hardlinks(self, sandbox_vroot, directory):
try:
removed = utils.safe_remove(directory)
utils.safe_remove(directory)
except OSError as e:
raise StreamError("Failed to remove checkout directory: {}".format(e)) from e
if removed:
# Try a simple rename of the sandbox root; if that
# doesnt cut it, then do the regular link files code path
try:
os.rename(sandbox_root, directory)
except OSError:
os.makedirs(directory, exist_ok=True)
utils.link_files(sandbox_root, directory)
else:
utils.link_files(sandbox_root, directory)
# Add a directory entry deterministically to a tar file
#
# This function takes extra steps to ensure the output is deterministic.
# First, it sorts the results of os.listdir() to ensure the ordering of
# the files in the archive is the same. Second, it sets a fixed
# timestamp for each entry. See also https://bugs.python.org/issue24465.
@staticmethod
def _add_directory_to_tarfile(tf, dir_name, dir_arcname, mtime=0):
for filename in sorted(os.listdir(dir_name)):
name = os.path.join(dir_name, filename)
arcname = os.path.join(dir_arcname, filename)
tarinfo = tf.gettarinfo(name, arcname)
tarinfo.mtime = mtime
if tarinfo.isreg():
with open(name, "rb") as f:
tf.addfile(tarinfo, f)
elif tarinfo.isdir():
tf.addfile(tarinfo)
Stream._add_directory_to_tarfile(tf, name, arcname, mtime)
else:
tf.addfile(tarinfo)
sandbox_vroot.export_files(directory, can_link=True, can_destroy=True)
# Write the element build script to the given directory
def _write_element_script(self, directory, element):
......
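The pattern throughout this change: host-path manipulation under ``sandbox.get_directory()`` gives way to the virtual ``Directory`` API. A schematic sketch using the calls shown above (the paths are placeholders, and the ``FileBasedDirectory`` import path is internal and may be unstable):

.. code:: python

    from buildstream.storage._filebaseddirectory import FileBasedDirectory

    # A FileBasedDirectory wraps a host path in the Directory API
    vroot = FileBasedDirectory('/tmp/sandbox-root')

    # os.path.join() + os.makedirs() become descend(..., create=True)
    subdir = vroot.descend(['buildstream', 'install'], create=True)

    # utils.copy_files()/link_files() into the tree become import_files()
    subdir.import_files('/some/host/dir', can_link=True)

    # and linking out of the sandbox becomes export_files()
    subdir.export_files('/some/checkout', can_link=True)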
......@@ -23,7 +23,7 @@
# This version is bumped whenever enhancements are made
# to the `project.conf` format or the core element format.
#
BST_FORMAT_VERSION = 9
BST_FORMAT_VERSION = 10
# The base BuildStream artifact version
......
......@@ -80,7 +80,6 @@ from collections import Mapping, OrderedDict
from contextlib import contextmanager
from enum import Enum
import tempfile
import time
import shutil
from . import _yaml
......@@ -97,6 +96,9 @@ from . import _site
from ._platform import Platform
from .sandbox._config import SandboxConfig
from .storage.directory import Directory
from .storage._filebaseddirectory import FileBasedDirectory, VirtualDirectoryError
# _KeyStrength():
#
......@@ -140,9 +142,10 @@ class ElementError(BstError):
message (str): The error message to report to the user
detail (str): A possibly multiline, more detailed error message
reason (str): An optional machine readable reason string, used for test cases
temporary (bool): An indicator of whether the error may go away if the operation is run again. (*Since: 1.2*)
"""
def __init__(self, message, *, detail=None, reason=None):
super().__init__(message, detail=detail, domain=ErrorDomain.ELEMENT, reason=reason)
def __init__(self, message, *, detail=None, reason=None, temporary=False):
super().__init__(message, detail=detail, domain=ErrorDomain.ELEMENT, reason=reason, temporary=temporary)
class Element(Plugin):
......@@ -191,6 +194,13 @@ class Element(Plugin):
*Since: 1.2*
"""
BST_VIRTUAL_DIRECTORY = False
"""Whether to raise exceptions if an element uses Sandbox.get_directory
instead of Sandbox.get_virtual_directory.
*Since: 1.4*
"""
def __init__(self, context, project, artifacts, meta, plugin_conf):
self.__cache_key_dict = None # Dict for cache key calculation
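A plugin ported to the virtual directory API opts in with this flag; the stack, compose and import plugins later in this change follow the same pattern. A hypothetical minimal element:

.. code:: python

    from buildstream import Element

    class MyElement(Element):
        # Opt in: Sandbox.get_directory() now raises, forcing all
        # staging through the virtual directory API.
        BST_VIRTUAL_DIRECTORY = True

        def assemble(self, sandbox):
            vroot = sandbox.get_virtual_directory()
            vroot.descend(['output'], create=True)
            return '/output'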
......@@ -620,10 +630,10 @@ class Element(Plugin):
# Hard link it into the staging area
#
basedir = sandbox.get_directory()
stagedir = basedir \
vbasedir = sandbox.get_virtual_directory()
vstagedir = vbasedir \
if path is None \
else os.path.join(basedir, path.lstrip(os.sep))
else vbasedir.descend(path.lstrip(os.sep).split(os.sep))
files = list(self.__compute_splits(include, exclude, orphans))
......@@ -635,15 +645,8 @@ class Element(Plugin):
link_files = files
copy_files = []
link_result = utils.link_files(artifact, stagedir, files=link_files,
report_written=True)
copy_result = utils.copy_files(artifact, stagedir, files=copy_files,
report_written=True)
cur_time = time.time()
for f in copy_result.files_written:
os.utime(os.path.join(stagedir, f), times=(cur_time, cur_time))
link_result = vstagedir.import_files(artifact, files=link_files, report_written=True, can_link=True)
copy_result = vstagedir.import_files(artifact, files=copy_files, report_written=True, update_utimes=True)
return link_result.combine(copy_result)
......@@ -1300,40 +1303,45 @@ class Element(Plugin):
sandbox._set_mount_source(directory, workspace.get_absolute_path())
# Stage all sources that need to be copied
sandbox_root = sandbox.get_directory()
host_directory = os.path.join(sandbox_root, directory.lstrip(os.sep))
self._stage_sources_at(host_directory, mount_workspaces=mount_workspaces)
sandbox_vroot = sandbox.get_virtual_directory()
host_vdirectory = sandbox_vroot.descend(directory.lstrip(os.sep).split(os.sep), create=True)
self._stage_sources_at(host_vdirectory, mount_workspaces=mount_workspaces)
# _stage_sources_at():
#
# Stage this element's sources to a directory
#
# Args:
# directory (str): An absolute path to stage the sources at
# vdirectory (:class:`.storage.Directory`): A virtual directory object to stage sources into.
# mount_workspaces (bool): mount workspaces if True, copy otherwise
#
def _stage_sources_at(self, directory, mount_workspaces=True):
def _stage_sources_at(self, vdirectory, mount_workspaces=True):
with self.timed_activity("Staging sources", silent_nested=True):
if os.path.isdir(directory) and os.listdir(directory):
raise ElementError("Staging directory '{}' is not empty".format(directory))
workspace = self._get_workspace()
if workspace:
# If mount_workspaces is set and we're doing incremental builds,
# the workspace is already mounted into the sandbox.
if not (mount_workspaces and self.__can_build_incrementally()):
with self.timed_activity("Staging local files at {}".format(workspace.path)):
workspace.stage(directory)
else:
# No workspace, stage directly
for source in self.sources():
source._stage(directory)
if not isinstance(vdirectory, Directory):
vdirectory = FileBasedDirectory(vdirectory)
if not vdirectory.is_empty():
raise ElementError("Staging directory '{}' is not empty".format(vdirectory))
with tempfile.TemporaryDirectory() as temp_staging_directory:
workspace = self._get_workspace()
if workspace:
# If mount_workspaces is set and we're doing incremental builds,
# the workspace is already mounted into the sandbox.
if not (mount_workspaces and self.__can_build_incrementally()):
with self.timed_activity("Staging local files at {}".format(workspace.path)):
workspace.stage(temp_staging_directory)
else:
# No workspace, stage directly
for source in self.sources():
source._stage(temp_staging_directory)
vdirectory.import_files(temp_staging_directory)
# Ensure deterministic mtime of sources at build time
utils._set_deterministic_mtime(directory)
vdirectory.set_deterministic_mtime()
# Ensure deterministic owners of sources at build time
utils._set_deterministic_user(directory)
vdirectory.set_deterministic_user()
# _set_required():
#
......@@ -1449,7 +1457,7 @@ class Element(Plugin):
with _signals.terminator(cleanup_rootdir), \
self.__sandbox(rootdir, output_file, output_file, self.__sandbox_config) as sandbox: # nopep8
sandbox_root = sandbox.get_directory()
sandbox_vroot = sandbox.get_virtual_directory()
# By default, the dynamic public data is the same as the static public data.
# The plugin's assemble() method may modify this, though.
......@@ -1479,23 +1487,24 @@ class Element(Plugin):
#
workspace = self._get_workspace()
if workspace and self.__staged_sources_directory:
sandbox_root = sandbox.get_directory()
sandbox_path = os.path.join(sandbox_root,
self.__staged_sources_directory.lstrip(os.sep))
sandbox_vroot = sandbox.get_virtual_directory()
path_components = self.__staged_sources_directory.lstrip(os.sep).split(os.sep)
sandbox_vpath = sandbox_vroot.descend(path_components)
try:
utils.copy_files(workspace.path, sandbox_path)
sandbox_vpath.import_files(workspace.path)
except UtilError as e:
self.warn("Failed to preserve workspace state for failed build sysroot: {}"
.format(e))
raise
collectdir = os.path.join(sandbox_root, collect.lstrip(os.sep))
if not os.path.exists(collectdir):
try:
collectvdir = sandbox_vroot.descend(collect.lstrip(os.sep).split(os.sep))
except VirtualDirectoryError:
raise ElementError(
"Directory '{}' was not found inside the sandbox, "
"Subdirectory '{}' of '{}' does not exist following assembly, "
"unable to collect artifact contents"
.format(collect))
.format(collect, sandbox_vroot))
# At this point, we expect an exception was raised leading to
# an error message, or we have good output to collect.
......@@ -1513,12 +1522,21 @@ class Element(Plugin):
os.mkdir(buildtreedir)
# Hard link files from collect dir to files directory
utils.link_files(collectdir, filesdir)
collectvdir.export_files(filesdir, can_link=True)
sandbox_build_dir = os.path.join(sandbox_root, self.get_variable('build-root').lstrip(os.sep))
# Hard link files from build-root dir to buildtreedir directory
if os.path.isdir(sandbox_build_dir):
utils.link_files(sandbox_build_dir, buildtreedir)
build_root_is_valid = False
try:
# Attempt to hard link files from build-root dir to buildtreedir directory
build_root = self.get_variable('build-root').lstrip(os.sep)
sandbox_build_dir = sandbox_vroot.descend(build_root.split(os.sep))
build_root_is_valid = True
except VirtualDirectoryError:
# This replaces code which previously did nothing if the
# build-root directory did not exist, so we do the same.
pass
if build_root_is_valid:
sandbox_build_dir.export_files(buildtreedir, can_link=True, can_destroy=True)
# Copy build log
log_filename = context.get_log_filename()
......@@ -2043,7 +2061,8 @@ class Element(Plugin):
directory,
stdout=stdout,
stderr=stderr,
config=config)
config=config,
allow_real_directory=not self.BST_VIRTUAL_DIRECTORY)
yield sandbox
else:
......
......@@ -478,13 +478,15 @@ class Plugin():
silent_nested=silent_nested):
yield
def call(self, *popenargs, fail=None, **kwargs):
def call(self, *popenargs, fail=None, fail_temporarily=False, **kwargs):
"""A wrapper for subprocess.call()
Args:
popenargs (list): Popen() arguments
fail (str): A message to display if the process returns
a non zero exit code
fail_temporarily (bool): Whether any exceptions should
be raised as temporary. (*Since: 1.2*)
rest_of_args (kwargs): Remaining arguments to subprocess.call()
Returns:
......@@ -507,16 +509,18 @@ class Plugin():
"Failed to download ponies from {}".format(
self.mirror_directory))
"""
exit_code, _ = self.__call(*popenargs, fail=fail, **kwargs)
exit_code, _ = self.__call(*popenargs, fail=fail, fail_temporarily=fail_temporarily, **kwargs)
return exit_code
def check_output(self, *popenargs, fail=None, **kwargs):
def check_output(self, *popenargs, fail=None, fail_temporarily=False, **kwargs):
"""A wrapper for subprocess.check_output()
Args:
popenargs (list): Popen() arguments
fail (str): A message to display if the process returns
a non zero exit code
fail_temporarily (bool): Whether any exceptions should
be raised as temporary. (*Since: 1.2*)
rest_of_args (kwargs): Remaining arguments to subprocess.call()
Returns:
......@@ -555,7 +559,7 @@ class Plugin():
raise SourceError(
fmt.format(plugin=self, track=tracking)) from e
"""
return self.__call(*popenargs, collect_stdout=True, fail=fail, **kwargs)
return self.__call(*popenargs, collect_stdout=True, fail=fail, fail_temporarily=fail_temporarily, **kwargs)
#############################################################
# Private Methods used in BuildStream #
......@@ -619,7 +623,7 @@ class Plugin():
# Internal subprocess implementation for the call() and check_output() APIs
#
def __call(self, *popenargs, collect_stdout=False, fail=None, **kwargs):
def __call(self, *popenargs, collect_stdout=False, fail=None, fail_temporarily=False, **kwargs):
with self._output_file() as output_file:
if 'stdout' not in kwargs:
......@@ -634,7 +638,8 @@ class Plugin():
exit_code, output = utils._call(*popenargs, **kwargs)
if fail and exit_code:
raise PluginError("{plugin}: {message}".format(plugin=self, message=fail))
raise PluginError("{plugin}: {message}".format(plugin=self, message=fail),
temporary=fail_temporarily)
return (exit_code, output)
......
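Call sites can now mark shell-out failures as retryable; the git source below uses this for network operations. A hedged sketch of the pattern from inside a plugin method (the source class is hypothetical):

.. code:: python

    from buildstream import Source

    class ExampleSource(Source):
        # Hypothetical fetch method; a non-zero exit now raises
        # PluginError with temporary=True, so the job may be retried.
        def fetch(self):
            self.call(['git', 'fetch', 'origin', '--prune'],
                      fail="Failed to fetch from remote git repository",
                      fail_temporarily=True)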
......@@ -34,7 +34,6 @@ The default configuration and possible options are as such:
"""
import os
from buildstream import utils
from buildstream import Element, Scope
......@@ -56,6 +55,9 @@ class ComposeElement(Element):
# added, to reduce the potential for confusion
BST_FORBID_SOURCES = True
# This plugin has been modified to avoid the use of Sandbox.get_directory
BST_VIRTUAL_DIRECTORY = True
def configure(self, node):
self.node_validate(node, [
'integrate', 'include', 'exclude', 'include-orphans'
......@@ -104,7 +106,8 @@ class ComposeElement(Element):
orphans=self.include_orphans)
manifest.update(files)
basedir = sandbox.get_directory()
# Make a snapshot of all the files.
vbasedir = sandbox.get_virtual_directory()
modified_files = set()
removed_files = set()
added_files = set()
......@@ -116,38 +119,24 @@ class ComposeElement(Element):
if require_split:
# Make a snapshot of all the files before integration-commands are run.
snapshot = {
f: getmtime(os.path.join(basedir, f))
for f in utils.list_relative_paths(basedir)
}
snapshot = set(vbasedir.list_relative_paths())
vbasedir.mark_unmodified()
for dep in self.dependencies(Scope.BUILD):
dep.integrate(sandbox)
if require_split:
# Calculate added, modified and removed files
basedir_contents = set(utils.list_relative_paths(basedir))
post_integration_snapshot = vbasedir.list_relative_paths()
modified_files = set(vbasedir.list_modified_paths())
basedir_contents = set(post_integration_snapshot)
for path in manifest:
if path in basedir_contents:
if path in snapshot:
preintegration_mtime = snapshot[path]
if preintegration_mtime != getmtime(os.path.join(basedir, path)):
modified_files.add(path)
else:
# If the path appears in the manifest but not the initial snapshot,
# it may be a file staged inside a directory symlink. In this case
# the path we got from the manifest won't show up in the snapshot
# because utils.list_relative_paths() doesn't recurse into symlink
# directories.
pass
elif path in snapshot:
if path in snapshot and path not in basedir_contents:
removed_files.add(path)
for path in basedir_contents:
if path not in snapshot:
added_files.add(path)
self.info("Integration modified {}, added {} and removed {} files"
.format(len(modified_files), len(added_files), len(removed_files)))
......@@ -166,8 +155,7 @@ class ComposeElement(Element):
# instead of into a subdir. The element assemble() method should
# support this in some way.
#
installdir = os.path.join(basedir, 'buildstream', 'install')
os.makedirs(installdir, exist_ok=True)
installdir = vbasedir.descend(['buildstream', 'install'], create=True)
# We already saved the manifest for created files in the integration phase,
# now collect the rest of the manifest.
......@@ -191,19 +179,12 @@ class ComposeElement(Element):
with self.timed_activity("Creating composition", detail=detail, silent_nested=True):
self.info("Composing {} files".format(len(manifest)))
utils.link_files(basedir, installdir, files=manifest)
installdir.import_files(vbasedir, files=manifest, can_link=True)
# And we're done
return os.path.join(os.sep, 'buildstream', 'install')
# Like os.path.getmtime(), but doesnt explode on symlinks
#
def getmtime(path):
stat = os.lstat(path)
return stat.st_mtime
# Plugin entry point
def setup():
return ComposeElement
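The mtime-snapshot bookkeeping is replaced by the virtual directory's own change tracking: mark the tree unmodified before integration, then ask it what changed. Schematically, with ``vbasedir`` being the sandbox's virtual root as above:

.. code:: python

    snapshot = set(vbasedir.list_relative_paths())
    vbasedir.mark_unmodified()

    # ... integration commands run here ...

    contents = set(vbasedir.list_relative_paths())
    modified_files = set(vbasedir.list_modified_paths())
    added_files = contents - snapshot
    removed_files = snapshot - contents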
......@@ -31,7 +31,6 @@ The empty configuration is as such:
"""
import os
import shutil
from buildstream import Element, BuildElement, ElementError
......@@ -39,6 +38,9 @@ from buildstream import Element, BuildElement, ElementError
class ImportElement(BuildElement):
# pylint: disable=attribute-defined-outside-init
# This plugin has been modified to avoid the use of Sandbox.get_directory
BST_VIRTUAL_DIRECTORY = True
def configure(self, node):
self.source = self.node_subst_member(node, 'source')
self.target = self.node_subst_member(node, 'target')
......@@ -68,27 +70,22 @@ class ImportElement(BuildElement):
# Do not mount workspaces as the files are copied from outside the sandbox
self._stage_sources_in_sandbox(sandbox, 'input', mount_workspaces=False)
rootdir = sandbox.get_directory()
inputdir = os.path.join(rootdir, 'input')
outputdir = os.path.join(rootdir, 'output')
rootdir = sandbox.get_virtual_directory()
inputdir = rootdir.descend(['input'])
outputdir = rootdir.descend(['output'], create=True)
# The directory to grab
inputdir = os.path.join(inputdir, self.source.lstrip(os.sep))
inputdir = inputdir.rstrip(os.sep)
inputdir = inputdir.descend(self.source.strip(os.sep).split(os.sep))
# The output target directory
outputdir = os.path.join(outputdir, self.target.lstrip(os.sep))
outputdir = outputdir.rstrip(os.sep)
# Ensure target directory parent
os.makedirs(os.path.dirname(outputdir), exist_ok=True)
outputdir = outputdir.descend(self.target.strip(os.sep).split(os.sep), create=True)
if not os.path.exists(inputdir):
if inputdir.is_empty():
raise ElementError("{}: No files were found inside directory '{}'"
.format(self, self.source))
# Move it over
shutil.move(inputdir, outputdir)
outputdir.import_files(inputdir)
# And we're done
return '/output'
......
......@@ -24,13 +24,15 @@ Stack elements are simply a symbolic element used for representing
a logical group of elements.
"""
import os
from buildstream import Element
# Element implementation for the 'stack' kind.
class StackElement(Element):
# This plugin has been modified to avoid the use of Sandbox.get_directory
BST_VIRTUAL_DIRECTORY = True
def configure(self, node):
pass
......@@ -52,7 +54,7 @@ class StackElement(Element):
# Just create a dummy empty artifact; its existence is a statement
# that all this stack's dependencies are built.
rootdir = sandbox.get_directory()
vrootdir = sandbox.get_virtual_directory()
# XXX FIXME: This is currently needed because the artifact
# cache won't let us commit an empty artifact.
......@@ -61,10 +63,7 @@ class StackElement(Element):
# the actual artifact data in a subdirectory, then we
# will be able to store some additional state in the
# artifact cache, and we can also remove this hack.
outputdir = os.path.join(rootdir, 'output', 'bst')
# Ensure target directory parent
os.makedirs(os.path.dirname(outputdir), exist_ok=True)
vrootdir.descend(['output', 'bst'], create=True)
# And we're done
return '/output'
......
......@@ -150,11 +150,11 @@ class DownloadableFileSource(Source):
# we would have downloaded.
return self.ref
raise SourceError("{}: Error mirroring {}: {}"
.format(self, self.url, e)) from e
.format(self, self.url, e), temporary=True) from e
except (urllib.error.URLError, urllib.error.ContentTooShortError, OSError) as e:
raise SourceError("{}: Error mirroring {}: {}"
.format(self, self.url, e)) from e
.format(self, self.url, e), temporary=True) from e
def _get_mirror_dir(self):
return os.path.join(self.get_mirror_directory(),
......
......@@ -113,7 +113,8 @@ class GitMirror():
#
with self.source.tempdir() as tmpdir:
self.source.call([self.source.host_git, 'clone', '--mirror', '-n', self.url, tmpdir],
fail="Failed to clone git repository {}".format(self.url))
fail="Failed to clone git repository {}".format(self.url),
fail_temporarily=True)
try:
shutil.move(tmpdir, self.mirror)
......@@ -124,6 +125,7 @@ class GitMirror():
def fetch(self):
self.source.call([self.source.host_git, 'fetch', 'origin', '--prune'],
fail="Failed to fetch from remote git repository: {}".format(self.url),
fail_temporarily=True,
cwd=self.mirror)
def has_ref(self):
......@@ -157,7 +159,8 @@ class GitMirror():
# case we're just checking out a specific commit and then removing the .git/
# directory.
self.source.call([self.source.host_git, 'clone', '--no-checkout', '--shared', self.mirror, fullpath],
fail="Failed to create git mirror {} in directory: {}".format(self.mirror, fullpath))
fail="Failed to create git mirror {} in directory: {}".format(self.mirror, fullpath),
fail_temporarily=True)
self.source.call([self.source.host_git, 'checkout', '--force', self.ref],
fail="Failed to checkout git ref {}".format(self.ref),
......@@ -170,7 +173,8 @@ class GitMirror():
fullpath = os.path.join(directory, self.path)
self.source.call([self.source.host_git, 'clone', '--no-checkout', self.mirror, fullpath],
fail="Failed to clone git mirror {} in directory: {}".format(self.mirror, fullpath))
fail="Failed to clone git mirror {} in directory: {}".format(self.mirror, fullpath),
fail_temporarily=True)
self.source.call([self.source.host_git, 'remote', 'set-url', 'origin', self.url],
fail='Failed to add remote origin "{}"'.format(self.url),
......
#
# Copyright Bloomberg Finance LP
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU Lesser General Public
# License as published by the Free Software Foundation; either
# version 2 of the License, or (at your option) any later version.
#
# This library is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public
# License along with this library. If not, see <http://www.gnu.org/licenses/>.
#
# Authors:
# Ed Baunton <ebaunton1@bloomberg.net>
"""
remote - stage files from remote urls
=====================================
**Usage:**
.. code:: yaml
# Specify the remote source kind
kind: remote
# Optionally specify a relative staging directory
# directory: path/to/stage
# Optionally specify a relative staging filename.
# If not specified, the basename of the url will be used.
# filename: customfilename
# Specify the url. Using an alias defined in your project
# configuration is encouraged. 'bst track' will update the
# sha256sum in 'ref' to the downloaded file's sha256sum.
url: upstream:foo
# Specify the ref. It's a sha256sum of the file you download.
ref: 6c9f6f68a131ec6381da82f2bff978083ed7f4f7991d931bfa767b7965ebc94b
.. note::
The ``remote`` plugin is available since :ref:`format version 10 <project_format_version>`
"""
import os
from buildstream import SourceError, utils
from ._downloadablefilesource import DownloadableFileSource
class RemoteSource(DownloadableFileSource):
# pylint: disable=attribute-defined-outside-init
def configure(self, node):
super().configure(node)
self.filename = self.node_get_member(node, str, 'filename', os.path.basename(self.url))
if os.sep in self.filename:
raise SourceError('{}: filename parameter cannot contain directories'.format(self),
reason="filename-contains-directory")
self.node_validate(node, DownloadableFileSource.COMMON_CONFIG_KEYS + ['filename'])
def get_unique_key(self):
return super().get_unique_key() + [self.filename]
def stage(self, directory):
# As in the local plugin, don't use hardlinks to stage sources;
# they are not write protected in the sandbox.
dest = os.path.join(directory, self.filename)
with self.timed_activity("Staging remote file to {}".format(dest)):
utils.safe_copy(self._get_mirror_file(), dest)
def setup():
return RemoteSource
......@@ -32,7 +32,8 @@ from .._fuse import SafeHardlinks
class Mount():
def __init__(self, sandbox, mount_point, safe_hardlinks):
scratch_directory = sandbox._get_scratch_directory()
root_directory = sandbox.get_directory()
# Getting external_directory here is acceptable as we're part of the sandbox code.
root_directory = sandbox.get_virtual_directory().external_directory
self.mount_point = mount_point
self.safe_hardlinks = safe_hardlinks
......
......@@ -56,7 +56,9 @@ class SandboxBwrap(Sandbox):
def run(self, command, flags, *, cwd=None, env=None):
stdout, stderr = self._get_output()
root_directory = self.get_directory()
# Accessing the underlying storage directly is acceptable here as we're part of the sandbox code
root_directory = self.get_virtual_directory().external_directory
# Fallback to the sandbox default settings for
# the cwd and env.
......
......@@ -90,7 +90,7 @@ class SandboxChroot(Sandbox):
# Nonetheless a better solution could perhaps be found.
rootfs = stack.enter_context(utils._tempdir(dir='/var/run/buildstream'))
stack.enter_context(self.create_devices(self.get_directory(), flags))
stack.enter_context(self.create_devices(self._root, flags))
stack.enter_context(self.mount_dirs(rootfs, flags, stdout, stderr))
if flags & SandboxFlags.INTERACTIVE:
......
......@@ -29,7 +29,8 @@ See also: :ref:`sandboxing`.
"""
import os
from .._exceptions import ImplError
from .._exceptions import ImplError, BstError
from ..storage._filebaseddirectory import FileBasedDirectory
class SandboxFlags():
......@@ -90,28 +91,63 @@ class Sandbox():
self.__cwd = None
self.__env = None
self.__mount_sources = {}
self.__allow_real_directory = kwargs['allow_real_directory']
# Configuration from kwargs common to all subclasses
self.__config = kwargs['config']
self.__stdout = kwargs['stdout']
self.__stderr = kwargs['stderr']
# Setup the directories
# Setup the directories. Root should be available to subclasses, hence
# being single-underscore. The others are private to this class.
self._root = os.path.join(directory, 'root')
self.__directory = directory
self.__root = os.path.join(self.__directory, 'root')
self.__scratch = os.path.join(self.__directory, 'scratch')
for directory_ in [self.__root, self.__scratch]:
for directory_ in [self._root, self.__scratch]:
os.makedirs(directory_, exist_ok=True)
def get_directory(self):
"""Fetches the sandbox root directory
The root directory is where artifacts for the base
runtime environment should be staged.
runtime environment should be staged. Only works if
BST_VIRTUAL_DIRECTORY is not set.
Returns:
(str): The sandbox root directory
"""
if self.__allow_real_directory:
return self._root
else:
raise BstError("You can't use get_directory")
def get_virtual_directory(self):
"""Fetches the sandbox root directory
The root directory is where artifacts for the base
runtime environment should be staged. Only works if
BST_VIRTUAL_DIRECTORY is not set.
Returns:
(str): The sandbox root directory
"""
# For now, just create a new Directory every time we're asked
return FileBasedDirectory(self._root)
def get_virtual_toplevel_directory(self):
"""Fetches the sandbox's toplevel directory
The toplevel directory contains 'root', 'scratch' and later
'artifact' where output is copied to.
Returns:
(Directory): The sandbox toplevel directory
"""
return self.__root
# For now, just create a new Directory every time we're asked
return FileBasedDirectory(self.__directory)
def set_environment(self, environment):
"""Sets the environment variables for the sandbox
......@@ -293,11 +329,11 @@ class Sandbox():
def _has_command(self, command, env=None):
if os.path.isabs(command):
return os.path.exists(os.path.join(
self.get_directory(), command.lstrip(os.sep)))
self._root, command.lstrip(os.sep)))
for path in env.get('PATH').split(':'):
if os.path.exists(os.path.join(
self.get_directory(), path.lstrip(os.sep), command)):
self._root, path.lstrip(os.sep), command)):
return True
return False