Compare revisions

Changes are shown as if the source revision was being merged into the target revision.

Commits on Source (85)
Showing with 682 additions and 100 deletions
......@@ -15,6 +15,7 @@ tmp
.coverage
.coverage.*
.cache
.pytest_cache/
*.bst/
# Pycache, in case buildstream is run directly from within the source
......
image: buildstream/testsuite-debian:8-master-88-4d92c106
image: buildstream/testsuite-debian:9-master-102-9067e269
cache:
key: "$CI_JOB_NAME-"
......@@ -88,22 +88,19 @@ source_dist:
paths:
- coverage-linux/
tests-debian-8:
<<: *linux-tests
tests-debian-9:
image: buildstream/buildstream-debian:master-88-4d92c106
image: buildstream/testsuite-debian:9-master-102-9067e269
<<: *linux-tests
tests-fedora-27:
image: buildstream/buildstream-fedora:master-88-4d92c106
image: buildstream/testsuite-fedora:27-master-102-9067e269
<<: *linux-tests
tests-unix:
# Use fedora here, to a) run a test on fedora and b) ensure that we
# can get rid of ostree - this is not possible with debian-8
image: buildstream/buildstream-fedora:master-88-4d92c106
image: buildstream/testsuite-fedora:27-master-102-9067e269
stage: test
variables:
BST_FORCE_BACKEND: "unix"
......@@ -223,7 +220,6 @@ coverage:
- coverage combine --rcfile=../.coveragerc -a coverage.*
- coverage report --rcfile=../.coveragerc -m
dependencies:
- tests-debian-8
- tests-debian-9
- tests-fedora-27
- tests-unix
......
......@@ -23,26 +23,31 @@ a reasonable timeframe for identifying these.
Patch submissions
-----------------
Branches must be submitted as merge requests in gitlab and should usually
be associated to an issue report on gitlab.
Branches must be submitted as merge requests in gitlab. If the branch
fixes an issue or is related to any issues, these issues must be mentioned
in the merge request or preferably the commit messages themselves.
Commits in the branch which address specific issues must specify the
issue number in the commit message.
Branch names for merge requests should be prefixed with the submitter's
name or nickname, e.g. ``username/implement-flying-ponies``.
Merge requests that are not yet ready for review must be prefixed with the
``WIP:`` identifier. A merge request is not ready for review until the
submitter expects that the patch is ready to actually land.
You may open merge requests for the branches you create before you
are ready to have them reviewed upstream; as long as your merge request
is not yet ready for review, it must be prefixed with the ``WIP:``
identifier.
Submitted branches must not contain a history of the work done in the
feature branch. Please use git's interactive rebase feature in order to
compose a clean patch series suitable for submission.
We prefer that test case and documentation changes be submitted
in separate commits from the code changes which they test.
We prefer that documentation changes be submitted in separate commits from
the code changes which they document, and new test cases are also preferred
in separate commits.
Ideally every commit in the history of master passes its test cases. This
makes bisections more easy to perform, but is not always practical with
more complex branches.
If a commit in your branch modifies behavior such that a test must also
be changed to match the new behavior, then the tests should be updated
with the same commit. Ideally every commit in the history of master passes
its test cases; this makes bisections easier to perform, but is not
always practical with more complex branches.
Commit messages
......@@ -54,9 +59,6 @@ the change.
The summary line must start with what changed, followed by a colon and
a very brief description of the change.
If there is an associated issue, it **must** be mentioned somewhere
in the commit message.
**Example**::
element.py: Added the frobnicator so that foos are properly frobbed.
......@@ -65,8 +67,6 @@ in the commit message.
the element. Elements that are not properly frobnicated raise
an error to inform the user of invalid frobnication rules.
This fixes issue #123
Coding style
------------
......@@ -294,7 +294,7 @@ committed with that.
To do this, first ensure you have ``click_man`` installed, possibly
with::
pip install --user click_man
pip3 install --user click_man
Then, in the toplevel directory of buildstream, run the following::
......@@ -450,7 +450,7 @@ To run the tests, just type::
At the toplevel.
When debugging a test, it can be desirable to see the stdout
and stderr generated by a test, to do this use the --addopts
and stderr generated by a test, to do this use the ``--addopts``
function to feed arguments to pytest as such::
./setup.py test --addopts -s
......@@ -530,7 +530,7 @@ tool.
Python provides `cProfile <https://docs.python.org/3/library/profile.html>`_
which gives you a list of all functions called during execution and how much
time was spent in each function. Here is an example of running `bst --help`
time was spent in each function. Here is an example of running ``bst --help``
under cProfile:
python3 -m cProfile -o bst.cprofile -- $(which bst) --help
......
=================
buildstream 1.3.1
=================
o Add a `--tar` option to `bst checkout` which allows a tarball to be
created from the artifact contents.
o Fetching and tracking will consult mirrors defined in project config,
and the preferred mirror to fetch from can be defined in the command
line or user config.
=================
buildstream 1.1.4
=================
......@@ -13,6 +24,12 @@ buildstream 1.1.4
Artifact servers need to be migrated.
o BuildStream now requires python version >= 3.5
o BuildStream will now automatically clean up old artifacts when it
runs out of space. The exact behavior is configurable in the user's
buildstream.conf.
=================
buildstream 1.1.3
......
......@@ -25,7 +25,7 @@ BuildStream offers the following advantages:
* **Declarative build instructions/definitions**
BuildStream provides a a flexible and extensible framework for the modelling
BuildStream provides a flexible and extensible framework for the modelling
of software build pipelines in a declarative YAML format, which allows you to
manipulate filesystem data in a controlled, reproducible sandboxed environment.
......@@ -61,25 +61,29 @@ How does BuildStream work?
==========================
BuildStream operates on a set of YAML files (.bst files), as follows:
* loads the YAML files which describe the target(s) and all dependencies
* evaluates the version information and build instructions to calculate a build
* Loads the YAML files which describe the target(s) and all dependencies.
* Evaluates the version information and build instructions to calculate a build
graph for the target(s) and all dependencies and unique cache-keys for each
element
* retrieves elements from cache if they are already built, or builds them in a
sandboxed environment using the instructions declared in the .bst files
* transforms/configures and/or deploys the resulting target(s) based on the
element.
* Retrieves previously built elements (artifacts) from a local/remote cache, or
builds the elements in a sandboxed environment using the instructions declared
in the .bst files.
* Transforms/configures and/or deploys the resulting target(s) based on the
instructions declared in the .bst files.
How can I get started?
======================
The easiest way to get started is to explore some existing .bst files, for example:
To start using BuildStream, first,
`install <https://buildstream.gitlab.io/buildstream/main_install.html>`_
BuildStream onto your machine and then follow our
`tutorial <https://buildstream.gitlab.io/buildstream/using_tutorial.html>`_.
We also recommend exploring some existing BuildStream projects:
* https://gitlab.gnome.org/GNOME/gnome-build-meta/
* https://gitlab.com/freedesktop-sdk/freedesktop-sdk
* https://gitlab.com/baserock/definitions
* https://gitlab.com/BuildStream/buildstream-examples/tree/master/build-x86image
* https://gitlab.com/BuildStream/buildstream-examples/tree/master/netsurf-flatpak
If you have any questions, please ask on our `#buildstream <irc://irc.gnome.org/buildstream>`_ channel on `irc.gnome.org <irc://irc.gnome.org>`_.
......@@ -29,7 +29,7 @@ if "_BST_COMPLETION" not in os.environ:
from .utils import UtilError, ProgramNotFoundError
from .sandbox import Sandbox, SandboxFlags
from .plugin import Plugin
from .source import Source, SourceError, Consistency
from .source import Source, SourceError, Consistency, SourceFetcher
from .element import Element, ElementError, Scope
from .buildelement import BuildElement
from .scriptelement import ScriptElement
......@@ -21,7 +21,8 @@ import os
import string
from collections import Mapping, namedtuple
from .._exceptions import ImplError, LoadError, LoadErrorReason
from ..element import _KeyStrength
from .._exceptions import ArtifactError, ImplError, LoadError, LoadErrorReason
from .._message import Message, MessageType
from .. import utils
from .. import _yaml
......@@ -77,11 +78,16 @@ ArtifactCacheSpec.__new__.__defaults__ = (None, None, None)
class ArtifactCache():
def __init__(self, context):
self.context = context
self.required_artifacts = set()
self.extractdir = os.path.join(context.artifactdir, 'extract')
self.max_size = context.cache_quota
self.estimated_size = None
self.global_remote_specs = []
self.project_remote_specs = {}
self._local = False
self.cache_size = None
os.makedirs(context.artifactdir, exist_ok=True)
......@@ -179,10 +185,119 @@ class ArtifactCache():
(str(provenance)))
return cache_specs
# append_required_artifacts():
#
# Append to the list of elements whose artifacts are required for
# the current run. Artifacts whose elements are in this list will
# be locked by the artifact cache and not touched for the duration
# of the current pipeline.
#
# Args:
# elements (iterable): A set of elements to mark as required
#
def append_required_artifacts(self, elements):
# We lock both strong and weak keys - deleting one but not the
# other won't save space in most cases anyway, but would be a
# user inconvenience.
for element in elements:
strong_key = element._get_cache_key(strength=_KeyStrength.STRONG)
weak_key = element._get_cache_key(strength=_KeyStrength.WEAK)
for key in (strong_key, weak_key):
if key and key not in self.required_artifacts:
self.required_artifacts.add(key)
# We also update the usage times of any artifacts
# we will be using, which helps prevent a
# buildstream process that runs in parallel with
# this one from removing artifacts in use.
try:
self.update_atime(key)
except ArtifactError:
pass
# clean():
#
# Clean the artifact cache as much as possible.
#
def clean(self):
artifacts = self.list_artifacts()
while self.calculate_cache_size() >= self.context.cache_quota - self.context.cache_lower_threshold:
try:
to_remove = artifacts.pop(0)
except IndexError:
# If too many artifacts are required, and we therefore
# can't remove them, we have to abort the build.
#
# FIXME: Asking the user what to do may be neater
default_conf = os.path.join(os.environ['XDG_CONFIG_HOME'],
'buildstream.conf')
detail = ("There is not enough space to build the given element.\n"
"Please increase the cache-quota in {}."
.format(self.context.config_origin or default_conf))
if self.calculate_cache_size() > self.context.cache_quota:
raise ArtifactError("Cache too full. Aborting.",
detail=detail,
reason="cache-too-full")
else:
break
key = to_remove.rpartition('/')[2]
if key not in self.required_artifacts:
size = self.remove(to_remove)
if size:
self.cache_size -= size
# This should be O(1) if implemented correctly
return self.calculate_cache_size()
# get_approximate_cache_size()
#
# A cheap method that aims to serve as an upper limit on the
# artifact cache size.
#
# The cache size reported by this function will normally be larger
# than the real cache size, since it is calculated using the
# pre-commit artifact size, but for very small artifacts in
# certain caches additional overhead could cause this to be
# smaller than, but close to, the actual size.
#
# Nonetheless, in practice this should be safe to use as an upper
# limit on the cache size.
#
# If the cache has built-in constant-time size reporting, please
# feel free to override this method with a more accurate
# implementation.
#
# Returns:
# (int) An approximation of the artifact cache size.
#
def get_approximate_cache_size(self):
# If we don't currently have an estimate, figure out the real
# cache size.
if self.estimated_size is None:
self.estimated_size = self.calculate_cache_size()
return self.estimated_size
################################################
# Abstract methods for subclasses to implement #
################################################
# update_atime()
#
# Update the atime of an artifact.
#
# Args:
# key (str): The key of the artifact.
#
def update_atime(self, key):
raise ImplError("Cache '{kind}' does not implement update_atime()"
.format(kind=type(self).__name__))
# initialize_remotes():
#
# This will contact each remote cache.
......@@ -208,6 +323,32 @@ class ArtifactCache():
raise ImplError("Cache '{kind}' does not implement contains()"
.format(kind=type(self).__name__))
# list_artifacts():
#
# List artifacts in this cache in LRU order.
#
# Returns:
# ([str]) - A list of artifact names as generated by
# `ArtifactCache.get_artifact_fullname` in LRU order
#
def list_artifacts(self):
raise ImplError("Cache '{kind}' does not implement list_artifacts()"
.format(kind=type(self).__name__))
# remove():
#
# Removes the artifact for the specified ref from the local
# artifact cache.
#
# Args:
# ref (artifact_name): The name of the artifact to remove (as
# generated by
# `ArtifactCache.get_artifact_fullname`)
#
def remove(self, artifact_name):
raise ImplError("Cache '{kind}' does not implement remove()"
.format(kind=type(self).__name__))
# extract():
#
# Extract cached artifact for the specified Element if it hasn't
......@@ -328,6 +469,20 @@ class ArtifactCache():
raise ImplError("Cache '{kind}' does not implement link_key()"
.format(kind=type(self).__name__))
# calculate_cache_size()
#
# Return the real artifact cache size.
#
# Implementations should also use this to update estimated_size.
#
# Returns:
#
# (int) The size of the artifact cache.
#
def calculate_cache_size(self):
raise ImplError("Cache '{kind}' does not implement calculate_cache_size()"
.format(kind=type(self).__name__))
################################################
# Local Private Methods #
################################################
......@@ -369,6 +524,30 @@ class ArtifactCache():
with self.context.timed_activity("Initializing remote caches", silent_nested=True):
self.initialize_remotes(on_failure=remote_failed)
# _add_artifact_size()
#
# Since we cannot keep track of the cache size between threads,
# this method will be called by the main process every time a
# process that added something to the cache finishes.
#
# This will then add the reported size to
# ArtifactCache.estimated_size.
#
def _add_artifact_size(self, artifact_size):
if not self.estimated_size:
self.estimated_size = self.calculate_cache_size()
self.estimated_size += artifact_size
# _set_cache_size()
#
# Similarly to the above method, when we calculate the actual size
# in a child thread, we can't update it. We instead pass the value
# back to the main thread and update it there.
#
def _set_cache_size(self, cache_size):
self.estimated_size = cache_size
# _configured_remote_artifact_cache_specs():
#
......
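The abstract methods added above (``update_atime()``, ``list_artifacts()``, ``remove()`` and
``calculate_cache_size()``) describe the contract a cache backend must satisfy so that
``clean()`` can evict least recently used artifacts until the cache is back under its
target size. As a rough, hypothetical sketch (the ``DummyCache`` name and its in-memory
bookkeeping are invented for illustration only, not part of this change), a backend
implementing that contract could look like::

    import time

    class DummyCache():

        def __init__(self):
            # artifact name -> (size in bytes, last access time)
            self._artifacts = {}

        def update_atime(self, key):
            # Touch an artifact so that a parallel invocation treats it
            # as recently used and does not evict it
            size, _ = self._artifacts[key]
            self._artifacts[key] = (size, time.monotonic())

        def list_artifacts(self):
            # Least recently used artifacts first; clean() pops from the front
            return sorted(self._artifacts, key=lambda k: self._artifacts[k][1])

        def remove(self, artifact_name):
            # Return the number of bytes freed, so clean() can update cache_size
            size, _ = self._artifacts.pop(artifact_name)
            return size

        def calculate_cache_size(self):
            # Real backends should keep this cheap, ideally O(1)
            return sum(size for size, _ in self._artifacts.values())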
......@@ -77,7 +77,7 @@ class CASCache(ArtifactCache):
def extract(self, element, key):
ref = self.get_artifact_fullname(element, key)
tree = self.resolve_ref(ref)
tree = self.resolve_ref(ref, update_mtime=True)
dest = os.path.join(self.extractdir, element._get_project().name, element.normal_name, tree.hash)
if os.path.isdir(dest):
......@@ -113,6 +113,8 @@ class CASCache(ArtifactCache):
for ref in refs:
self.set_ref(ref, tree)
self.cache_size = None
def diff(self, element, key_a, key_b, *, subdir=None):
ref_a = self.get_artifact_fullname(element, key_a)
ref_b = self.get_artifact_fullname(element, key_b)
......@@ -219,6 +221,8 @@ class CASCache(ArtifactCache):
try:
remote.init()
element.info("Pulling {} <- {}".format(element._get_brief_display_key(), remote.spec.url))
request = buildstream_pb2.GetReferenceRequest()
request.key = ref
response = remote.ref_storage.GetReference(request)
......@@ -236,7 +240,8 @@ class CASCache(ArtifactCache):
except grpc.RpcError as e:
if e.code() != grpc.StatusCode.NOT_FOUND:
raise
raise ArtifactError("Failed to pull artifact {}: {}".format(
element._get_brief_display_key(), e)) from e
return False
......@@ -260,6 +265,8 @@ class CASCache(ArtifactCache):
for remote in push_remotes:
remote.init()
element.info("Pushing {} -> {}".format(element._get_brief_display_key(), remote.spec.url))
try:
for ref in refs:
tree = self.resolve_ref(ref)
......@@ -273,10 +280,13 @@ class CASCache(ArtifactCache):
if response.digest.hash == tree.hash and response.digest.size_bytes == tree.size_bytes:
# ref is already on the server with the same tree
element.info("Skipping {}, remote ({}) already has artifact cached".format(
element._get_brief_display_key(), remote.spec.url))
continue
except grpc.RpcError as e:
if e.code() != grpc.StatusCode.NOT_FOUND:
# Intentionally re-raise RpcError for outer except block.
raise
missing_blobs = {}
......@@ -332,7 +342,7 @@ class CASCache(ArtifactCache):
except grpc.RpcError as e:
if e.code() != grpc.StatusCode.RESOURCE_EXHAUSTED:
raise ArtifactError("Failed to push artifact {}: {}".format(refs, e)) from e
raise ArtifactError("Failed to push artifact {}: {}".format(refs, e), temporary=True) from e
return pushed
......@@ -448,6 +458,19 @@ class CASCache(ArtifactCache):
except FileNotFoundError as e:
raise ArtifactError("Attempt to access unavailable artifact: {}".format(e)) from e
def update_atime(self, ref):
try:
os.utime(self._refpath(ref))
except FileNotFoundError as e:
raise ArtifactError("Attempt to access unavailable artifact: {}".format(e)) from e
def calculate_cache_size(self):
if self.cache_size is None:
self.cache_size = utils._get_dir_size(self.casdir)
self.estimated_size = self.cache_size
return self.cache_size
# list_artifacts():
#
# List cached artifacts in Least Recently Modified (LRM) order.
......
......@@ -21,6 +21,7 @@ import os
import datetime
from collections import deque, Mapping
from contextlib import contextmanager
from . import utils
from . import _cachekey
from . import _signals
from . import _site
......@@ -30,6 +31,7 @@ from ._message import Message, MessageType
from ._profile import Topics, profile_start, profile_end
from ._artifactcache import ArtifactCache
from ._workspaces import Workspaces
from .plugin import _plugin_lookup
# Context()
......@@ -62,6 +64,12 @@ class Context():
# The locations from which to push and pull prebuilt artifacts
self.artifact_cache_specs = []
# The artifact cache quota
self.cache_quota = None
# The lower threshold to which we aim to reduce the cache size
self.cache_lower_threshold = None
# The directory to store build logs
self.logdir = None
......@@ -114,6 +122,8 @@ class Context():
self._projects = []
self._project_overrides = {}
self._workspaces = None
self._log_handle = None
self._log_filename = None
# load()
#
......@@ -153,6 +163,7 @@ class Context():
_yaml.node_validate(defaults, [
'sourcedir', 'builddir', 'artifactdir', 'logdir',
'scheduler', 'artifacts', 'logging', 'projects',
'cache'
])
for directory in ['sourcedir', 'builddir', 'artifactdir', 'logdir']:
......@@ -165,6 +176,53 @@ class Context():
path = os.path.normpath(path)
setattr(self, directory, path)
# Load quota configuration
# We need to find the first existing directory in the path of
# our artifactdir - the artifactdir may not have been created
# yet.
cache = _yaml.node_get(defaults, Mapping, 'cache')
_yaml.node_validate(cache, ['quota'])
artifactdir_volume = self.artifactdir
while not os.path.exists(artifactdir_volume):
artifactdir_volume = os.path.dirname(artifactdir_volume)
# We read and parse the cache quota as specified by the user
cache_quota = _yaml.node_get(cache, str, 'quota', default_value='infinity')
try:
cache_quota = utils._parse_size(cache_quota, artifactdir_volume)
except utils.UtilError as e:
raise LoadError(LoadErrorReason.INVALID_DATA,
"{}\nPlease specify the value in bytes or as a % of full disk space.\n"
"\nValid values are, for example: 800M 10G 1T 50%\n"
.format(str(e))) from e
# If we are asked not to set a quota, we set it to the maximum
# disk space available minus a headroom of 2GB, such that we
# at least try to avoid raising Exceptions.
#
# Of course, we might still end up running out during a build
# if we end up writing more than 2G, but hey, this stuff is
# already really fuzzy.
#
if cache_quota is None:
stat = os.statvfs(artifactdir_volume)
# Again, the artifact directory may not yet have been
# created
if not os.path.exists(self.artifactdir):
cache_size = 0
else:
cache_size = utils._get_dir_size(self.artifactdir)
cache_quota = cache_size + stat.f_bsize * stat.f_bavail
if 'BST_TEST_SUITE' in os.environ:
headroom = 0
else:
headroom = 2e9
self.cache_quota = cache_quota - headroom
self.cache_lower_threshold = self.cache_quota / 2
# Load artifact share configuration
self.artifact_cache_specs = ArtifactCache.specs_from_config_node(defaults)
......@@ -201,7 +259,7 @@ class Context():
# Shallow validation of overrides, parts of buildstream which rely
# on the overrides are expected to validate elsewhere.
for _, overrides in _yaml.node_items(self._project_overrides):
_yaml.node_validate(overrides, ['artifacts', 'options', 'strict'])
_yaml.node_validate(overrides, ['artifacts', 'options', 'strict', 'default-mirror'])
profile_end(Topics.LOAD_CONTEXT, 'load')
......@@ -330,8 +388,11 @@ class Context():
if message.depth is None:
message.depth = len(list(self._message_depth))
# If we are recording messages, dump a copy into the open log file.
self._record_message(message)
# Send it off to the log handler (can be the frontend,
# or it can be the child task which will log and propagate
# or it can be the child task which will propagate
# to the frontend)
assert self._message_handler
......@@ -401,6 +462,137 @@ class Context():
self._pop_message_depth()
self.message(message)
# recorded_messages()
#
# Records all messages in a log file while the context manager
# is active.
#
# In addition to automatically writing all messages to the
# specified logging file, an open file handle for process stdout
# and stderr will be available via the Context.get_log_handle() API,
# and the full logfile path will be available via the
# Context.get_log_filename() API.
#
# Args:
# filename (str): A logging directory relative filename,
# the pid and .log extension will be automatically
# appended
#
# Yields:
# (str): The fully qualified log filename
#
@contextmanager
def recorded_messages(self, filename):
# We don't allow recursing in this context manager, and
# we also do not allow it in the main process.
assert self._log_handle is None
assert self._log_filename is None
assert not utils._is_main_process()
# Create the fully qualified logfile in the log directory,
# appending the pid and .log extension at the end.
self._log_filename = os.path.join(self.logdir,
'{}.{}.log'.format(filename, os.getpid()))
# Ensure the directory exists first
directory = os.path.dirname(self._log_filename)
os.makedirs(directory, exist_ok=True)
with open(self._log_filename, 'a') as logfile:
# Write one last line to the log and flush it to disk
def flush_log():
# If the process currently has something happening in the I/O stack
# then trying to reenter the I/O stack will fire a runtime error.
#
# So just try to flush as well as we can at SIGTERM time
try:
logfile.write('\n\nForcefully terminated\n')
logfile.flush()
except RuntimeError:
os.fsync(logfile.fileno())
self._log_handle = logfile
with _signals.terminator(flush_log):
yield self._log_filename
self._log_handle = None
self._log_filename = None
# get_log_handle()
#
# Fetches the active log handle, this will return the active
# log file handle when the Context.recorded_messages() context
# manager is active
#
# Returns:
# (file): The active logging file handle, or None
#
def get_log_handle(self):
return self._log_handle
# get_log_filename()
#
# Fetches the active log filename, this will return the active
# log filename when the Context.recorded_messages() context
# manager is active
#
# Returns:
# (str): The active logging filename, or None
#
def get_log_filename(self):
return self._log_filename
# _record_message()
#
# Records the message if recording is enabled
#
# Args:
# message (Message): The message to record
#
def _record_message(self, message):
if self._log_handle is None:
return
INDENT = " "
EMPTYTIME = "--:--:--"
template = "[{timecode: <8}] {type: <7}"
# If this message is associated with a plugin, print what
# we know about the plugin.
plugin_name = ""
if message.unique_id:
template += " {plugin}"
plugin = _plugin_lookup(message.unique_id)
plugin_name = plugin.name
template += ": {message}"
detail = ''
if message.detail is not None:
template += "\n\n{detail}"
detail = message.detail.rstrip('\n')
detail = INDENT + INDENT.join(detail.splitlines(True))
timecode = EMPTYTIME
if message.message_type in (MessageType.SUCCESS, MessageType.FAIL):
hours, remainder = divmod(int(message.elapsed.total_seconds()), 60**2)
minutes, seconds = divmod(remainder, 60)
timecode = "{0:02d}:{1:02d}:{2:02d}".format(hours, minutes, seconds)
text = template.format(timecode=timecode,
plugin=plugin_name,
type=message.message_type.upper(),
message=message.message,
detail=detail)
# Write to the open log file
self._log_handle.write('{}\n'.format(text))
self._log_handle.flush()
# _push_message_depth() / _pop_message_depth()
#
# For status messages, send the depth of timed
......
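The ``recorded_messages()`` context manager documented above is intended to be entered
by child task processes (it asserts that it is not running in the main process). A rough
usage sketch, where the ``'build/hello'`` job name is invented for illustration, might
look like::

    # Assumes 'context' is a loaded Context and that we are running
    # inside a child task process
    with context.recorded_messages('build/hello') as log_filename:
        # Messages sent via context.message() are now also appended to
        # <logdir>/build/hello.<pid>.log
        print("Recording to", log_filename)

        # Plugins that shell out can stream a subprocess' stdout/stderr
        # into the same file via the open handle
        log_handle = context.get_log_handle()
        log_handle.write("output from some tool\n")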
......@@ -99,7 +99,7 @@ class ErrorDomain(Enum):
#
class BstError(Exception):
def __init__(self, message, *, detail=None, domain=None, reason=None):
def __init__(self, message, *, detail=None, domain=None, reason=None, temporary=False):
global _last_exception
super().__init__(message)
......@@ -114,6 +114,11 @@ class BstError(Exception):
#
self.sandbox = None
# When this exception occurred during the handling of a job, indicate
# whether or not there is any point retrying the job.
#
self.temporary = temporary
# Error domain and reason
#
self.domain = domain
......@@ -131,8 +136,8 @@ class BstError(Exception):
# or by the base :class:`.Plugin` element itself.
#
class PluginError(BstError):
def __init__(self, message, reason=None):
super().__init__(message, domain=ErrorDomain.PLUGIN, reason=reason)
def __init__(self, message, reason=None, temporary=False):
super().__init__(message, domain=ErrorDomain.PLUGIN, reason=reason, temporary=temporary)
# LoadErrorReason
......@@ -249,8 +254,8 @@ class SandboxError(BstError):
# Raised when errors are encountered in the artifact caches
#
class ArtifactError(BstError):
def __init__(self, message, reason=None):
super().__init__(message, domain=ErrorDomain.ARTIFACT, reason=reason)
def __init__(self, message, *, detail=None, reason=None, temporary=False):
super().__init__(message, detail=detail, domain=ErrorDomain.ARTIFACT, reason=reason, temporary=temporary)
# PipelineError
......
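The new ``temporary`` flag on ``BstError`` is what lets a caller decide whether a failed
job is worth retrying. A hedged, self-contained illustration of that idea follows; the
``FakeError`` class and ``run_with_retries()`` helper are invented for illustration and
are not part of BuildStream::

    class FakeError(Exception):
        # Mirrors the BstError.temporary attribute added in this change
        def __init__(self, message, *, temporary=False):
            super().__init__(message)
            self.temporary = temporary

    def run_with_retries(operation, max_retries=3):
        for attempt in range(1, max_retries + 1):
            try:
                return operation()
            except FakeError as e:
                # Only errors flagged as temporary (e.g. a flaky network
                # push) are worth retrying; permanent failures propagate
                if not e.temporary or attempt == max_retries:
                    raise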
......@@ -17,17 +17,16 @@
# Authors:
# Tristan Van Berkom <tristan.vanberkom@codethink.co.uk>
from contextlib import contextmanager
import os
import sys
import resource
import traceback
import datetime
from textwrap import TextWrapper
from contextlib import contextmanager
from blessings import Terminal
import click
from click import UsageError
from blessings import Terminal
# Import buildstream public symbols
from .. import Scope
......@@ -40,6 +39,7 @@ from .._message import Message, MessageType, unconditional_messages
from .._stream import Stream
from .._versions import BST_FORMAT_VERSION
from .. import _yaml
from .._scheduler import ElementJob
# Import frontend assets
from . import Profile, LogLine, Status
......@@ -202,7 +202,8 @@ class App():
# Load the Project
#
try:
self.project = Project(directory, self.context, cli_options=self._main_options['option'])
self.project = Project(directory, self.context, cli_options=self._main_options['option'],
default_mirror=self._main_options.get('default_mirror'))
except LoadError as e:
# Let's automatically start a `bst init` session in this case
......@@ -270,6 +271,10 @@ class App():
# Exit with the error
self._error_exit(e)
except RecursionError:
click.echo("RecursionError: Depency depth is too large. Maximum recursion depth exceeded.",
err=True)
sys.exit(-1)
else:
# No exceptions occurred, print session time and summary
......@@ -492,30 +497,37 @@ class App():
def _tick(self, elapsed):
self._maybe_render_status()
def _job_started(self, element, action_name):
self._status.add_job(element, action_name)
def _job_started(self, job):
self._status.add_job(job)
self._maybe_render_status()
def _job_completed(self, element, queue, action_name, success):
self._status.remove_job(element, action_name)
def _job_completed(self, job, success):
self._status.remove_job(job)
self._maybe_render_status()
# Don't attempt to handle a failure if the user has already opted to
# terminate
if not success and not self.stream.terminated:
# Get the last failure message for additional context
failure = self._fail_messages.get(element._get_unique_id())
if isinstance(job, ElementJob):
element = job.element
queue = job.queue
# Get the last failure message for additional context
failure = self._fail_messages.get(element._get_unique_id())
# XXX This is dangerous, sometimes we get the job completed *before*
# the failure message reaches us ??
if not failure:
self._status.clear()
click.echo("\n\n\nBUG: Message handling out of sync, " +
"unable to retrieve failure message for element {}\n\n\n\n\n"
.format(element), err=True)
# XXX This is dangerous, sometimes we get the job completed *before*
# the failure message reaches us ??
if not failure:
self._status.clear()
click.echo("\n\n\nBUG: Message handling out of sync, " +
"unable to retrieve failure message for element {}\n\n\n\n\n"
.format(element), err=True)
else:
self._handle_failure(element, queue, failure)
else:
self._handle_failure(element, queue, failure)
click.echo("\nTerminating all jobs\n", err=True)
self.stream.terminate()
def _handle_failure(self, element, queue, failure):
......
......@@ -217,6 +217,8 @@ def print_version(ctx, param, value):
help="Elements must be rebuilt when their dependencies have changed")
@click.option('--option', '-o', type=click.Tuple([str, str]), multiple=True, metavar='OPTION VALUE',
help="Specify a project option")
@click.option('--default-mirror', default=None,
help="The mirror to fetch from first, before attempting other mirrors")
@click.pass_context
def cli(context, **kwargs):
"""Build and manipulate BuildStream projects
......@@ -626,7 +628,7 @@ def shell(app, element, sysroot, mount, isolate, build_, command):
##################################################################
@cli.command(short_help="Checkout a built artifact")
@click.option('--force', '-f', default=False, is_flag=True,
help="Overwrite files existing in checkout directory")
help="Allow files to be overwritten")
@click.option('--deps', '-d', default='run',
type=click.Choice(['run', 'none']),
help='The dependencies to checkout (default: run)')
......@@ -634,20 +636,30 @@ def shell(app, element, sysroot, mount, isolate, build_, command):
help="Whether to run integration commands")
@click.option('--hardlinks', default=False, is_flag=True,
help="Checkout hardlinks instead of copies (handle with care)")
@click.option('--tar', default=False, is_flag=True,
help="Create a tarball from the artifact contents instead "
"of a file tree. If LOCATION is '-', the tarball "
"will be dumped to the standard output.")
@click.argument('element',
type=click.Path(readable=False))
@click.argument('directory', type=click.Path(file_okay=False))
@click.argument('location', type=click.Path())
@click.pass_obj
def checkout(app, element, directory, force, deps, integrate, hardlinks):
"""Checkout a built artifact to the specified directory
def checkout(app, element, location, force, deps, integrate, hardlinks, tar):
"""Checkout a built artifact to the specified location
"""
if hardlinks and tar:
click.echo("ERROR: options --hardlinks and --tar conflict", err=True)
sys.exit(-1)
with app.initialized():
app.stream.checkout(element,
directory=directory,
deps=deps,
location=location,
force=force,
deps=deps,
integrate=integrate,
hardlinks=hardlinks)
hardlinks=hardlinks,
tar=tar)
##################################################################
......@@ -815,4 +827,5 @@ def source_bundle(app, element, force, directory,
app.stream.source_bundle(element, directory,
track_first=track_,
force=force,
compression=compression)
compression=compression,
except_targets=except_)
......@@ -21,6 +21,7 @@ from blessings import Terminal
# Import a widget internal for formatting time codes
from .widget import TimeCode
from .._scheduler import ElementJob
# Status()
......@@ -77,9 +78,9 @@ class Status():
# element (Element): The element of the job to track
# action_name (str): The action name for this job
#
def add_job(self, element, action_name):
def add_job(self, job):
elapsed = self._stream.elapsed_time
job = _StatusJob(self._context, element, action_name, self._content_profile, self._format_profile, elapsed)
job = _StatusJob(self._context, job, self._content_profile, self._format_profile, elapsed)
self._jobs.append(job)
self._need_alloc = True
......@@ -91,7 +92,13 @@ class Status():
# element (Element): The element of the job to track
# action_name (str): The action name for this job
#
def remove_job(self, element, action_name):
def remove_job(self, job):
action_name = job.action_name
if not isinstance(job, ElementJob):
element = None
else:
element = job.element
self._jobs = [
job for job in self._jobs
if not (job.element is element and
......@@ -358,15 +365,19 @@ class _StatusHeader():
#
# Args:
# context (Context): The Context
# element (Element): The element being processed
# action_name (str): The name of the action
# job (Job): The job being processed
# content_profile (Profile): Formatting profile for content text
# format_profile (Profile): Formatting profile for formatting text
# elapsed (datetime): The offset into the session when this job is created
#
class _StatusJob():
def __init__(self, context, element, action_name, content_profile, format_profile, elapsed):
def __init__(self, context, job, content_profile, format_profile, elapsed):
action_name = job.action_name
if not isinstance(job, ElementJob):
element = None
else:
element = job.element
#
# Public members
......@@ -374,6 +385,7 @@ class _StatusJob():
self.element = element # The Element
self.action_name = action_name # The action name
self.size = None # The number of characters required to render
self.full_name = element._get_full_name() if element else action_name
#
# Private members
......@@ -386,7 +398,7 @@ class _StatusJob():
# Calculate the size needed to display
self.size = 10 # Size of time code with brackets
self.size += len(action_name)
self.size += len(element._get_full_name())
self.size += len(self.full_name)
self.size += 3 # '[' + ':' + ']'
# render()
......@@ -403,7 +415,7 @@ class _StatusJob():
self._format_profile.fmt(']')
# Add padding after the display name, before terminating ']'
name = self.element._get_full_name() + (' ' * padding)
name = self.full_name + (' ' * padding)
text += self._format_profile.fmt('[') + \
self._content_profile.fmt(self.action_name) + \
self._format_profile.fmt(':') + \
......
......@@ -415,7 +415,7 @@ class LogLine(Widget):
if "%{workspace-dirs" in format_:
workspace = element._get_workspace()
if workspace is not None:
path = workspace.path.replace(os.getenv('HOME', '/root'), '~')
path = workspace.get_absolute_path().replace(os.getenv('HOME', '/root'), '~')
line = p.fmt_subst(line, 'workspace-dirs', "Workspace: {}".format(path))
else:
line = p.fmt_subst(
......
......@@ -513,7 +513,7 @@ class Loader():
if self._fetch_subprojects:
if ticker:
ticker(filename, 'Fetching subproject from {} source'.format(source.get_kind()))
source.fetch()
source._fetch()
else:
detail = "Try fetching the project with `bst fetch {}`".format(filename)
raise LoadError(LoadErrorReason.SUBPROJECT_FETCH_NEEDED,
......
......@@ -19,7 +19,7 @@
import os
import multiprocessing # for cpu_count()
from collections import Mapping
from collections import Mapping, OrderedDict
from pluginbase import PluginBase
from . import utils
from . import _cachekey
......@@ -35,9 +35,6 @@ from ._projectrefs import ProjectRefs, ProjectRefStorage
from ._versions import BST_FORMAT_VERSION
# The separator we use for user specified aliases
_ALIAS_SEPARATOR = ':'
# Project Configuration file
_PROJECT_CONF_FILE = 'project.conf'
......@@ -70,7 +67,7 @@ class HostMount():
#
class Project():
def __init__(self, directory, context, *, junction=None, cli_options=None):
def __init__(self, directory, context, *, junction=None, cli_options=None, default_mirror=None):
# The project name
self.name = None
......@@ -94,6 +91,8 @@ class Project():
self.base_env_nocache = None # The base nocache mask (list) for the environment
self.element_overrides = {} # Element specific configurations
self.source_overrides = {} # Source specific configurations
self.mirrors = OrderedDict() # contains dicts of alias-mappings to URIs.
self.default_mirror = default_mirror # The name of the preferred mirror.
#
# Private Members
......@@ -133,8 +132,8 @@ class Project():
# fully qualified urls based on the shorthand which is allowed
# to be specified in the YAML
def translate_url(self, url):
if url and _ALIAS_SEPARATOR in url:
url_alias, url_body = url.split(_ALIAS_SEPARATOR, 1)
if url and utils._ALIAS_SEPARATOR in url:
url_alias, url_body = url.split(utils._ALIAS_SEPARATOR, 1)
alias_url = self._aliases.get(url_alias)
if alias_url:
url = alias_url + url_body
......@@ -202,6 +201,36 @@ class Project():
self._assert_plugin_format(source, version)
return source
# get_alias_uri()
#
# Returns the URI for a given alias, if it exists
#
# Args:
# alias (str): The alias.
#
# Returns:
# str: The URI for the given alias, or None if there is no URI for
# that alias.
def get_alias_uri(self, alias):
return self._aliases.get(alias)
# get_alias_uris()
#
# Returns a list of every URI to replace an alias with
def get_alias_uris(self, alias):
if not alias or alias not in self._aliases:
return [None]
mirror_list = []
for key, alias_mapping in self.mirrors.items():
if alias in alias_mapping:
if key == self.default_mirror:
mirror_list = alias_mapping[alias] + mirror_list
else:
mirror_list += alias_mapping[alias]
mirror_list.append(self._aliases[alias])
return mirror_list
# _load():
#
# Loads the project configuration file in the project directory.
......@@ -249,7 +278,7 @@ class Project():
'aliases', 'name',
'artifacts', 'options',
'fail-on-overlap', 'shell',
'ref-storage', 'sandbox'
'ref-storage', 'sandbox', 'mirrors',
])
# The project name, element path and option declarations
......@@ -290,6 +319,10 @@ class Project():
#
self.options.process_node(config)
# Override default_mirror if not set by command-line
if not self.default_mirror:
self.default_mirror = _yaml.node_get(overrides, str, 'default-mirror', default_value=None)
#
# Now all YAML composition is done, from here on we just load
# the values from our loaded configuration dictionary.
......@@ -414,6 +447,21 @@ class Project():
self._shell_host_files.append(mount)
mirrors = _yaml.node_get(config, list, 'mirrors', default_value=[])
for mirror in mirrors:
allowed_mirror_fields = [
'name', 'aliases'
]
_yaml.node_validate(mirror, allowed_mirror_fields)
mirror_name = _yaml.node_get(mirror, str, 'name')
alias_mappings = {}
for alias_mapping, uris in _yaml.node_items(mirror['aliases']):
assert isinstance(uris, list)
alias_mappings[alias_mapping] = list(uris)
self.mirrors[mirror_name] = alias_mappings
if not self.default_mirror:
self.default_mirror = mirror_name
# _assert_plugin_format()
#
# Helper to raise a PluginError if the loaded plugin is of a lesser version than
......
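To make the ordering implemented by ``get_alias_uris()`` concrete, the following
standalone sketch reproduces its logic with invented mirror names and URLs: the default
mirror's URIs come first, other mirrors follow in declaration order, and the original
aliased URI is always tried last::

    from collections import OrderedDict

    # Invented example data, shaped like the structures built in _load():
    # 'mirrors' maps mirror name -> {alias: [uri, ...]}, and 'aliases'
    # maps alias -> the original upstream URI.
    aliases = {'upstream': 'https://example.com/repos/'}
    mirrors = OrderedDict([
        ('mirror-a', {'upstream': ['https://a.example.net/repos/']}),
        ('mirror-b', {'upstream': ['https://b.example.org/repos/']}),
    ])
    default_mirror = 'mirror-b'

    def get_alias_uris(alias):
        if not alias or alias not in aliases:
            return [None]

        mirror_list = []
        for key, alias_mapping in mirrors.items():
            if alias in alias_mapping:
                if key == default_mirror:
                    # The preferred mirror's URIs go to the front of the list
                    mirror_list = alias_mapping[alias] + mirror_list
                else:
                    mirror_list += alias_mapping[alias]
        # The original (aliased) upstream URI is always tried last
        mirror_list.append(aliases[alias])
        return mirror_list

    # ['https://b.example.org/repos/',
    #  'https://a.example.net/repos/',
    #  'https://example.com/repos/']
    print(get_alias_uris('upstream'))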
......@@ -17,12 +17,13 @@
# Authors:
# Tristan Van Berkom <tristan.vanberkom@codethink.co.uk>
from .queue import Queue, QueueStatus, QueueType
from .queues import Queue, QueueStatus
from .fetchqueue import FetchQueue
from .trackqueue import TrackQueue
from .buildqueue import BuildQueue
from .pushqueue import PushQueue
from .pullqueue import PullQueue
from .queues.fetchqueue import FetchQueue
from .queues.trackqueue import TrackQueue
from .queues.buildqueue import BuildQueue
from .queues.pushqueue import PushQueue
from .queues.pullqueue import PullQueue
from .scheduler import Scheduler, SchedStatus
from .jobs import ElementJob
from .elementjob import ElementJob
from .cachesizejob import CacheSizeJob
from .cleanupjob import CleanupJob
# Copyright (C) 2018 Codethink Limited
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU Lesser General Public
# License as published by the Free Software Foundation; either
# version 2 of the License, or (at your option) any later version.
#
# This library is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public
# License along with this library. If not, see <http://www.gnu.org/licenses/>.
#
# Author:
# Tristan Daniël Maat <tristan.maat@codethink.co.uk>
#
from .job import Job
from ..._platform import Platform
class CacheSizeJob(Job):
def __init__(self, *args, complete_cb, **kwargs):
super().__init__(*args, **kwargs)
self._complete_cb = complete_cb
self._cache = Platform._instance.artifactcache
def child_process(self):
return self._cache.calculate_cache_size()
def parent_complete(self, success, result):
self._cache._set_cache_size(result)
if self._complete_cb:
self._complete_cb(result)
def child_process_data(self):
return {}
# Copyright (C) 2018 Codethink Limited
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU Lesser General Public
# License as published by the Free Software Foundation; either
# version 2 of the License, or (at your option) any later version.
#
# This library is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public
# License along with this library. If not, see <http://www.gnu.org/licenses/>.
#
# Author:
# Tristan Daniël Maat <tristan.maat@codethink.co.uk>
#
from .job import Job
from ..._platform import Platform
class CleanupJob(Job):
def __init__(self, *args, complete_cb, **kwargs):
super().__init__(*args, **kwargs)
self._complete_cb = complete_cb
self._cache = Platform._instance.artifactcache
def child_process(self):
return self._cache.clean()
def parent_complete(self, success, result):
self._cache._set_cache_size(result)
if self._complete_cb:
self._complete_cb()
def child_process_data(self):
return {}
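Both of the new job classes above exist because a cache size computed (or a cleanup
performed) in a forked child process cannot directly update the state of the main
process; the result has to be shipped back and applied via ``parent_complete()``. A
minimal, purely illustrative sketch of that round trip, where ``measure_cache()`` and
the ``multiprocessing`` wiring are invented stand-ins for the real scheduler::

    import multiprocessing

    def measure_cache():
        # Stand-in for ArtifactCache.calculate_cache_size() running in a
        # child process; the child's in-memory state is discarded on exit
        return 4096

    def main():
        with multiprocessing.Pool(processes=1) as pool:
            size = pool.apply(measure_cache)
        # Equivalent to CacheSizeJob.parent_complete() calling
        # ArtifactCache._set_cache_size(result) back in the main process
        print("cache size:", size)

    if __name__ == '__main__':
        main()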