
Compare revisions

Changes are shown as if the source revision was being merged into the target revision.

Commits on Source (128)
Showing 784 additions and 353 deletions
......@@ -31,6 +31,7 @@ variables:
- df -h
script:
- mkdir -p "${INTEGRATION_CACHE}"
- useradd -Um buildstream
- chown -R buildstream:buildstream .
......@@ -70,6 +71,10 @@ tests-python-3.7-stretch:
# some of our base dependencies declare it as their runtime dependency.
TOXENV: py37
tests-centos-7.6:
<<: *tests
image: buildstream/testsuite-centos:7.6-5da27168-32c47d1c
overnight-fedora-28-aarch64:
image: buildstream/testsuite-fedora:aarch64-28-5da27168-32c47d1c
tags:
......@@ -185,6 +190,9 @@ docs:
- pip3 install --user -e ${BST_EXT_URL}@${BST_EXT_REF}#egg=bst_ext
- git clone https://gitlab.com/freedesktop-sdk/freedesktop-sdk.git
- git -C freedesktop-sdk checkout ${FD_SDK_REF}
artifacts:
paths:
- "${HOME}/.cache/buildstream/logs"
only:
- schedules
......
......@@ -1259,14 +1259,9 @@ into the ``setup.py``, as such, whenever the frontend command line
interface changes, the static man pages should be regenerated and
committed with that.
To do this, first ensure you have ``click_man`` installed, possibly
with::
To do this, run the following from the toplevel directory of BuildStream::
pip3 install --user click_man
Then, in the toplevel directory of buildstream, run the following::
python3 setup.py --command-packages=click_man.commands man_pages
tox -e man
And commit the result, ensuring that you have added anything in
the ``man/`` subdirectory, which will be automatically included
......@@ -1782,7 +1777,7 @@ creating a tarball which contains everything we want it to include::
Updating BuildStream's Python dependencies
------------------------------------------
BuildStream's Python dependencies are listed in multiple
`requirements files <https://pip.readthedocs.io/en/latest/reference/pip_install/#requirements-file-format>`
`requirements files <https://pip.readthedocs.io/en/latest/reference/pip_install/#requirements-file-format>`_
present in the ``requirements`` directory.
All ``.txt`` files in this directory are generated from the corresponding
......
......@@ -2,6 +2,16 @@
buildstream 1.3.1
=================
o BREAKING CHANGE: The top level commands `checkout`, `push` and `pull` have
been moved to the `bst artifact` subcommand group and are now obsolete.
For example, you must now use `bst artifact pull hello.bst`.
The behaviour of `checkout` has changed. The previously mandatory LOCATION
argument should now be specified with the `--directory` option. In addition
to this, `--tar` is no longer a flag; it is an option mutually exclusive
with `--directory`. For example, `bst artifact checkout foo.bst --tar foo.tar.gz`.
o Added `bst artifact log` subcommand for viewing build logs.
o BREAKING CHANGE: The bst source-bundle command has been removed. The
......@@ -10,7 +20,7 @@ buildstream 1.3.1
an element's sources and generated build scripts you can do the command
`bst source-checkout --include-build-scripts --tar foo.bst some-file.tar`
o BREAKING CHANGE: `bst track` and `bst fetch` commands are now osbolete.
o BREAKING CHANGE: `bst track` and `bst fetch` commands are now obsolete.
Their functionality is provided by `bst source track` and
`bst source fetch` respectively.
......@@ -20,6 +30,10 @@ buildstream 1.3.1
specific. If you are building on Linux, the recommendation is to use the
ones used in the freedesktop-sdk project, for example
o Running commands without elements specified will now attempt to use
the default targets defined in the project configuration.
If no default target is defined, all elements in the project will be used.
o All elements must now be suffixed with `.bst`
Attempting to use an element that does not have the `.bst` extension
will result in a warning.
......@@ -36,6 +50,11 @@ buildstream 1.3.1
an error message and a hint instead, to avoid bothering folks that just
made a mistake.
o BREAKING CHANGE: The unconditional 'Are you sure?' prompts have been
removed. These would always ask you if you were sure when running
'bst workspace close --remove-dir' or 'bst workspace reset'. They got in
the way too often.
o Failed builds are included in the cache as well.
`bst checkout` will provide anything in `%{install-root}`.
A build including cached failures will cause any dependent elements
......@@ -73,12 +92,6 @@ buildstream 1.3.1
instead of just a specially-formatted build-root with a `root` and `scratch`
subdirectory.
o The buildstream.conf file learned new
'prompt.really-workspace-close-remove-dir' and
'prompt.really-workspace-reset-hard' options. These allow users to suppress
certain confirmation prompts, e.g. double-checking that the user meant to
run the command as typed.
o Due to the element `build tree` being cached in the respective artifact their
size in some cases has significantly increased. In *most* cases the build trees
are not utilised when building targets, as such by default bst 'pull' & 'build'
......
......@@ -46,6 +46,39 @@ class ArtifactCacheSpec(CASRemoteSpec):
pass
# ArtifactCacheUsage
#
# A simple object to report the current artifact cache
# usage details.
#
# Note that this uses the user configured cache quota
# rather than the internal quota with protective headroom
# removed, to provide a more sensible value to display to
# the user.
#
# Args:
# artifacts (ArtifactCache): The artifact cache to get the status of
#
class ArtifactCacheUsage():
def __init__(self, artifacts):
context = artifacts.context
self.quota_config = context.config_cache_quota # Configured quota
self.quota_size = artifacts._cache_quota_original # Resolved cache quota in bytes
self.used_size = artifacts.get_cache_size() # Size used by artifacts in bytes
self.used_percent = 0 # Percentage of the quota used
if self.quota_size is not None:
self.used_percent = int(self.used_size * 100 / self.quota_size)
# Formattable into a human-readable string
#
def __str__(self):
return "{} / {} ({}%)" \
.format(utils._pretty_size(self.used_size, dec_places=1),
self.quota_config,
self.used_percent)
# An ArtifactCache manages artifacts.
#
# Args:
......@@ -64,6 +97,8 @@ class ArtifactCache():
self._required_elements = set() # The elements required for this session
self._cache_size = None # The current cache size, sometimes it's an estimate
self._cache_quota = None # The cache quota
self._cache_quota_original = None # The cache quota as specified by the user, in bytes
self._cache_quota_headroom = None # The headroom in bytes before reaching the quota or full disk
self._cache_lower_threshold = None # The target cache size for a cleanup
self._remotes_setup = False # Check to prevent double-setup of remotes
......@@ -126,7 +161,7 @@ class ArtifactCache():
self._remotes_setup = True
# Initialize remote artifact caches. We allow the commandline to override
# the user config in some cases (for example `bst push --remote=...`).
# the user config in some cases (for example `bst artifact push --remote=...`).
has_remote_caches = False
if remote_url:
self._set_remotes([ArtifactCacheSpec(remote_url, push=True)])
......@@ -216,11 +251,33 @@ class ArtifactCache():
#
# Clean the artifact cache as much as possible.
#
# Args:
# progress (callable): A callback to call when a ref is removed
#
# Returns:
# (int): The size of the cache after having cleaned up
#
def clean(self):
def clean(self, progress=None):
artifacts = self.list_artifacts()
context = self.context
# Some cumulative statistics
removed_ref_count = 0
space_saved = 0
# Start off with an announcement with as much info as possible
volume_size, volume_avail = self._get_cache_volume_size()
self._message(MessageType.STATUS, "Starting cache cleanup",
detail=("Elements required by the current build plan: {}\n" +
"User specified quota: {} ({})\n" +
"Cache usage: {}\n" +
"Cache volume: {} total, {} available")
.format(len(self._required_elements),
context.config_cache_quota,
utils._pretty_size(self._cache_quota_original, dec_places=2),
utils._pretty_size(self.get_cache_size(), dec_places=2),
utils._pretty_size(volume_size, dec_places=2),
utils._pretty_size(volume_avail, dec_places=2)))
# Build a set of the cache keys which are required
# based on the required elements at cleanup time
......@@ -245,13 +302,20 @@ class ArtifactCache():
# can't remove them, we have to abort the build.
#
# FIXME: Asking the user what to do may be neater
#
default_conf = os.path.join(os.environ['XDG_CONFIG_HOME'],
'buildstream.conf')
detail = ("There is not enough space to complete the build.\n"
"Please increase the cache-quota in {}."
.format(self.context.config_origin or default_conf))
if self.has_quota_exceeded():
detail = ("Aborted after removing {} refs and saving {} disk space.\n"
"The remaining {} in the cache is required by the {} elements in your build plan\n\n"
"There is not enough space to complete the build.\n"
"Please increase the cache-quota in {} and/or make more disk space."
.format(removed_ref_count,
utils._pretty_size(space_saved, dec_places=2),
utils._pretty_size(self.get_cache_size(), dec_places=2),
len(self._required_elements),
(context.config_origin or default_conf)))
if self.full():
raise ArtifactError("Cache too full. Aborting.",
detail=detail,
reason="cache-too-full")
......@@ -264,10 +328,33 @@ class ArtifactCache():
# Remove the actual artifact, if it's not required.
size = self.remove(to_remove)
removed_ref_count += 1
space_saved += size
self._message(MessageType.STATUS,
"Freed {: <7} {}".format(
utils._pretty_size(size, dec_places=2),
to_remove))
# Subtract the removed artifact's size from the tracked cache size
self.set_cache_size(self._cache_size - size)
# This should be O(1) if implemented correctly
# User callback
#
# Currently this process is fairly slow, but we should
# think about throttling this progress() callback if this
# becomes too intense.
if progress:
progress()
# Informational message about the side effects of the cleanup
self._message(MessageType.INFO, "Cleanup completed",
detail=("Removed {} refs and saving {} disk space.\n" +
"Cache usage is now: {}")
.format(removed_ref_count,
utils._pretty_size(space_saved, dec_places=2),
utils._pretty_size(self.get_cache_size(), dec_places=2)))
return self.get_cache_size()
# compute_cache_size()
......@@ -279,7 +366,14 @@ class ArtifactCache():
# (int): The size of the artifact cache.
#
def compute_cache_size(self):
self._cache_size = self.cas.calculate_cache_size()
old_cache_size = self._cache_size
new_cache_size = self.cas.calculate_cache_size()
if old_cache_size != new_cache_size:
self._cache_size = new_cache_size
usage = ArtifactCacheUsage(self)
self._message(MessageType.STATUS, "Cache usage recomputed: {}".format(usage))
return self._cache_size
......@@ -307,7 +401,7 @@ class ArtifactCache():
# it is greater than the actual cache size.
#
# Returns:
# (int) An approximation of the artifact cache size.
# (int) An approximation of the artifact cache size, in bytes.
#
def get_cache_size(self):
......@@ -338,15 +432,25 @@ class ArtifactCache():
self._cache_size = cache_size
self._write_cache_size(self._cache_size)
# has_quota_exceeded()
# full()
#
# Checks if the current artifact cache size exceeds the quota.
# Checks if the artifact cache is full, either
# because the user configured quota has been exceeded
# or because the underlying disk is almost full.
#
# Returns:
# (bool): True of the quota is exceeded
# (bool): True if the artifact cache is full
#
def has_quota_exceeded(self):
return self.get_cache_size() > self._cache_quota
def full(self):
if self.get_cache_size() > self._cache_quota:
return True
_, volume_avail = self._get_cache_volume_size()
if volume_avail < self._cache_quota_headroom:
return True
return False
# preflight():
#
......@@ -459,8 +563,7 @@ class ArtifactCache():
# `ArtifactCache.get_artifact_fullname`)
#
# Returns:
# (int|None) The amount of space pruned from the repository in
# Bytes, or None if defer_prune is True
# (int): The amount of space recovered in the cache, in bytes
#
def remove(self, ref):
......@@ -844,23 +947,20 @@ class ArtifactCache():
# is taken from the user requested cache_quota.
#
if 'BST_TEST_SUITE' in os.environ:
headroom = 0
self._cache_quota_headroom = 0
else:
headroom = 2e9
artifactdir_volume = self.context.artifactdir
while not os.path.exists(artifactdir_volume):
artifactdir_volume = os.path.dirname(artifactdir_volume)
self._cache_quota_headroom = 2e9
try:
cache_quota = utils._parse_size(self.context.config_cache_quota, artifactdir_volume)
cache_quota = utils._parse_size(self.context.config_cache_quota,
self.context.artifactdir)
except utils.UtilError as e:
raise LoadError(LoadErrorReason.INVALID_DATA,
"{}\nPlease specify the value in bytes or as a % of full disk space.\n"
"\nValid values are, for example: 800M 10G 1T 50%\n"
.format(str(e))) from e
available_space, total_size = self._get_volume_space_info_for(artifactdir_volume)
total_size, available_space = self._get_cache_volume_size()
cache_size = self.get_cache_size()
# Ensure system has enough storage for the cache_quota
......@@ -871,27 +971,39 @@ class ArtifactCache():
#
if cache_quota is None: # Infinity, set to max system storage
cache_quota = cache_size + available_space
if cache_quota < headroom: # Check minimum
if cache_quota < self._cache_quota_headroom: # Check minimum
raise LoadError(LoadErrorReason.INVALID_DATA,
"Invalid cache quota ({}): ".format(utils._pretty_size(cache_quota)) +
"BuildStream requires a minimum cache quota of 2G.")
elif cache_quota > cache_size + available_space: # Check maximum
elif cache_quota > total_size:
# A quota greater than the total disk size is certainly an error
raise ArtifactError("Your system does not have enough available " +
"space to support the cache quota specified.",
detail=("You have specified a quota of {quota} total disk space.\n" +
"The filesystem containing {local_cache_path} only " +
"has {total_size} total disk space.")
.format(
quota=self.context.config_cache_quota,
local_cache_path=self.context.artifactdir,
total_size=utils._pretty_size(total_size)),
reason='insufficient-storage-for-quota')
elif cache_quota > cache_size + available_space:
# The quota does not fit in the available space, this is a warning
if '%' in self.context.config_cache_quota:
available = (available_space / total_size) * 100
available = '{}% of total disk space'.format(round(available, 1))
else:
available = utils._pretty_size(available_space)
raise LoadError(LoadErrorReason.INVALID_DATA,
("Your system does not have enough available " +
"space to support the cache quota specified.\n" +
"\nYou have specified a quota of {quota} total disk space.\n" +
"- The filesystem containing {local_cache_path} only " +
"has: {available_size} available.")
.format(
quota=self.context.config_cache_quota,
local_cache_path=self.context.artifactdir,
available_size=available))
self._message(MessageType.WARN,
"Your system does not have enough available " +
"space to support the cache quota specified.",
detail=("You have specified a quota of {quota} total disk space.\n" +
"The filesystem containing {local_cache_path} only " +
"has {available_size} available.")
.format(quota=self.context.config_cache_quota,
local_cache_path=self.context.artifactdir,
available_size=available))
# Place a slight headroom of 2e9 (2GB) below the cache_quota
# to try and avoid exceptions.
......@@ -900,22 +1012,25 @@ class ArtifactCache():
# if we end up writing more than 2G, but hey, this stuff is
# already really fuzzy.
#
self._cache_quota = cache_quota - headroom
self._cache_quota_original = cache_quota
self._cache_quota = cache_quota - self._cache_quota_headroom
self._cache_lower_threshold = self._cache_quota / 2
# _get_volume_space_info_for
# _get_cache_volume_size()
#
# Get the available space and total space for the given volume
#
# Args:
# volume: volume for which to get the size
# Get the available space and total space for the volume on
# which the artifact cache is located.
#
# Returns:
# A tuple containing first the availabe number of bytes on the requested
# volume, then the total number of bytes of the volume.
def _get_volume_space_info_for(self, volume):
stat = os.statvfs(volume)
return stat.f_bsize * stat.f_bavail, stat.f_bsize * stat.f_blocks
# (int): The total number of bytes on the volume
# (int): The number of available bytes on the volume
#
# NOTE: We use this stub to allow the test cases
# to override what an artifact cache thinks
# about its disk size and available bytes.
#
def _get_cache_volume_size(self):
return utils._get_volume_size(self.context.artifactdir)
# _configured_remote_artifact_cache_specs():
......
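For reference, ``utils._get_volume_size`` itself is not shown in this diff. A minimal
sketch of what it presumably does, reconstructed from the removed
``_get_volume_space_info_for()`` helper and the removed ``artifactdir_volume`` walk
above (note the swapped return order, now ``(total, available)``)::

    import os

    def _get_volume_size(path):
        # The cache directory may not exist yet; stat the nearest
        # existing ancestor instead (as the removed walk did).
        while not os.path.exists(path):
            path = os.path.dirname(path)
        stat_ = os.statvfs(path)
        # Return (total bytes, available bytes), matching the new
        # _get_cache_volume_size() documentation.
        return stat_.f_bsize * stat_.f_blocks, stat_.f_bsize * stat_.f_bavail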
......@@ -21,7 +21,7 @@ import hashlib
import itertools
import os
import stat
import tempfile
import errno
import uuid
import contextlib
......@@ -129,7 +129,7 @@ class CASCache():
else:
return dest
with tempfile.TemporaryDirectory(prefix='tmp', dir=self.tmpdir) as tmpdir:
with utils._tempdir(prefix='tmp', dir=self.tmpdir) as tmpdir:
checkoutdir = os.path.join(tmpdir, ref)
self._checkout(checkoutdir, tree)
......@@ -374,7 +374,7 @@ class CASCache():
for chunk in iter(lambda: tmp.read(4096), b""):
h.update(chunk)
else:
tmp = stack.enter_context(tempfile.NamedTemporaryFile(dir=self.tmpdir))
tmp = stack.enter_context(utils._tempnamedfile(dir=self.tmpdir))
# Set mode bits to 0644
os.chmod(tmp.name, stat.S_IRUSR | stat.S_IWUSR | stat.S_IRGRP | stat.S_IROTH)
......@@ -545,11 +545,7 @@ class CASCache():
def remove(self, ref, *, defer_prune=False):
# Remove cache ref
refpath = self._refpath(ref)
if not os.path.exists(refpath):
raise CASCacheError("Could not find ref '{}'".format(ref))
os.unlink(refpath)
self._remove_ref(ref)
if not defer_prune:
pruned = self.prune()
......@@ -626,6 +622,55 @@ class CASCache():
def _refpath(self, ref):
return os.path.join(self.casdir, 'refs', 'heads', ref)
# _remove_ref()
#
# Removes a ref.
#
# This also takes care of pruning away directories which can
# be removed after having removed the given ref.
#
# Args:
# ref (str): The ref to remove
#
# Raises:
# (CASCacheError): If the ref didn't exist, or a system error
# occurred while removing it
#
def _remove_ref(self, ref):
# Remove the ref itself
refpath = self._refpath(ref)
try:
os.unlink(refpath)
except FileNotFoundError as e:
raise CASCacheError("Could not find ref '{}'".format(ref)) from e
# Now remove any leading directories
basedir = os.path.join(self.casdir, 'refs', 'heads')
components = list(os.path.split(ref))
while components:
components.pop()
refdir = os.path.join(basedir, *components)
# Break out once we reach the base
if refdir == basedir:
break
try:
os.rmdir(refdir)
except FileNotFoundError:
# The parent directory did not exist, but its
# parent directory might still be ready to prune
pass
except OSError as e:
if e.errno == errno.ENOTEMPTY:
# The parent directory was not empty, so we
# cannot prune directories beyond this point
break
# Something went wrong here
raise CASCacheError("System error while removing ref '{}': {}".format(ref, e)) from e
# _commit_directory():
#
# Adds local directory to content addressable store.
......@@ -797,7 +842,7 @@ class CASCache():
# already in local repository
return objpath
with tempfile.NamedTemporaryFile(dir=self.tmpdir) as f:
with utils._tempnamedfile(dir=self.tmpdir) as f:
remote._fetch_blob(digest, f)
added_digest = self.add_object(path=f.name, link_directly=True)
......@@ -807,7 +852,7 @@ class CASCache():
def _batch_download_complete(self, batch):
for digest, data in batch.send():
with tempfile.NamedTemporaryFile(dir=self.tmpdir) as f:
with utils._tempnamedfile(dir=self.tmpdir) as f:
f.write(data)
f.flush()
......@@ -904,7 +949,7 @@ class CASCache():
def _fetch_tree(self, remote, digest):
# download but do not store the Tree object
with tempfile.NamedTemporaryFile(dir=self.tmpdir) as out:
with utils._tempnamedfile(dir=self.tmpdir) as out:
remote._fetch_blob(digest, out)
tree = remote_execution_pb2.Tree()
......
......@@ -324,7 +324,7 @@ class _ContentAddressableStorageServicer(remote_execution_pb2_grpc.ContentAddres
blob_response.digest.size_bytes = digest.size_bytes
if len(blob_request.data) != digest.size_bytes:
blob_response.status.code = grpc.StatusCode.FAILED_PRECONDITION
blob_response.status.code = code_pb2.FAILED_PRECONDITION
continue
try:
......@@ -335,10 +335,10 @@ class _ContentAddressableStorageServicer(remote_execution_pb2_grpc.ContentAddres
out.flush()
server_digest = self.cas.add_object(path=out.name)
if server_digest.hash != digest.hash:
blob_response.status.code = grpc.StatusCode.FAILED_PRECONDITION
blob_response.status.code = code_pb2.FAILED_PRECONDITION
except ArtifactTooLargeException:
blob_response.status.code = grpc.StatusCode.RESOURCE_EXHAUSTED
blob_response.status.code = code_pb2.RESOURCE_EXHAUSTED
return response
......
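A note on the ``code_pb2`` change above: the per-blob ``status`` field in a CAS batch
response is a ``google.rpc.Status`` message, whose ``code`` field is a plain integer
from ``google.rpc.code_pb2``; ``grpc.StatusCode`` is a Python enum and is not
assignable to that field. A quick illustration, assuming the standard
``googleapis-common-protos`` package::

    from google.rpc import code_pb2, status_pb2

    status = status_pb2.Status()
    status.code = code_pb2.FAILED_PRECONDITION  # plain int, assignment works
    # status.code = grpc.StatusCode.FAILED_PRECONDITION would raise a TypeError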
......@@ -30,9 +30,9 @@ from . import _yaml
from ._exceptions import LoadError, LoadErrorReason, BstError
from ._message import Message, MessageType
from ._profile import Topics, profile_start, profile_end
from ._artifactcache import ArtifactCache
from ._artifactcache import ArtifactCache, ArtifactCacheUsage
from ._cas import CASCache
from ._workspaces import Workspaces, WorkspaceProjectCache, WORKSPACE_PROJECT_FILE
from ._workspaces import Workspaces, WorkspaceProjectCache
from .plugin import _plugin_lookup
from .sandbox import SandboxRemote
......@@ -121,18 +121,10 @@ class Context():
# Whether or not to attempt to pull build trees globally
self.pull_buildtrees = None
# Boolean, whether we double-check with the user that they meant to
# remove a workspace directory.
self.prompt_workspace_close_remove_dir = None
# Boolean, whether we double-check with the user that they meant to
# close the workspace when they're using it to access the project.
self.prompt_workspace_close_project_inaccessible = None
# Boolean, whether we double-check with the user that they meant to do
# a hard reset of a workspace, potentially losing changes.
self.prompt_workspace_reset_hard = None
# Whether elements must be rebuilt when their dependencies have changed
self._strict_build_plan = None
......@@ -260,16 +252,10 @@ class Context():
prompt = _yaml.node_get(
defaults, Mapping, 'prompt')
_yaml.node_validate(prompt, [
'really-workspace-close-remove-dir',
'really-workspace-close-project-inaccessible',
'really-workspace-reset-hard',
])
self.prompt_workspace_close_remove_dir = _node_get_option_str(
prompt, 'really-workspace-close-remove-dir', ['ask', 'yes']) == 'ask'
self.prompt_workspace_close_project_inaccessible = _node_get_option_str(
prompt, 'really-workspace-close-project-inaccessible', ['ask', 'yes']) == 'ask'
self.prompt_workspace_reset_hard = _node_get_option_str(
prompt, 'really-workspace-reset-hard', ['ask', 'yes']) == 'ask'
# Load per-projects overrides
self._project_overrides = _yaml.node_get(defaults, Mapping, 'projects', default_value={})
......@@ -289,6 +275,16 @@ class Context():
return self._artifactcache
# get_artifact_cache_usage()
#
# Fetches the current usage of the artifact cache
#
# Returns:
# (ArtifactCacheUsage): The current status
#
def get_artifact_cache_usage(self):
return ArtifactCacheUsage(self.artifactcache)
# add_project():
#
# Add a project to the context.
......@@ -657,20 +653,6 @@ class Context():
self._cascache = CASCache(self.artifactdir)
return self._cascache
# guess_element()
#
# Attempts to interpret which element the user intended to run commands on
#
# Returns:
# (str) The name of the element, or None if no element can be guessed
def guess_element(self):
workspace_project_dir, _ = utils._search_upward_for_files(self._directory, [WORKSPACE_PROJECT_FILE])
if workspace_project_dir:
workspace_project = self._workspace_project_cache.get(workspace_project_dir)
return workspace_project.get_default_element()
else:
return None
# _node_get_option_str()
#
......
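A usage sketch for the new ``Context.get_artifact_cache_usage()`` API (the call site
below is hypothetical; ``ArtifactCacheUsage`` and its fields come from the
artifactcache diff above)::

    usage = context.get_artifact_cache_usage()
    print("Cache usage: {}".format(usage))  # e.g. "1.2G / 10G (12%)"
    if usage.used_percent >= 95:            # same threshold the status bar
        print("Cache is nearly full")       # uses for its error color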
......@@ -194,11 +194,6 @@ class App():
except BstError as e:
self._error_exit(e, "Error instantiating platform")
try:
self.context.artifactcache.preflight()
except BstError as e:
self._error_exit(e, "Error instantiating artifact cache")
# Create the logger right before setting the message handler
self.logger = LogLine(self.context,
self._content_profile,
......@@ -211,6 +206,13 @@ class App():
# Propagate pipeline feedback to the user
self.context.set_message_handler(self._message_handler)
# Preflight the artifact cache after initializing logging,
# this can cause messages to be emitted.
try:
self.context.artifactcache.preflight()
except BstError as e:
self._error_exit(e, "Error instantiating artifact cache")
#
# Load the Project
#
......
This diff is collapsed.
......@@ -353,13 +353,17 @@ class _StatusHeader():
def render(self, line_length, elapsed):
project = self._context.get_toplevel_project()
line_length = max(line_length, 80)
size = 0
text = ''
#
# Line 1: Session time, project name, session / total elements
#
# ========= 00:00:00 project-name (143/387) =========
#
session = str(len(self._stream.session_elements))
total = str(len(self._stream.total_elements))
# Format and calculate size for target and overall time code
size = 0
text = ''
size += len(total) + len(session) + 4 # Size for (N/N) with a leading space
size += 8 # Size of time code
size += len(project.name) + 1
......@@ -372,6 +376,12 @@ class _StatusHeader():
self._format_profile.fmt(')')
line1 = self._centered(text, size, line_length, '=')
#
# Line 2: Dynamic list of queue status reports
#
# (Fetched:0 117 0)→ (Built:4 0 0)
#
size = 0
text = ''
......@@ -389,10 +399,28 @@ class _StatusHeader():
line2 = self._centered(text, size, line_length, ' ')
size = 24
text = self._format_profile.fmt("~~~~~ ") + \
self._content_profile.fmt('Active Tasks') + \
self._format_profile.fmt(" ~~~~~")
#
# Line 3: Cache usage percentage report
#
# ~~~~~~ cache: 69% ~~~~~~
#
usage = self._context.get_artifact_cache_usage()
usage_percent = '{}%'.format(usage.used_percent)
size = 21
size += len(usage_percent)
if usage.used_percent >= 95:
formatted_usage_percent = self._error_profile.fmt(usage_percent)
elif usage.used_percent >= 80:
formatted_usage_percent = self._content_profile.fmt(usage_percent)
else:
formatted_usage_percent = self._success_profile.fmt(usage_percent)
text = self._format_profile.fmt("~~~~~~ ") + \
self._content_profile.fmt('cache') + \
self._format_profile.fmt(': ') + \
formatted_usage_percent + \
self._format_profile.fmt(' ~~~~~~')
line3 = self._centered(text, size, line_length, ' ')
return line1 + '\n' + line2 + '\n' + line3
......
......@@ -175,29 +175,22 @@ class TypeName(Widget):
# A widget for displaying the Element name
class ElementName(Widget):
def __init__(self, context, content_profile, format_profile):
super(ElementName, self).__init__(context, content_profile, format_profile)
# Pre initialization format string, before we know the length of
# element names in the pipeline
self._fmt_string = '{: <30}'
def render(self, message):
action_name = message.action_name
element_id = message.task_id or message.unique_id
if element_id is None:
return ""
plugin = _plugin_lookup(element_id)
name = plugin._get_full_name()
if element_id is not None:
plugin = _plugin_lookup(element_id)
name = plugin._get_full_name()
name = '{: <30}'.format(name)
else:
name = 'core activity'
name = '{: <30}'.format(name)
# Sneak the action name in with the element name
action_name = message.action_name
if not action_name:
action_name = "Main"
return self.content_profile.fmt("{: >5}".format(action_name.lower())) + \
self.format_profile.fmt(':') + \
self.content_profile.fmt(self._fmt_string.format(name))
self.format_profile.fmt(':') + self.content_profile.fmt(name)
# A widget for displaying the primary message text
......@@ -219,9 +212,12 @@ class CacheKey(Widget):
def render(self, message):
element_id = message.task_id or message.unique_id
if element_id is None or not self._key_length:
if not self._key_length:
return ""
if element_id is None:
return ' ' * self._key_length
missing = False
key = ' ' * self._key_length
plugin = _plugin_lookup(element_id)
......@@ -456,6 +452,7 @@ class LogLine(Widget):
values["Session Start"] = starttime.strftime('%A, %d-%m-%Y at %H:%M:%S')
values["Project"] = "{} ({})".format(project.name, project.directory)
values["Targets"] = ", ".join([t.name for t in stream.targets])
values["Cache Usage"] = "{}".format(context.get_artifact_cache_usage())
text += self._format_values(values)
# User configurations
......@@ -647,8 +644,9 @@ class LogLine(Widget):
abbrev = False
if message.message_type not in ERROR_MESSAGES \
and not frontend_message and n_lines > self._message_lines:
abbrev = True
lines = lines[0:self._message_lines]
if self._message_lines > 0:
abbrev = True
else:
lines[n_lines - 1] = lines[n_lines - 1].rstrip('\n')
......@@ -674,7 +672,7 @@ class LogLine(Widget):
if self.context is not None and not self.context.log_verbose:
text += self._indent + self._err_profile.fmt("Log file: ")
text += self._indent + self._logfile_widget.render(message) + '\n'
else:
elif self._log_lines > 0:
text += self._indent + self._err_profile.fmt("Printing the last {} lines from log file:"
.format(self._log_lines)) + '\n'
text += self._indent + self._logfile_widget.render(message, abbrev=False) + '\n'
......
......@@ -112,7 +112,8 @@ class GitMirror(SourceFetcher):
else:
remote_name = "origin"
self.source.call([self.source.host_git, 'fetch', remote_name, '--prune', '--force', '--tags'],
self.source.call([self.source.host_git, 'fetch', remote_name, '--prune',
'+refs/heads/*:refs/heads/*', '+refs/tags/*:refs/tags/*'],
fail="Failed to fetch from remote git repository: {}".format(url),
fail_temporarily=True,
cwd=self.mirror)
......@@ -604,7 +605,7 @@ class _GitSourceBase(Source):
detail = "The ref provided for the element does not exist locally " + \
"in the provided track branch / tag '{}'.\n".format(self.tracking) + \
"You may wish to track the element to update the ref from '{}' ".format(self.tracking) + \
"with `bst track`,\n" + \
"with `bst source track`,\n" + \
"or examine the upstream at '{}' for the specific ref.".format(self.mirror.url)
self.warn("{}: expected ref '{}' was not found in given track '{}' for staged repository: '{}'\n"
......
......@@ -557,7 +557,7 @@ class Loader():
ticker(filename, 'Fetching subproject from {} source'.format(source.get_kind()))
source._fetch(sources[0:idx])
else:
detail = "Try fetching the project with `bst fetch {}`".format(filename)
detail = "Try fetching the project with `bst source fetch {}`".format(filename)
raise LoadError(LoadErrorReason.SUBPROJECT_FETCH_NEEDED,
"Subproject fetch needed for junction: {}".format(filename),
detail=detail)
......@@ -565,7 +565,7 @@ class Loader():
# Handle the case where a subproject has no ref
#
elif source.get_consistency() == Consistency.INCONSISTENT:
detail = "Try tracking the junction element with `bst track {}`".format(filename)
detail = "Try tracking the junction element with `bst source track {}`".format(filename)
raise LoadError(LoadErrorReason.SUBPROJECT_INCONSISTENT,
"Subproject has no ref for junction: {}".format(filename),
detail=detail)
......
......@@ -373,7 +373,7 @@ class Pipeline():
if source._get_consistency() == Consistency.INCONSISTENT:
detail += " {} is missing ref\n".format(source)
detail += '\n'
detail += "Try tracking these elements first with `bst track`\n"
detail += "Try tracking these elements first with `bst source track`\n"
raise PipelineError("Inconsistent pipeline", detail=detail, reason="inconsistent-pipeline")
......@@ -406,7 +406,7 @@ class Pipeline():
if source._get_consistency() != Consistency.CACHED:
detail += " {}\n".format(source)
detail += '\n'
detail += "Try fetching these elements first with `bst fetch`,\n" + \
detail += "Try fetching these elements first with `bst source fetch`,\n" + \
"or run this command with `--fetch` option\n"
raise PipelineError("Uncached sources", detail=detail, reason="uncached-sources")
......
#
# Copyright (C) 2017 Codethink Limited
# Copyright (C) 2019 Bloomberg Finance LP
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU Lesser General Public
......@@ -16,6 +17,7 @@
#
# Authors:
# Tristan Van Berkom <tristan.vanberkom@codethink.co.uk>
# James Ennis <james.ennis@codethink.co.uk>
import cProfile
import pstats
......@@ -46,6 +48,8 @@ class Topics():
LOAD_CONTEXT = 'load-context'
LOAD_PROJECT = 'load-project'
LOAD_PIPELINE = 'load-pipeline'
LOAD_SELECTION = 'load-selection'
SCHEDULER = 'scheduler'
SHOW = 'show'
ARTIFACT_RECEIVE = 'artifact-receive'
ALL = 'all'
......@@ -62,15 +66,24 @@ class Profile():
def end(self):
self.profiler.disable()
dt = datetime.datetime.fromtimestamp(self.start)
timestamp = dt.strftime('%Y%m%dT%H%M%S')
filename = self.key.replace('/', '-')
filename = filename.replace('.', '-')
filename = os.path.join(os.getcwd(), 'profile-' + filename + '.log')
filename = os.path.join(os.getcwd(), 'profile-' + timestamp + '-' + filename)
with open(filename, "a", encoding="utf-8") as f:
time_ = dt.strftime('%Y-%m-%d %H:%M:%S') # Human friendly format
self.__write_log(filename + '.log', time_)
self.__write_binary(filename + '.cprofile')
dt = datetime.datetime.fromtimestamp(self.start)
time_ = dt.strftime('%Y-%m-%d %H:%M:%S')
########################################
# Private Methods #
########################################
def __write_log(self, filename, time_):
with open(filename, "a", encoding="utf-8") as f:
heading = '================================================================\n'
heading += 'Profile for key: {}\n'.format(self.key)
heading += 'Started at: {}\n'.format(time_)
......@@ -81,6 +94,9 @@ class Profile():
ps = pstats.Stats(self.profiler, stream=f).sort_stats('cumulative')
ps.print_stats()
def __write_binary(self, filename):
self.profiler.dump_stats(filename)
# profile_start()
#
......
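The new ``.cprofile`` file written by ``__write_binary()`` is a raw ``cProfile``
dump, so it can be loaded back with the standard ``pstats`` module for interactive
analysis; for example (the filename is illustrative)::

    import pstats

    ps = pstats.Stats('profile-20190101T120000-load-pipeline.cprofile')
    ps.sort_stats('cumulative').print_stats(10)  # top 10 by cumulative time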
......@@ -104,6 +104,9 @@ class Project():
# Absolute path to where elements are loaded from within the project
self.element_path = None
# Default target elements
self._default_targets = None
# ProjectRefs for the main refs and also for junctions
self.refs = ProjectRefs(self.directory, 'project.refs')
self.junction_refs = ProjectRefs(self.directory, 'junction.refs')
......@@ -228,7 +231,7 @@ class Project():
'element-path', 'variables',
'environment', 'environment-nocache',
'split-rules', 'elements', 'plugins',
'aliases', 'name',
'aliases', 'name', 'defaults',
'artifacts', 'options',
'fail-on-overlap', 'shell', 'fatal-warnings',
'ref-storage', 'sandbox', 'mirrors', 'remote-execution',
......@@ -391,6 +394,44 @@ class Project():
# Reset the element loader state
Element._reset_load_state()
# get_default_target()
#
# Attempts to interpret which element the user intended to run a command on.
# This is for commands that only accept a single target element and thus,
# this only uses the workspace element (if invoked from workspace directory)
# and does not use the project default targets.
#
def get_default_target(self):
return self._invoked_from_workspace_element
# get_default_targets()
#
# Attempts to interpret which elements the user intended to run a command on.
# This is for commands that accept multiple target elements.
#
def get_default_targets(self):
# If _invoked_from_workspace_element has a value,
# a workspace element was found before a project config
# Therefore the workspace does not contain a project
if self._invoked_from_workspace_element:
return (self._invoked_from_workspace_element,)
# Default targets from project configuration
if self._default_targets:
return tuple(self._default_targets)
# If default targets are not configured, default to all project elements
default_targets = []
for root, _, files in os.walk(self.element_path):
for file in files:
if file.endswith(".bst"):
rel_dir = os.path.relpath(root, self.element_path)
rel_file = os.path.join(rel_dir, file).lstrip("./")
default_targets.append(rel_file)
return tuple(default_targets)
# _load():
#
# Loads the project configuration file in the project
......@@ -456,6 +497,10 @@ class Project():
self.config.options = OptionPool(self.element_path)
self.first_pass_config.options = OptionPool(self.element_path)
defaults = _yaml.node_get(pre_config_node, Mapping, 'defaults')
_yaml.node_validate(defaults, ['targets'])
self._default_targets = _yaml.node_get(defaults, list, "targets")
# Fatal warnings
self._fatal_warnings = _yaml.node_get(pre_config_node, list, 'fatal-warnings', default_value=[])
......
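For illustration, this is roughly what declaring default targets looks like in a
project configuration, based on the ``defaults``/``targets`` keys validated above
(a sketch; the element name is hypothetical)::

    # project.conf (sketch)
    name: example
    element-path: elements
    defaults:
      targets:
      - hello.bst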
......@@ -28,7 +28,20 @@ class CleanupJob(Job):
self._artifacts = context.artifactcache
def child_process(self):
return self._artifacts.clean()
def progress():
self.send_message('update-cache-size',
self._artifacts.get_cache_size())
return self._artifacts.clean(progress)
def handle_message(self, message_type, message):
# Update the cache size in the main process as we go,
# this provides better feedback in the UI.
if message_type == 'update-cache-size':
self._artifacts.set_cache_size(message)
return True
return False
def parent_complete(self, status, result):
if status == JobStatus.OK:
......
......@@ -58,10 +58,10 @@ class JobStatus():
# Used to distinguish between status messages and return values
class Envelope():
class _Envelope():
def __init__(self, message_type, message):
self._message_type = message_type
self._message = message
self.message_type = message_type
self.message = message
# Process class that doesn't call waitpid on its own.
......@@ -184,6 +184,16 @@ class Job():
self._terminated = True
# get_terminated()
#
# Check if a job has been terminated.
#
# Returns:
# (bool): True in the main process if Job.terminate() was called.
#
def get_terminated(self):
return self._terminated
# terminate_wait()
#
# Wait for terminated jobs to complete
......@@ -265,10 +275,37 @@ class Job():
def set_task_id(self, task_id):
self._task_id = task_id
# send_message()
#
# To be called from inside Job.child_process() implementations
# to send messages to the main process during processing.
#
# These messages will be processed by the class's Job.handle_message()
# implementation.
#
def send_message(self, message_type, message):
self._queue.put(_Envelope(message_type, message))
#######################################################
# Abstract Methods #
#######################################################
# handle_message()
#
# Handle a custom message. This will be called in the main process in
# response to any messages sent to the main process using the
# Job.send_message() API from inside a Job.child_process() implementation
#
# Args:
# message_type (str): A string to identify the message type
# message (any): A simple serializable object
#
# Returns:
# (bool): Should return a truthy value if message_type is handled.
#
def handle_message(self, message_type, message):
return False
# parent_complete()
#
# This will be executed after the job finishes, and is expected to
......@@ -406,7 +443,7 @@ class Job():
elapsed=elapsed, detail=e.detail,
logfile=filename, sandbox=e.sandbox)
self._queue.put(Envelope('child_data', self.child_process_data()))
self._queue.put(_Envelope('child_data', self.child_process_data()))
# Report the exception to the parent (for internal testing purposes)
self._child_send_error(e)
......@@ -432,7 +469,7 @@ class Job():
else:
# No exception occurred in the action
self._queue.put(Envelope('child_data', self.child_process_data()))
self._queue.put(_Envelope('child_data', self.child_process_data()))
self._child_send_result(result)
elapsed = datetime.datetime.now() - starttime
......@@ -459,7 +496,7 @@ class Job():
domain = e.domain
reason = e.reason
envelope = Envelope('error', {
envelope = _Envelope('error', {
'domain': domain,
'reason': reason
})
......@@ -477,7 +514,7 @@ class Job():
#
def _child_send_result(self, result):
if result is not None:
envelope = Envelope('result', result)
envelope = _Envelope('result', result)
self._queue.put(envelope)
# _child_shutdown()
......@@ -514,7 +551,7 @@ class Job():
if message.message_type == MessageType.LOG:
return
self._queue.put(Envelope('message', message))
self._queue.put(_Envelope('message', message))
# _parent_shutdown()
#
......@@ -578,24 +615,28 @@ class Job():
if not self._listening:
return
if envelope._message_type == 'message':
if envelope.message_type == 'message':
# Propagate received messages from children
# back through the context.
self._scheduler.context.message(envelope._message)
elif envelope._message_type == 'error':
self._scheduler.context.message(envelope.message)
elif envelope.message_type == 'error':
# For regression tests only, save the last error domain / reason
# reported from a child task in the main process, this global state
# is currently managed in _exceptions.py
set_last_task_error(envelope._message['domain'],
envelope._message['reason'])
elif envelope._message_type == 'result':
set_last_task_error(envelope.message['domain'],
envelope.message['reason'])
elif envelope.message_type == 'result':
assert self._result is None
self._result = envelope._message
elif envelope._message_type == 'child_data':
self._result = envelope.message
elif envelope.message_type == 'child_data':
# If we retry a job, we assign a new value to this
self.child_data = envelope._message
else:
raise Exception()
self.child_data = envelope.message
# Try Job subclass specific messages now
elif not self.handle_message(envelope.message_type,
envelope.message):
assert 0, "Unhandled message type '{}': {}" \
.format(envelope.message_type, envelope.message)
# _parent_process_queue()
#
......
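A minimal sketch of the new custom-message protocol, tying ``send_message()`` in the
child process to ``handle_message()`` in the main process (the subclass and message
type here are hypothetical; ``CleanupJob`` above is the real in-tree user)::

    class ExampleJob(Job):

        def child_process(self):
            # Runs in the child process; stream progress updates
            # back to the main process as custom envelopes.
            for step in range(3):
                self.send_message('example-progress', step)
            return 'finished'

        def handle_message(self, message_type, message):
            # Runs in the main process for envelopes that are not one of
            # the built-in types ('message', 'error', 'result', 'child_data').
            if message_type == 'example-progress':
                print('step completed:', message)
                return True  # truthy: the message was handled
            return False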
......@@ -70,9 +70,6 @@ class BuildQueue(Queue):
return element._assemble()
def status(self, element):
# state of dependencies may have changed, recalculate element state
element._update_state()
if not element._is_required():
# Artifact is not currently required but it may be requested later.
# Keep it in the queue.
......@@ -100,7 +97,7 @@ class BuildQueue(Queue):
# If the estimated size outgrows the quota, ask the scheduler
# to queue a job to actually check the real cache size.
#
if artifacts.has_quota_exceeded():
if artifacts.full():
self._scheduler.check_cache_size()
def done(self, job, element, result, status):
......
......@@ -44,9 +44,6 @@ class FetchQueue(Queue):
element._fetch()
def status(self, element):
# state of dependencies may have changed, recalculate element state
element._update_state()
if not element._is_required():
# Artifact is not currently required but it may be requested later.
# Keep it in the queue.
......