Compare revisions

Changes are shown as if the source revision was being merged into the target revision.
Commits on Source (33)
Showing changes with 171 additions and 116 deletions
@@ -1259,14 +1259,9 @@ into the ``setup.py``, as such, whenever the frontend command line
 interface changes, the static man pages should be regenerated and
 committed with that.

-To do this, first ensure you have ``click_man`` installed, possibly
-with::
-
-  pip3 install --user click_man
-
-Then, in the toplevel directory of buildstream, run the following::
-
-  python3 setup.py --command-packages=click_man.commands man_pages
+To do this, run the following from the toplevel directory of BuildStream::
+
+  tox -e man

 And commit the result, ensuring that you have added anything in
 the ``man/`` subdirectory, which will be automatically included
......
@@ -20,7 +20,7 @@ buildstream 1.3.1
   an element's sources and generated build scripts you can do the command
   `bst source-checkout --include-build-scripts --tar foo.bst some-file.tar`

-o BREAKING CHANGE: `bst track` and `bst fetch` commands are now osbolete.
+o BREAKING CHANGE: `bst track` and `bst fetch` commands are now obsolete.
   Their functionality is provided by `bst source track` and
   `bst source fetch` respectively.

@@ -50,6 +50,11 @@ buildstream 1.3.1
   an error message and a hint instead, to avoid bothering folks that just
   made a mistake.

+o BREAKING CHANGE: The unconditional 'Are you sure?' prompts have been
+  removed. These would always ask you if you were sure when running
+  'bst workspace close --remove-dir' or 'bst workspace reset'. They got in
+  the way too often.
+
 o Failed builds are included in the cache as well.
   `bst checkout` will provide anything in `%{install-root}`.
   A build including cached fails will cause any dependant elements

@@ -87,12 +92,6 @@ buildstream 1.3.1
   instead of just a specially-formatted build-root with a `root` and `scratch`
   subdirectory.

-o The buildstream.conf file learned new
-  'prompt.really-workspace-close-remove-dir' and
-  'prompt.really-workspace-reset-hard' options. These allow users to suppress
-  certain confirmation prompts, e.g. double-checking that the user meant to
-  run the command as typed.
-
 o Due to the element `build tree` being cached in the respective artifact their
   size in some cases has significantly increased. In *most* cases the build trees
   are not utilised when building targets, as such by default bst 'pull' & 'build'
......
@@ -98,6 +98,7 @@ class ArtifactCache():
         self._cache_size = None               # The current cache size, sometimes it's an estimate
         self._cache_quota = None              # The cache quota
         self._cache_quota_original = None     # The cache quota as specified by the user, in bytes
+        self._cache_quota_headroom = None     # The headroom in bytes before reaching the quota or full disk
         self._cache_lower_threshold = None    # The target cache size for a cleanup
         self._remotes_setup = False           # Check to prevent double-setup of remotes

@@ -314,7 +315,7 @@ class ArtifactCache():
                               len(self._required_elements),
                               (context.config_origin or default_conf)))

-            if self.has_quota_exceeded():
+            if self.full():
                 raise ArtifactError("Cache too full. Aborting.",
                                     detail=detail,
                                     reason="cache-too-full")

@@ -431,15 +432,25 @@ class ArtifactCache():
         self._cache_size = cache_size
         self._write_cache_size(self._cache_size)

-    # has_quota_exceeded()
+    # full()
     #
-    # Checks if the current artifact cache size exceeds the quota.
+    # Checks if the artifact cache is full, either
+    # because the user configured quota has been exceeded
+    # or because the underlying disk is almost full.
     #
     # Returns:
-    #     (bool): True of the quota is exceeded
+    #     (bool): True if the artifact cache is full
     #
-    def has_quota_exceeded(self):
-        return self.get_cache_size() > self._cache_quota
+    def full(self):
+
+        if self.get_cache_size() > self._cache_quota:
+            return True
+
+        _, volume_avail = self._get_cache_volume_size()
+        if volume_avail < self._cache_quota_headroom:
+            return True
+
+        return False

     # preflight():
     #

@@ -936,9 +947,9 @@ class ArtifactCache():
         # is taken from the user requested cache_quota.
         #
         if 'BST_TEST_SUITE' in os.environ:
-            headroom = 0
+            self._cache_quota_headroom = 0
         else:
-            headroom = 2e9
+            self._cache_quota_headroom = 2e9

         try:
             cache_quota = utils._parse_size(self.context.config_cache_quota,

@@ -960,27 +971,39 @@ class ArtifactCache():
         #
         if cache_quota is None:  # Infinity, set to max system storage
             cache_quota = cache_size + available_space
-        if cache_quota < headroom:  # Check minimum
+        if cache_quota < self._cache_quota_headroom:  # Check minimum
             raise LoadError(LoadErrorReason.INVALID_DATA,
                             "Invalid cache quota ({}): ".format(utils._pretty_size(cache_quota)) +
                             "BuildStream requires a minimum cache quota of 2G.")
-        elif cache_quota > cache_size + available_space:  # Check maximum
-            if '%' in self.context.config_cache_quota:
-                available = (available_space / total_size) * 100
-                available = '{}% of total disk space'.format(round(available, 1))
-            else:
-                available = utils._pretty_size(available_space)
-
+        elif cache_quota > total_size:
+            # A quota greater than the total disk size is certainly an error
             raise ArtifactError("Your system does not have enough available " +
                                 "space to support the cache quota specified.",
                                 detail=("You have specified a quota of {quota} total disk space.\n" +
                                         "The filesystem containing {local_cache_path} only " +
-                                        "has {available_size} available.")
+                                        "has {total_size} total disk space.")
                                 .format(
                                     quota=self.context.config_cache_quota,
                                     local_cache_path=self.context.artifactdir,
-                                    available_size=available),
+                                    total_size=utils._pretty_size(total_size)),
                                 reason='insufficient-storage-for-quota')
+        elif cache_quota > cache_size + available_space:
+            # The quota does not fit in the available space, this is a warning
+            if '%' in self.context.config_cache_quota:
+                available = (available_space / total_size) * 100
+                available = '{}% of total disk space'.format(round(available, 1))
+            else:
+                available = utils._pretty_size(available_space)
+
+            self._message(MessageType.WARN,
+                          "Your system does not have enough available " +
+                          "space to support the cache quota specified.",
+                          detail=("You have specified a quota of {quota} total disk space.\n" +
+                                  "The filesystem containing {local_cache_path} only " +
+                                  "has {available_size} available.")
+                          .format(quota=self.context.config_cache_quota,
+                                  local_cache_path=self.context.artifactdir,
+                                  available_size=available))

         # Place a slight headroom (2e9 (2GB) on the cache_quota) into
         # cache_quota to try and avoid exceptions.

@@ -990,7 +1013,7 @@ class ArtifactCache():
         # already really fuzzy.
         #
         self._cache_quota_original = cache_quota
-        self._cache_quota = cache_quota - headroom
+        self._cache_quota = cache_quota - self._cache_quota_headroom
         self._cache_lower_threshold = self._cache_quota / 2

     # _get_cache_volume_size()
......
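The new full() check above treats the cache as full when either the configured quota is exceeded or the volume's free space drops below the reserved headroom. A minimal standalone sketch of the same check, using only the standard library (the function name and parameters here are illustrative, not the real ArtifactCache API)::

    import shutil

    # Illustrative only: mirrors the full() logic above against a plain path.
    # The cache is "full" if its measured size exceeds the configured quota,
    # or if the volume holding it has less free space than the headroom.
    def cache_is_full(cache_size, cache_quota, cache_path, headroom=2e9):
        if cache_size > cache_quota:
            return True
        usage = shutil.disk_usage(cache_path)  # named tuple: total, used, free
        return usage.free < headroom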
@@ -324,7 +324,7 @@ class _ContentAddressableStorageServicer(remote_execution_pb2_grpc.ContentAddres
             blob_response.digest.size_bytes = digest.size_bytes

             if len(blob_request.data) != digest.size_bytes:
-                blob_response.status.code = grpc.StatusCode.FAILED_PRECONDITION
+                blob_response.status.code = code_pb2.FAILED_PRECONDITION
                 continue

             try:

@@ -335,10 +335,10 @@ class _ContentAddressableStorageServicer(remote_execution_pb2_grpc.ContentAddres
                     out.flush()
                     server_digest = self.cas.add_object(path=out.name)

                     if server_digest.hash != digest.hash:
-                        blob_response.status.code = grpc.StatusCode.FAILED_PRECONDITION
+                        blob_response.status.code = code_pb2.FAILED_PRECONDITION

             except ArtifactTooLargeException:
-                blob_response.status.code = grpc.StatusCode.RESOURCE_EXHAUSTED
+                blob_response.status.code = code_pb2.RESOURCE_EXHAUSTED

         return response
......
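The switch from grpc.StatusCode to code_pb2 above matters because the per-blob status field in the batch response is a protobuf google.rpc.Status, whose code is a plain integer from google.rpc.code_pb2 rather than a Python gRPC enum. A small sketch of the distinction, assuming the standard googleapis-common-protos package is installed::

    from google.rpc import code_pb2, status_pb2

    # google.rpc.Status.code expects an integer from code_pb2,
    # e.g. FAILED_PRECONDITION == 9 and RESOURCE_EXHAUSTED == 8.
    # A grpc.StatusCode enum member is not a valid value for this protobuf
    # field, which is what the change above corrects.
    status = status_pb2.Status()
    status.code = code_pb2.FAILED_PRECONDITION
    print(status.code)  # 9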
@@ -121,18 +121,10 @@ class Context():
         # Whether or not to attempt to pull build trees globally
         self.pull_buildtrees = None

-        # Boolean, whether we double-check with the user that they meant to
-        # remove a workspace directory.
-        self.prompt_workspace_close_remove_dir = None
-
         # Boolean, whether we double-check with the user that they meant to
         # close the workspace when they're using it to access the project.
         self.prompt_workspace_close_project_inaccessible = None

-        # Boolean, whether we double-check with the user that they meant to do
-        # a hard reset of a workspace, potentially losing changes.
-        self.prompt_workspace_reset_hard = None
-
         # Whether elements must be rebuilt when their dependencies have changed
         self._strict_build_plan = None

@@ -260,16 +252,10 @@ class Context():
         prompt = _yaml.node_get(
             defaults, Mapping, 'prompt')
         _yaml.node_validate(prompt, [
-            'really-workspace-close-remove-dir',
             'really-workspace-close-project-inaccessible',
-            'really-workspace-reset-hard',
         ])

-        self.prompt_workspace_close_remove_dir = _node_get_option_str(
-            prompt, 'really-workspace-close-remove-dir', ['ask', 'yes']) == 'ask'
         self.prompt_workspace_close_project_inaccessible = _node_get_option_str(
             prompt, 'really-workspace-close-project-inaccessible', ['ask', 'yes']) == 'ask'
-        self.prompt_workspace_reset_hard = _node_get_option_str(
-            prompt, 'really-workspace-reset-hard', ['ask', 'yes']) == 'ask'

         # Load per-projects overrides
         self._project_overrides = _yaml.node_get(defaults, Mapping, 'projects', default_value={})
......
@@ -526,7 +526,7 @@ def shell(app, element, sysroot, mount, isolate, build_, cli_buildtree, command)
     else:
         scope = Scope.RUN

-    use_buildtree = False
+    use_buildtree = None

     with app.initialized():
         if not element:

@@ -534,7 +534,8 @@ def shell(app, element, sysroot, mount, isolate, build_, cli_buildtree, command)
         if not element:
             raise AppError('Missing argument "ELEMENT".')

-        dependencies = app.stream.load_selection((element,), selection=PipelineSelection.NONE)
+        dependencies = app.stream.load_selection((element,), selection=PipelineSelection.NONE,
+                                                 use_artifact_config=True)
         element = dependencies[0]
         prompt = app.shell_prompt(element)
         mounts = [

@@ -543,20 +544,31 @@ def shell(app, element, sysroot, mount, isolate, build_, cli_buildtree, command)
         ]

         cached = element._cached_buildtree()
-        if cli_buildtree == "always":
-            if cached:
-                use_buildtree = True
-            else:
-                raise AppError("No buildtree is cached but the use buildtree option was specified")
-        elif cli_buildtree == "never":
-            pass
-        elif cli_buildtree == "try":
-            use_buildtree = cached
+        if cli_buildtree in ("always", "try"):
+            use_buildtree = cli_buildtree
+            if not cached and use_buildtree == "always":
+                click.echo("WARNING: buildtree is not cached locally, will attempt to pull from available remotes",
+                           err=True)
         else:
-            if app.interactive and cached:
-                use_buildtree = bool(click.confirm('Do you want to use the cached buildtree?'))
+            # If the value has defaulted to ask and in non interactive mode, don't consider the buildtree, this
+            # being the default behaviour of the command
+            if app.interactive and cli_buildtree == "ask":
+                if cached and bool(click.confirm('Do you want to use the cached buildtree?')):
+                    use_buildtree = "always"
+                elif not cached:
+                    try:
+                        choice = click.prompt("Do you want to pull & use a cached buildtree?",
+                                              type=click.Choice(['try', 'always', 'never']),
+                                              err=True, show_choices=True)
+                    except click.Abort:
+                        click.echo('Aborting', err=True)
+                        sys.exit(-1)
+
+                    if choice != "never":
+                        use_buildtree = choice

         if use_buildtree and not element._cached_success():
-            click.echo("Warning: using a buildtree from a failed build.")
+            click.echo("WARNING: using a buildtree from a failed build.", err=True)

         try:
             exitcode = app.stream.shell(element, scope, prompt,

@@ -829,11 +841,6 @@ def workspace_close(app, remove_dir, all_, elements):
     if nonexisting:
         raise AppError("Workspace does not exist", detail="\n".join(nonexisting))

-    if app.interactive and remove_dir and app.context.prompt_workspace_close_remove_dir:
-        if not click.confirm('This will remove all your changes, are you sure?'):
-            click.echo('Aborting', err=True)
-            sys.exit(-1)
-
     for element_name in elements:
         app.stream.workspace_close(element_name, remove_dir=remove_dir)

@@ -867,11 +874,6 @@ def workspace_reset(app, soft, track_, all_, elements):
     if all_ and not app.stream.workspace_exists():
         raise AppError("No open workspaces to reset")

-    if app.interactive and not soft and app.context.prompt_workspace_reset_hard:
-        if not click.confirm('This will remove all your changes, are you sure?'):
-            click.echo('Aborting', err=True)
-            sys.exit(-1)
-
     if all_:
         elements = tuple(element_name for element_name, _ in app.context.get_workspaces().list())
......
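The reworked option handling above boils down to a small mapping from the --use-buildtree choice and the local cache state to the value handed on to the stream. A rough, illustration-only equivalent of the non-prompting paths (a plain function, not the real cli helper)::

    # Illustrative only: summarises the decision above for the non-prompting paths.
    def resolve_buildtree_option(cli_buildtree, cached, interactive):
        """Return 'always', 'try' or None, roughly mirroring the new cli logic."""
        if cli_buildtree in ("always", "try"):
            return cli_buildtree        # explicit request; a warning is printed if not cached
        if cli_buildtree == "ask" and interactive and cached:
            return "always"             # assumes the user confirmed the cached buildtree
        return None                     # default: shell without a buildtree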
@@ -605,7 +605,7 @@ class _GitSourceBase(Source):
             detail = "The ref provided for the element does not exist locally " + \
                      "in the provided track branch / tag '{}'.\n".format(self.tracking) + \
                      "You may wish to track the element to update the ref from '{}' ".format(self.tracking) + \
-                     "with `bst track`,\n" + \
+                     "with `bst source track`,\n" + \
                      "or examine the upstream at '{}' for the specific ref.".format(self.mirror.url)

             self.warn("{}: expected ref '{}' was not found in given track '{}' for staged repository: '{}'\n"
......
@@ -557,7 +557,7 @@ class Loader():
                 ticker(filename, 'Fetching subproject from {} source'.format(source.get_kind()))
                 source._fetch(sources[0:idx])
             else:
-                detail = "Try fetching the project with `bst fetch {}`".format(filename)
+                detail = "Try fetching the project with `bst source fetch {}`".format(filename)
                 raise LoadError(LoadErrorReason.SUBPROJECT_FETCH_NEEDED,
                                 "Subproject fetch needed for junction: {}".format(filename),
                                 detail=detail)

@@ -565,7 +565,7 @@ class Loader():
         # Handle the case where a subproject has no ref
         #
         elif source.get_consistency() == Consistency.INCONSISTENT:
-            detail = "Try tracking the junction element with `bst track {}`".format(filename)
+            detail = "Try tracking the junction element with `bst source track {}`".format(filename)
             raise LoadError(LoadErrorReason.SUBPROJECT_INCONSISTENT,
                             "Subproject has no ref for junction: {}".format(filename),
                             detail=detail)
......
@@ -373,7 +373,7 @@ class Pipeline():
                 if source._get_consistency() == Consistency.INCONSISTENT:
                     detail += " {} is missing ref\n".format(source)
             detail += '\n'
-            detail += "Try tracking these elements first with `bst track`\n"
+            detail += "Try tracking these elements first with `bst source track`\n"

             raise PipelineError("Inconsistent pipeline", detail=detail, reason="inconsistent-pipeline")

@@ -406,7 +406,7 @@ class Pipeline():
                 if source._get_consistency() != Consistency.CACHED:
                     detail += " {}\n".format(source)
             detail += '\n'
-            detail += "Try fetching these elements first with `bst fetch`,\n" + \
+            detail += "Try fetching these elements first with `bst source fetch`,\n" + \
                       "or run this command with `--fetch` option\n"

             raise PipelineError("Uncached sources", detail=detail, reason="uncached-sources")
......
 #
 #  Copyright (C) 2017 Codethink Limited
+#  Copyright (C) 2019 Bloomberg Finance LP
 #
 #  This program is free software; you can redistribute it and/or
 #  modify it under the terms of the GNU Lesser General Public

@@ -16,6 +17,7 @@
 #
 #  Authors:
 #        Tristan Van Berkom <tristan.vanberkom@codethink.co.uk>
+#        James Ennis <james.ennis@codethink.co.uk>

 import cProfile
 import pstats

@@ -46,6 +48,8 @@ class Topics():
     LOAD_CONTEXT = 'load-context'
     LOAD_PROJECT = 'load-project'
     LOAD_PIPELINE = 'load-pipeline'
+    LOAD_SELECTION = 'load-selection'
+    SCHEDULER = 'scheduler'
     SHOW = 'show'
     ARTIFACT_RECEIVE = 'artifact-receive'
     ALL = 'all'
......
@@ -100,7 +100,7 @@ class BuildQueue(Queue):
         # If the estimated size outgrows the quota, ask the scheduler
         # to queue a job to actually check the real cache size.
         #
-        if artifacts.has_quota_exceeded():
+        if artifacts.full():
             self._scheduler.check_cache_size()

     def done(self, job, element, result, status):
......
@@ -29,6 +29,7 @@ from contextlib import contextmanager
 # Local imports
 from .resources import Resources, ResourceType
 from .jobs import JobStatus, CacheSizeJob, CleanupJob
+from .._profile import Topics, profile_start, profile_end

 # A decent return code for Scheduler.run()

@@ -154,11 +155,16 @@ class Scheduler():
         # Check if we need to start with some cache maintenance
         self._check_cache_management()

+        # Start the profiler
+        profile_start(Topics.SCHEDULER, "_".join(queue.action_name for queue in self.queues))
+
         # Run the queues
         self._sched()
         self.loop.run_forever()
         self.loop.close()

+        profile_end(Topics.SCHEDULER, "_".join(queue.action_name for queue in self.queues))
+
         # Stop handling unix signals
         self._disconnect_signals()

@@ -297,7 +303,7 @@ class Scheduler():
         # starts while we are checking the cache.
         #
         artifacts = self.context.artifactcache
-        if artifacts.has_quota_exceeded():
+        if artifacts.full():
             self._sched_cache_size_job(exclusive=True)

     # _spawn_job()

@@ -308,10 +314,10 @@ class Scheduler():
     #    job (Job): The job to spawn
     #
     def _spawn_job(self, job):
-        job.spawn()
         self._active_jobs.append(job)
         if self._job_start_callback:
             self._job_start_callback(job)
+        job.spawn()

     # Callback for the cache size job
     def _cache_size_job_complete(self, status, cache_size):

@@ -332,7 +338,7 @@ class Scheduler():
         context = self.context
         artifacts = context.artifactcache

-        if artifacts.has_quota_exceeded():
+        if artifacts.full():
             self._cleanup_scheduled = True

     # Callback for the cleanup job
......
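The profile_start()/profile_end() pair added around the scheduler run brackets the whole queue-processing loop under a SCHEDULER topic keyed by the queue action names. Conceptually this is the usual cProfile bracketing pattern; a self-contained sketch of that pattern (not BuildStream's _profile API) looks like this::

    import cProfile
    import pstats

    # Conceptual sketch only: profile a region of work and dump the stats
    # under a descriptive key, similar in spirit to profile_start()/profile_end().
    def profiled(key, body):
        profiler = cProfile.Profile()
        profiler.enable()
        try:
            return body()
        finally:
            profiler.disable()
            pstats.Stats(profiler).dump_stats('{}.prof'.format(key))

    profiled('scheduler_fetch_build', lambda: sum(i * i for i in range(100000)))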
@@ -32,6 +32,7 @@ from ._exceptions import StreamError, ImplError, BstError, set_last_task_error
 from ._message import Message, MessageType
 from ._scheduler import Scheduler, SchedStatus, TrackQueue, FetchQueue, BuildQueue, PullQueue, PushQueue
 from ._pipeline import Pipeline, PipelineSelection
+from ._profile import Topics, profile_start, profile_end
 from . import utils, _yaml, _site
 from . import Scope, Consistency

@@ -100,16 +101,25 @@ class Stream():
     #    targets (list of str): Targets to pull
     #    selection (PipelineSelection): The selection mode for the specified targets
     #    except_targets (list of str): Specified targets to except from fetching
+    #    use_artifact_config (bool): If artifact remote config should be loaded
     #
     # Returns:
     #    (list of Element): The selected elements
     def load_selection(self, targets, *,
                        selection=PipelineSelection.NONE,
-                       except_targets=()):
+                       except_targets=(),
+                       use_artifact_config=False):
+
+        profile_start(Topics.LOAD_SELECTION, "_".join(t.replace(os.sep, '-') for t in targets))
+
         elements, _ = self._load(targets, (),
                                  selection=selection,
                                  except_targets=except_targets,
-                                 fetch_subprojects=False)
+                                 fetch_subprojects=False,
+                                 use_artifact_config=use_artifact_config)
+
+        profile_end(Topics.LOAD_SELECTION, "_".join(t.replace(os.sep, '-') for t in targets))
+
         return elements

     # shell()

@@ -124,7 +134,7 @@ class Stream():
     #    mounts (list of HostMount): Additional directories to mount into the sandbox
     #    isolate (bool): Whether to isolate the environment like we do in builds
     #    command (list): An argv to launch in the sandbox, or None
-    #    usebuildtree (bool): Wheather to use a buildtree as the source.
+    #    usebuildtree (str): Whether to use a buildtree as the source, given cli option
     #
     # Returns:
     #    (int): The exit code of the launched shell

@@ -134,7 +144,7 @@ class Stream():
               mounts=None,
               isolate=False,
               command=None,
-              usebuildtree=False):
+              usebuildtree=None):

         # Assert we have everything we need built, unless the directory is specified
         # in which case we just blindly trust the directory, using the element

@@ -149,8 +159,31 @@ class Stream():
                 raise StreamError("Elements need to be built or downloaded before staging a shell environment",
                                   detail="\n".join(missing_deps))

+        buildtree = False
+        # Check if we require a pull queue attempt, with given artifact state and context
+        if usebuildtree:
+            if not element._cached_buildtree():
+                require_buildtree = self._buildtree_pull_required([element])
+                # Attempt a pull queue for the given element if remote and context allow it
+                if require_buildtree:
+                    self._message(MessageType.INFO, "Attempting to fetch missing artifact buildtree")
+                    self._add_queue(PullQueue(self._scheduler))
+                    self._enqueue_plan(require_buildtree)
+                    self._run()
+                    # Now check if the buildtree was successfully fetched
+                    if element._cached_buildtree():
+                        buildtree = True
+
+                if not buildtree:
+                    if usebuildtree == "always":
+                        raise StreamError("Buildtree is not cached locally or in available remotes")
+                    else:
+                        self._message(MessageType.INFO, """Buildtree is not cached locally or in available remotes,
+                                                        shell will be loaded without it""")
+            else:
+                buildtree = True
+
         return element._shell(scope, directory, mounts=mounts, isolate=isolate, prompt=prompt, command=command,
-                              usebuildtree=usebuildtree)
+                              usebuildtree=buildtree)

     # build()
     #
......
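The new block in Stream.shell() above makes exactly one pull attempt for a missing buildtree before deciding whether the shell can be staged from it: a locally cached buildtree is used directly, otherwise a pull queue is run, and only the 'always' mode turns a still-missing buildtree into an error. A plain-function sketch of that decision (illustrative only, not the Stream API)::

    # Illustrative only: pull_once is any callable that attempts one pull and
    # reports whether the buildtree is cached afterwards.
    def resolve_buildtree(mode, cached_locally, pull_once):
        if not mode:                 # no buildtree requested
            return False
        if cached_locally:
            return True
        if pull_once():              # one pull attempt from the configured remotes
            return True
        if mode == "always":
            raise RuntimeError("Buildtree is not cached locally or in available remotes")
        return False                 # "try": fall back to a shell without the buildtree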
@@ -112,14 +112,6 @@ logging:
 #
 prompt:

-  # Whether to really proceed with 'bst workspace close --remove-dir' removing
-  # a workspace directory, potentially losing changes.
-  #
-  #  ask - Ask the user if they are sure.
-  #  yes - Always remove, without asking.
-  #
-  really-workspace-close-remove-dir: ask
-
   # Whether to really proceed with 'bst workspace close' when doing so would
   # stop them from running bst commands in this workspace.
   #

@@ -127,11 +119,3 @@ prompt:
   #  yes - Always close, without asking.
   #
   really-workspace-close-project-inaccessible: ask
-
-  # Whether to really proceed with 'bst workspace reset' doing a hard reset of
-  # a workspace, potentially losing changes.
-  #
-  #  ask - Ask the user if they are sure.
-  #  yes - Always hard reset, without asking.
-  #
-  really-workspace-reset-hard: ask
@@ -2510,9 +2510,30 @@ class Element(Plugin):
             exclude = []

         # Ignore domains that dont apply to this element
-        #
-        include = [domain for domain in include if domain in element_domains]
-        exclude = [domain for domain in exclude if domain in element_domains]
+        # - In the case of the filter element, we can explicitly
+        #   declare which domains we want to include/exclude.
+        unfound_domains = []
+        include_domains = []
+        for domain in include:
+            if domain not in element_domains:
+                unfound_domains.append(domain)
+            else:
+                include_domains.append(domain)
+
+        exclude_domains = []
+        for domain in exclude:
+            if domain not in element_domains:
+                unfound_domains.append(domain)
+            else:
+                exclude_domains.append(domain)
+
+        # Print a warning to the user if we have unfound domains
+        if unfound_domains:
+            warning_detail = "Split-domains: {} not provided by '{}'.\n" \
+                             "Known split domains: {}".format(unfound_domains, self.name, element_domains)
+            self.warn("Split domains: {} not found.".format(unfound_domains),
+                      detail=warning_detail,
+                      warning_token=CoreWarnings.DOMAIN_NOT_FOUND)

         # FIXME: Instead of listing the paths in an extracted artifact,
         # we should be using a manifest loaded from the artifact

@@ -2531,9 +2552,9 @@ class Element(Plugin):
             for domain in element_domains:
                 if self.__splits[domain].match(filename):
                     claimed_file = True
-                    if domain in include:
+                    if domain in include_domains:
                         include_file = True
-                    if domain in exclude:
+                    if domain in exclude_domains:
                         exclude_file = True

             if orphans and not claimed_file:
......
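The loop added above partitions the requested split domains into those the element actually provides and those it does not, so that unknown names trigger a warning instead of being silently dropped as the old list comprehensions did. A standalone sketch of that partitioning (the helper name and sample data are made up for illustration)::

    # Illustrative only: split requested domains into known and unknown ones.
    def partition_domains(requested, element_domains):
        found, unfound = [], []
        for domain in requested:
            (found if domain in element_domains else unfound).append(domain)
        return found, unfound

    include_domains, bad = partition_domains(["runtime", "debug"], ["runtime", "devel"])
    # include_domains == ["runtime"], bad == ["debug"]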
@@ -47,6 +47,8 @@ from buildstream import Element, ElementError, Scope
 class FilterElement(Element):
     # pylint: disable=attribute-defined-outside-init

+    BST_ARTIFACT_VERSION = 1
+
     # The filter element's output is its dependencies, so
     # we must rebuild if the dependencies change even when
     # not in strict build plans.

@@ -102,7 +104,7 @@ class FilterElement(Element):
     def assemble(self, sandbox):
         with self.timed_activity("Staging artifact", silent_nested=True):
-            for dep in self.dependencies(Scope.BUILD):
+            for dep in self.dependencies(Scope.BUILD, recurse=False):
                 dep.stage_artifact(sandbox, include=self.include,
                                    exclude=self.exclude, orphans=self.include_orphans)
         return ""
......
@@ -79,7 +79,7 @@ depend on a junction element itself.
 Therefore, if you require the most up-to-date version of a subproject,
 you must explicitly track the junction element by invoking:
-`bst track JUNCTION_ELEMENT`.
+`bst source track JUNCTION_ELEMENT`.

 Furthermore, elements within the subproject are also not tracked by default.
 For this, we must specify the `--track-cross-junctions` option. This option

@@ -93,7 +93,7 @@ cached yet. However, they can be fetched explicitly:

 .. code::

-   bst fetch junction.bst
+   bst source fetch junction.bst

 Other commands such as ``bst build`` implicitly fetch junction sources.

@@ -146,7 +146,7 @@ class JunctionElement(Element):
     def get_unique_key(self):
         # Junctions do not produce artifacts. get_unique_key() implementation
-        # is still required for `bst fetch`.
+        # is still required for `bst source fetch`.
         return 1

     def configure_sandbox(self, sandbox):
......
@@ -46,7 +46,7 @@ bzr - stage files from a bazaar repository
    # but revisions on a branch are of the form
    # <revision-branched-from>.<branch-number>.<revision-since-branching>
    # e.g. 6622.1.6.
-   # The ref must be specified to build, and 'bst track' will update the
+   # The ref must be specified to build, and 'bst source track' will update the
    # revision number to the one on the tip of the branch specified in 'track'.
    ref: 6622
......
@@ -34,7 +34,7 @@ deb - stage files from .deb packages
    kind: deb

    # Specify the deb url. Using an alias defined in your project
-   # configuration is encouraged. 'bst track' will update the
+   # configuration is encouraged. 'bst source track' will update the
    # sha256sum in 'ref' to the downloaded file's sha256sum.
    url: upstream:foo.deb
......
@@ -112,7 +112,7 @@ git - stage files from a git repository
    #   o Enable `track-tags` feature
    #   o Set the `track` parameter to the desired commit sha which
    #     the current `ref` points to
-   #   o Run `bst track` for these elements, this will result in
+   #   o Run `bst source track` for these elements, this will result in
    #     populating the `tags` portion of the refs without changing
    #     the refs
    #   o Restore the `track` parameter to the branches which you have
......