
Compare revisions

Commits on Source (16)
Showing with 840 additions and 125 deletions
......@@ -97,7 +97,13 @@ a new merge request. You can also `create a merge request for an existing branch
You may open merge requests for the branches you create before you are ready
to have them reviewed and considered for inclusion if you like. Until your merge
request is ready for review, the merge request title must be prefixed with the
``WIP:`` identifier.
``WIP:`` identifier. GitLab `treats this specially
<https://docs.gitlab.com/ee/user/project/merge_requests/work_in_progress_merge_requests.html>`_,
which helps reviewers.
Consider marking a merge request as WIP again if you are taking a while to
address a review point. This signals that the next action is on you, and it
won't appear in a reviewer's search for non-WIP merge requests to review.
Organized commits
......@@ -122,6 +128,12 @@ If a commit in your branch modifies behavior such that a test must also
be changed to match the new behavior, then the tests should be updated
with the same commit, so that every commit passes its own tests.
These principles apply whenever a branch is non-WIP. So, for example, don't push
'fixup!' commits when addressing review comments; instead, amend the commits
directly before pushing. GitLab has `good support
<https://docs.gitlab.com/ee/user/project/merge_requests/versions.html>`_ for
diffing between pushes, so 'fixup!' commits are not necessary for reviewers.
Commit messages
~~~~~~~~~~~~~~~
......@@ -144,6 +156,16 @@ number must be referenced in the commit message.
Fixes #123
Note that the 'why' of a change is as important as the 'what'.
When reviewing this, folks can suggest better alternatives when they know the
'why'. Perhaps there are other ways to avoid an error when things are not
frobnicated.
When folks modify this code, there may be uncertainty around whether the foos
should always be frobnicated. The comments, the commit message, and issue #123
should shed some light on that.
In the case that you have a commit which necessarily modifies multiple
components, then the summary line should still mention generally what
changed (if possible), followed by a colon and a brief summary.
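For example, a hypothetical summary line for a commit touching two components might
look like the following (the file names and wording here are purely illustrative)::

    element.py/scriptelement.py: Avoid marking the sandbox root twice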
......
......@@ -937,15 +937,22 @@ class ArtifactCache():
"Invalid cache quota ({}): ".format(utils._pretty_size(cache_quota)) +
"BuildStream requires a minimum cache quota of 2G.")
elif cache_quota > cache_size + available_space: # Check maximum
if '%' in self.context.config_cache_quota:
available = (available_space / (stat.f_blocks * stat.f_bsize)) * 100
available = '{}% of total disk space'.format(round(available, 1))
else:
available = utils._pretty_size(available_space)
raise LoadError(LoadErrorReason.INVALID_DATA,
("Your system does not have enough available " +
"space to support the cache quota specified.\n" +
"You currently have:\n" +
"- {used} of cache in use at {local_cache_path}\n" +
"- {available} of available system storage").format(
used=utils._pretty_size(cache_size),
local_cache_path=self.context.artifactdir,
available=utils._pretty_size(available_space)))
"\nYou have specified a quota of {quota} total disk space.\n" +
"- The filesystem containing {local_cache_path} only " +
"has: {available_size} available.")
.format(
quota=self.context.config_cache_quota,
local_cache_path=self.context.artifactdir,
available_size=available))
# Place a slight headroom (2e9 (2GB) on the cache_quota) into
# cache_quota to try and avoid exceptions.
......
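As an aside on the hunk above: the percentage reported in the new error message is
derived from ``os.statvfs``. Below is a minimal standalone sketch of that calculation,
using an illustrative path and variable names that are not BuildStream API:

.. code:: python

   # Illustrative only: express the space still available as a percentage
   # of the whole filesystem, as the error message above does.
   import os

   stat = os.statvfs('/')
   total_size = stat.f_blocks * stat.f_bsize        # size of the filesystem
   available_space = stat.f_bavail * stat.f_bsize   # space still available
   print('{}% of total disk space'.format(
       round((available_space / total_size) * 100, 1)))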
......@@ -111,6 +111,7 @@ Class Reference
import os
import subprocess
import sys
from contextlib import contextmanager
from weakref import WeakValueDictionary
......@@ -190,7 +191,7 @@ class Plugin():
# Don't send anything through the Message() pipeline at destruction time,
# any subsequent lookup of plugin by unique id would raise KeyError.
if self.__context.log_debug:
print("DEBUG: Destroyed: {}".format(self))
sys.stderr.write("DEBUG: Destroyed: {}\n".format(self))
def __str__(self):
return "{kind} {typetag} at {provenance}".format(
......
......@@ -202,7 +202,7 @@ class ScriptElement(Element):
sandbox.set_environment(self.get_environment())
# Tell the sandbox to mount the install root
directories = {'/': False}
directories = {self.__install_root: False}
# Mark the artifact directories in the layout
for item in self.__layout:
......@@ -211,7 +211,10 @@ class ScriptElement(Element):
directories[destination] = item['element'] or was_artifact
for directory, artifact in directories.items():
sandbox.mark_directory(directory, artifact=artifact)
# Root does not need to be marked as it is always mounted
# with artifact (unless explicitly marked non-artifact)
if directory != '/':
sandbox.mark_directory(directory, artifact=artifact)
def stage(self, sandbox):
......
......@@ -973,32 +973,34 @@ class Source(Plugin):
# the items of source_fetchers, if it happens to be a generator.
#
source_fetchers = iter(source_fetchers)
try:
while True:
while True:
with context.silence():
with context.silence():
try:
fetcher = next(source_fetchers)
alias = fetcher._get_alias()
for uri in project.get_alias_uris(alias, first_pass=self.__first_pass):
try:
fetcher.fetch(uri)
# FIXME: Need to consider temporary vs. permanent failures,
# and how this works with retries.
except BstError as e:
last_error = e
continue
# No error, we're done with this fetcher
except StopIteration:
# as per PEP479, we are not allowed to let StopIteration
# be thrown from a context manager.
# Catching it here and breaking instead.
break
else:
# No break occurred, raise the last detected error
raise last_error
alias = fetcher._get_alias()
for uri in project.get_alias_uris(alias, first_pass=self.__first_pass):
try:
fetcher.fetch(uri)
# FIXME: Need to consider temporary vs. permanent failures,
# and how this works with retries.
except BstError as e:
last_error = e
continue
except StopIteration:
pass
# No error, we're done with this fetcher
break
else:
# No break occurred, raise the last detected error
raise last_error
# Default codepath is to reinstantiate the Source
#
......
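The ``StopIteration`` handling in the hunk above is motivated by PEP 479: a
``StopIteration`` that escapes a generator body is re-raised as ``RuntimeError``,
so ``next()`` has to be wrapped explicitly. A small self-contained illustration
(hypothetical names, not BuildStream code):

.. code:: python

   def naive(items):
       it = iter(items)
       while True:
           yield next(it)        # StopIteration escapes -> RuntimeError (PEP 479)

   def careful(items):
       it = iter(items)
       while True:
           try:
               item = next(it)
           except StopIteration:
               break             # end the loop ourselves instead
           yield item

   print(list(careful([1, 2, 3])))   # [1, 2, 3]
   try:
       list(naive([1, 2, 3]))
   except RuntimeError as e:
       print('PEP 479:', e)          # generator raised StopIteration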
......@@ -147,6 +147,44 @@ The default mirror is defined by its name, e.g.
``--default-mirror`` command-line option.
Local cache expiry
~~~~~~~~~~~~~~~~~~
BuildStream locally caches artifacts, build trees, log files and sources within a
cache located at ``~/.cache/buildstream`` (or beneath ``$XDG_CACHE_HOME``, if that
environment variable is set). When building large projects, this cache can get very
large, so BuildStream will attempt to clean up the cache automatically by expiring
the least recently *used* artifacts.
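Conceptually, the expiry policy resembles the following toy sketch. This is not
BuildStream's implementation; it assumes a flat directory of artifact files purely
for illustration:

.. code:: python

   # Toy sketch of least-recently-used expiry: delete the files that were
   # used longest ago until the cache fits back under the quota.
   import os

   def expire_lru(cache_dir, quota_bytes):
       entries = []
       for name in os.listdir(cache_dir):
           path = os.path.join(cache_dir, name)
           if os.path.isfile(path):
               st = os.stat(path)
               entries.append((st.st_atime, st.st_size, path))

       total = sum(size for _, size, _ in entries)
       for _, size, path in sorted(entries):   # oldest access time first
           if total <= quota_bytes:
               break
           os.remove(path)
           total -= size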
By default, cache expiry will begin once the file system which contains the cache
approaches maximum usage. However, it is also possible to impose a quota on the local
cache in the user configuration. This can be done in two ways:
1. By restricting the maximum size of the cache directory itself.
For example, to ensure that BuildStream's cache does not grow beyond 100 GB,
simply declare the following in your user configuration (``~/.config/buildstream.conf``):
.. code:: yaml
cache:
quota: 100G
This quota defines the maximum size of the artifact cache. The value may be given
as a plain number of bytes, or with a K, M, G or T suffix; this is the same format
used by systemd's
`resource-control <https://www.freedesktop.org/software/systemd/man/systemd.resource-control.html>`_.
A sketch of how such values resolve to a byte limit follows this list.
2. By expiring artifacts once the file system which contains the cache exceeds a specified usage.
To ensure that we start cleaning the cache once we've used 80% of local disk space (on the file
system containing the cache):
.. code:: yaml
cache:
quota: 80%
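To illustrate how the two forms above resolve to a concrete byte limit, here is a
rough sketch (this is not BuildStream's actual parser, which may differ in detail):

.. code:: python

   # Rough sketch of resolving a quota string such as "100G" or "80%" to bytes;
   # names and exact semantics here are illustrative only.
   import os

   SUFFIXES = {'K': 2 ** 10, 'M': 2 ** 20, 'G': 2 ** 30, 'T': 2 ** 40}

   def resolve_quota(quota, cache_dir):
       if quota.endswith('%'):
           # Percentage of the size of the filesystem holding the cache
           stat = os.statvfs(cache_dir)
           return int(stat.f_blocks * stat.f_bsize * float(quota[:-1]) / 100)
       if quota[-1] in SUFFIXES:
           return int(quota[:-1]) * SUFFIXES[quota[-1]]
       return int(quota)   # a plain number of bytes

   print(resolve_quota('100G', '/'))   # 107374182400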
Default configuration
---------------------
The default BuildStream configuration is specified here for reference:
......
kind: script

depends:
- filename: base.bst
  type: build
- filename: script/corruption-image.bst
  type: build

config:
  commands:
  - echo smashed >>/canary

kind: compose

depends:
- filename: base.bst
  type: build

public:
  bst:
    split-rules:
      remove:
      - "/tmp/**"
      - "/tmp"

kind: filter

depends:
- filename: script/marked-tmpdir.bst
  type: build

config:
  exclude:
  - remove
  include-orphans: True

kind: script

depends:
- filename: script/no-tmpdir.bst
  type: build

config:
  commands:
  - |
    mkdir -p /tmp/blah
......@@ -184,3 +184,41 @@ def test_regression_cache_corruption(cli, tmpdir, datafiles):
    with open(os.path.join(checkout_after, 'canary')) as f:
        assert f.read() == 'alive\n'


@pytest.mark.datafiles(DATA_DIR)
def test_regression_tmpdir(cli, tmpdir, datafiles):
    project = str(datafiles)
    element_name = 'script/tmpdir.bst'

    res = cli.run(project=project, args=['build', element_name])
    assert res.exit_code == 0


@pytest.mark.datafiles(DATA_DIR)
def test_regression_cache_corruption_2(cli, tmpdir, datafiles):
    project = str(datafiles)
    checkout_original = os.path.join(cli.directory, 'checkout-original')
    checkout_after = os.path.join(cli.directory, 'checkout-after')
    element_name = 'script/corruption-2.bst'
    canary_element_name = 'script/corruption-image.bst'

    res = cli.run(project=project, args=['build', canary_element_name])
    assert res.exit_code == 0

    res = cli.run(project=project, args=['checkout', canary_element_name,
                                         checkout_original])
    assert res.exit_code == 0

    with open(os.path.join(checkout_original, 'canary')) as f:
        assert f.read() == 'alive\n'

    res = cli.run(project=project, args=['build', element_name])
    assert res.exit_code == 0

    res = cli.run(project=project, args=['checkout', canary_element_name,
                                         checkout_after])
    assert res.exit_code == 0

    with open(os.path.join(checkout_after, 'canary')) as f:
        assert f.read() == 'alive\n'
from hashlib import sha256
import os
import pytest
import random
import tempfile
from tests.testutils import cli
from buildstream.storage._casbaseddirectory import CasBasedDirectory
from buildstream.storage._filebaseddirectory import FileBasedDirectory
from buildstream._artifactcache import ArtifactCache
from buildstream._artifactcache.cascache import CASCache
from buildstream import utils
# These are comparative tests that check that FileBasedDirectory and
# CasBasedDirectory act identically.


class FakeArtifactCache():
    def __init__(self):
        self.cas = None


class FakeContext():
    def __init__(self):
        self.artifactdir = ''
        self.artifactcache = FakeArtifactCache()

# This is a set of example file system contents. It's a set of trees
# which are either expected to be problematic or were found to be
# problematic during random testing.
# The test attempts to import each on top of each other to test
# importing works consistently. Each tuple is defined as (<filename>,
# <type>, <content>). Type can be 'F' (file), 'S' (symlink) or 'D'
# (directory) with content being the contents for a file or the
# destination for a symlink.
root_filesets = [
    [('a/b/c/textfile1', 'F', 'This is textfile 1\n')],
    [('a/b/c/textfile1', 'F', 'This is the replacement textfile 1\n')],
    [('a/b/d', 'D', '')],
    [('a/b/c', 'S', '/a/b/d')],
    [('a/b/d', 'S', '/a/b/c')],
    [('a/b/d', 'D', ''), ('a/b/c', 'S', '/a/b/d')],
    [('a/b/c', 'D', ''), ('a/b/d', 'S', '/a/b/c')],
    [('a/b', 'F', 'This is textfile 1\n')],
    [('a/b/c', 'F', 'This is textfile 1\n')],
    [('a/b/c', 'D', '')]
]

empty_hash_ref = sha256().hexdigest()
RANDOM_SEED = 69105
NUM_RANDOM_TESTS = 10
def generate_import_roots(rootno, directory):
    rootname = "root{}".format(rootno)
    rootdir = os.path.join(directory, "content", rootname)
    if os.path.exists(rootdir):
        return
    for (path, typesymbol, content) in root_filesets[rootno - 1]:
        if typesymbol == 'F':
            (dirnames, filename) = os.path.split(path)
            os.makedirs(os.path.join(rootdir, dirnames), exist_ok=True)
            with open(os.path.join(rootdir, dirnames, filename), "wt") as f:
                f.write(content)
        elif typesymbol == 'D':
            os.makedirs(os.path.join(rootdir, path), exist_ok=True)
        elif typesymbol == 'S':
            (dirnames, filename) = os.path.split(path)
            os.makedirs(os.path.join(rootdir, dirnames), exist_ok=True)
            os.symlink(content, os.path.join(rootdir, path))

def generate_random_root(rootno, directory):
    # By seeding the random number generator, we ensure these tests
    # will be repeatable, at least until Python changes the random
    # number algorithm.
    random.seed(RANDOM_SEED + rootno)
    rootname = "root{}".format(rootno)
    rootdir = os.path.join(directory, "content", rootname)
    if os.path.exists(rootdir):
        return
    things = []
    locations = ['.']
    os.makedirs(rootdir)
    for i in range(0, 100):
        location = random.choice(locations)
        thingname = "node{}".format(i)
        thing = random.choice(['dir', 'link', 'file'])
        target = os.path.join(rootdir, location, thingname)
        if thing == 'dir':
            os.makedirs(target)
            locations.append(os.path.join(location, thingname))
        elif thing == 'file':
            with open(target, "wt") as f:
                f.write("This is node {}\n".format(i))
        elif thing == 'link':
            symlink_type = random.choice(['absolute', 'relative', 'broken'])
            if symlink_type == 'broken' or not things:
                os.symlink("/broken", target)
            elif symlink_type == 'absolute':
                symlink_destination = random.choice(things)
                os.symlink(symlink_destination, target)
            else:
                symlink_destination = random.choice(things)
                relative_link = os.path.relpath(symlink_destination, start=location)
                os.symlink(relative_link, target)
        things.append(os.path.join(location, thingname))

def file_contents(path):
    with open(path, "r") as f:
        result = f.read()
    return result


def file_contents_are(path, contents):
    return file_contents(path) == contents


def create_new_casdir(root_number, fake_context, tmpdir):
    d = CasBasedDirectory(fake_context)
    d.import_files(os.path.join(tmpdir, "content", "root{}".format(root_number)))
    assert d.ref.hash != empty_hash_ref
    return d


def create_new_filedir(root_number, tmpdir):
    root = os.path.join(tmpdir, "vdir")
    os.makedirs(root)
    d = FileBasedDirectory(root)
    d.import_files(os.path.join(tmpdir, "content", "root{}".format(root_number)))
    return d


def combinations(integer_range):
    for x in integer_range:
        for y in integer_range:
            yield (x, y)

def resolve_symlinks(path, root):
    """ A function to resolve symlinks inside 'path' components apart from the last one.
    For example, resolve_symlinks('/a/b/c/d', '/a/b')
    will return '/a/b/f/d' if /a/b/c is a symlink to /a/b/f. The final component of
    'path' is not resolved, because we typically want to inspect the symlink found
    at that path, not its target.
    """
    components = path.split(os.path.sep)
    location = root
    for i in range(0, len(components) - 1):
        location = os.path.join(location, components[i])
        if os.path.islink(location):
            # Resolve the link, add on all the remaining components
            target = os.path.join(os.readlink(location))
            tail = os.path.sep.join(components[i + 1:])
            if target.startswith(os.path.sep):
                # Absolute link - relative to root
                location = os.path.join(root, target, tail)
            else:
                # Relative link - relative to symlink location
                location = os.path.join(location, target)
            return resolve_symlinks(location, root)
    # If we got here, no symlinks were found. Add on the final component and return.
    location = os.path.join(location, components[-1])
    return location


def directory_not_empty(path):
    return os.listdir(path)

def _import_test(tmpdir, original, overlay, generator_function, verify_contents=False):
    fake_context = FakeContext()
    fake_context.artifactcache.cas = CASCache(tmpdir)
    # Create some fake content
    generator_function(original, tmpdir)
    if original != overlay:
        generator_function(overlay, tmpdir)

    d = create_new_casdir(original, fake_context, tmpdir)

    duplicate_cas = create_new_casdir(original, fake_context, tmpdir)

    assert duplicate_cas.ref.hash == d.ref.hash

    d2 = create_new_casdir(overlay, fake_context, tmpdir)
    d.import_files(d2)
    export_dir = os.path.join(tmpdir, "output-{}-{}".format(original, overlay))
    roundtrip_dir = os.path.join(tmpdir, "roundtrip-{}-{}".format(original, overlay))
    d2.export_files(roundtrip_dir)
    d.export_files(export_dir)

    if verify_contents:
        for item in root_filesets[overlay - 1]:
            (path, typename, content) = item
            realpath = resolve_symlinks(path, export_dir)
            if typename == 'F':
                if os.path.isdir(realpath) and directory_not_empty(realpath):
                    # The file should not have overwritten the directory in this case.
                    pass
                else:
                    assert os.path.isfile(realpath), "{} did not exist in the combined virtual directory".format(path)
                    assert file_contents_are(realpath, content)
            elif typename == 'S':
                if os.path.isdir(realpath) and directory_not_empty(realpath):
                    # The symlink should not have overwritten the directory in this case.
                    pass
                else:
                    assert os.path.islink(realpath)
                    assert os.readlink(realpath) == content
            elif typename == 'D':
                # We can't do any more tests than this because it
                # depends on things present in the original. Blank
                # directories here will be ignored and the original
                # left in place.
                assert os.path.lexists(realpath)

    # Now do the same thing with filebaseddirectories and check the contents match
    files = list(utils.list_relative_paths(roundtrip_dir))
    duplicate_cas._import_files_from_directory(roundtrip_dir, files=files)
    duplicate_cas._recalculate_recursing_down()
    if duplicate_cas.parent:
        duplicate_cas.parent._recalculate_recursing_up(duplicate_cas)

    assert duplicate_cas.ref.hash == d.ref.hash

# It's possible to parameterize on both original and overlay values,
# but this leads to more tests being listed in the output than are
# comfortable.
@pytest.mark.parametrize("original", range(1, len(root_filesets) + 1))
def test_fixed_cas_import(cli, tmpdir, original):
    for overlay in range(1, len(root_filesets) + 1):
        _import_test(str(tmpdir), original, overlay, generate_import_roots, verify_contents=True)


@pytest.mark.parametrize("original", range(1, NUM_RANDOM_TESTS + 1))
def test_random_cas_import(cli, tmpdir, original):
    for overlay in range(1, NUM_RANDOM_TESTS + 1):
        _import_test(str(tmpdir), original, overlay, generate_random_root, verify_contents=False)


def _listing_test(tmpdir, root, generator_function):
    fake_context = FakeContext()
    fake_context.artifactcache.cas = CASCache(tmpdir)
    # Create some fake content
    generator_function(root, tmpdir)

    d = create_new_filedir(root, tmpdir)
    filelist = list(d.list_relative_paths())

    d2 = create_new_casdir(root, fake_context, tmpdir)
    filelist2 = list(d2.list_relative_paths())

    assert filelist == filelist2


@pytest.mark.parametrize("root", range(1, 11))
def test_random_directory_listing(cli, tmpdir, root):
    _listing_test(str(tmpdir), root, generate_random_root)


@pytest.mark.parametrize("root", [1, 2, 3, 4, 5])
def test_fixed_directory_listing(cli, tmpdir, root):
    _listing_test(str(tmpdir), root, generate_import_roots)
......@@ -27,4 +27,5 @@ def test_parse_size_over_1024T(cli, tmpdir):
patched_statvfs = mock_os.mock_statvfs(f_bavail=bavail, f_bsize=BLOCK_SIZE)
with mock_os.monkey_patch("statvfs", patched_statvfs):
result = cli.run(project, args=["build", "file.bst"])
assert "1025T of available system storage" in result.stderr
failure_msg = 'Your system does not have enough available space to support the cache quota specified.'
assert failure_msg in result.stderr