Commits on Source 53
-
Daniel P. Berrangé authored
Installing glib-utils on macOS results in a warning:

  $ brew install ccache cppi gettext git glib glib-utils gobject-introspection gtk-doc libvirt libxml2 make meson ninja pkg-config vala
  Warning: Use glib instead of deprecated glib-utils

This change follows its advice.

Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
-
Eric Blake authored
nbdkit finally enabled Cirrus CI testing, which exposed several stale mappings detected by homebrew:

  brew install autoconf automake bash bash-completion ccache cpanminus curl e2fsprogs expect flake8 git gnutls golang gzip jq libssh libtool libtorrent libvirt lua make ocaml perl perl5-devel pkg-config python3 python3-boto3 python3-libnbd qemu rust sfdisk socat tcl xorriso xz zlib zstd
  Warning: No available formula with the name "libtorrent". Did you mean rtorrent?
  ==> Searching for similarly named formulae and casks...
  ==> Formulae
  libtorrent-rakshasa libtorrent-rasterbar rtorrent
  ...
  Warning: No available formula with the name "perl5-devel". Did you mean perl-build?
  ...
  Warning: No available formula with the name "python3-boto3". Did you mean python-yq?
  ...
  Warning: No available formula with the name "python3-libnbd". Did you mean python-build?
  ...
  Warning: No available formula with the name "sfdisk".

Looks like homebrew dropped bare 'libtorrent' in 2016 in favor of the two explicit fork names. rasterbar seems to be the more featureful of the two forks, but Fedora is based on rakshasa; that said, nbdkit's configure.ac currently checks PKG_CHECK_MODULES([LIBTORRENT], [libtorrent-rasterbar]...), and I have not tested whether rakshasa works in its place. Meanwhile, 'pkg-config --cflags libtorrent-rasterbar' does not include the necessary -I/path/to/boost, so compilation fails with a missing <boost/config.hpp>; the easiest fix is to just skip the torrent plugin on macOS.

As for perl5-devel, it looks like bare 'perl' on homebrew installs enough to build C bindings against perl without needing a second package; nbdkit used both 'perl' and 'perl-devel', but a project using only 'perl-devel' will now get the same effect. And for boto3, sfdisk, and libnbd, it looks like homebrew has not picked these up yet, although boto3 can come in through pip. For tcl, it looks like homebrew prefers the longer name tcl-tk.

Signed-off-by: Eric Blake <eblake@redhat.com>
-
Daniel P. Berrangé authored
This reverts commit ff239cf5. capstone was unretired in EPEL-9 [1], so we can go back to relying on the regular repos instead of downloading it.

[1] https://bugzilla.redhat.com/show_bug.cgi?id=2124181#c11

Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
-
Erik Skultety authored
Ansible introduced some API changes in ansible-runner 2.0, luckily none of them backwards incompatible. However, Ansible is planning on deprecating Python 3.7 starting with version 2.12. Here's the weird problem for us: the deprecation warning Ansible produces here is a multiline string which is delivered in multiple Ansible events, so our ugly WARNING filter can't work properly. By moving to ansible-runner 2.1.1 or newer, we're able to get rid of that ugly piece of code because ansible-runner now provides attributes for both the stdout and stderr streams. As for the API changes, the biggest one is the introduction of the interface module providing a new 'run' method; the old Runner.run method is essentially just a wrapper over the new API. Yet we need to keep using the Runner object directly (which is fine), otherwise we'd lose some logging details about the actual command Ansible ran, because the new APIs simply don't provide that information. Signed-off-by: Erik Skultety <eskultet@redhat.com>
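A rough sketch of driving ansible-runner 2.1+ through the Runner object rather than the new top-level run() helper; the private_data_dir and playbook values below are illustrative, not lcitool's actual configuration:

    import ansible_runner

    # Prepare a runner configuration and drive the Runner object directly,
    # which keeps access to details such as the exact command being run.
    config = ansible_runner.RunnerConfig(private_data_dir="/tmp/lcitool-demo",
                                         playbook="site.yml")
    config.prepare()

    runner = ansible_runner.Runner(config=config)
    runner.run()

    print(runner.status, runner.rc)
    print(runner.stdout.read())   # separate stdout/stderr attributes since 2.1.1
    print(runner.stderr.read())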
-
Erik Skultety authored
We never officially had a minimum Python version that was required for this project, but we always silently assumed 3.6. As 3.6 has now been EOL'd for quite some time, and given that 3.7 will be EOL'd in a few months and 3.8 is generally available, 3.8 is what we'll make our target. Signed-off-by: Erik Skultety <eskultet@redhat.com>
-
Erik Skultety authored
These configs are all VM installation related, so it makes logical sense to put the data resources under the lcitool.install subpackage. This change will make the transition from 'pkg_resources' to 'importlib.resources' more straightforward. If the need for global configs pops up, we can have another configs dir at the global level again. Signed-off-by: Erik Skultety <eskultet@redhat.com>
-
Erik Skultety authored
Complete the move of the 'lcitool/configs' installation configs under the lcitool.install package by moving the cloud-init.conf example config (still the only one) to the same directory for consistency. Signed-off-by: Erik Skultety <eskultet@redhat.com>
-
Erik Skultety authored
importlib.resources.files was introduced in Python 3.9. However, we support Python 3.8+, which means that we need to add a compatibility helper to query package resources properly. Normally we'd just go with importlib.resources.path, but that API lacks any kind of practicality as it only accepts filenames as a resource, rejecting relative paths, strings with directory delimiters or even directories. So, if we want to iterate over a bunch of data files we ship along with lcitool, we have to use other means to do that - get the imported package directory and join that path with whatever we're looking for. This solution doesn't work with resources not physically on the file system, IOW with resources shipped as a ZIP, which isn't our case anyway. Note that this patch can be reverted once we move onto 3.9+ in the future. Signed-off-by: Erik Skultety <eskultet@redhat.com>
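A minimal sketch of such a compatibility helper; the function name and the exact fallback wording are assumptions based on the description above, not the code in the patch:

    import importlib
    import sys
    from pathlib import Path

    def resource_path(pkg, relpath):
        """Return a filesystem path for a data file shipped with 'pkg'."""
        if sys.version_info >= (3, 9):
            from importlib.resources import files
            return Path(str(files(pkg))) / relpath
        # Python 3.8 fallback: join the imported package's on-disk directory
        # with the requested relative path (works only for unzipped packages).
        mod = importlib.import_module(pkg)
        return Path(mod.__file__).parent / relpath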
-
Erik Skultety authored
According to the setuptools documentation [1], any new usage of pkg_resources for package resource discovery is deprecated in favour of importlib.resources. This patch follows the recommendation.

[1] https://setuptools.pypa.io/en/latest/pkg_resources.html
[2] https://importlib-resources.readthedocs.io/en/latest/migration.html#pkg-resources-resource-filename

Signed-off-by: Erik Skultety <eskultet@redhat.com>
-
Erik Skultety authored
The flat project hierarchy allows Python to run the project directly with 'python3 -m', and Python will automatically import the package. However, that way it'll run the project via the __main__.py module, which in turn sets argv[0] to __main__. Setting the program name explicitly in argparse is therefore useful nonetheless. Signed-off-by: Erik Skultety <eskultet@redhat.com>
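For illustration, the explicit program name boils down to something like the following; the description string is made up, not lcitool's actual help text:

    import argparse

    # argv[0] is '__main__' when run via 'python3 -m', so spell the name out.
    parser = argparse.ArgumentParser(prog="lcitool",
                                     description="libvirt CI tooling")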
-
Erik Skultety authored
The thing with transitioning from the setup.py to the pyproject.toml build strategy is that we now must specify a list of all entry points for the project, IOW whatever is going to become the CLI executable. With setup.py we got away with having a bin/lcitool script which would get installed directly, but that's not how setuptools builds work after PEP 517 [1]. The build backend now creates the executables automatically as wrappers over the entry points that are required to be listed in the project's build config. There's one more requirement - the entry point needs to be importable through Python's standard means. Since bin/lcitool is not part of the lcitool package, and hence not easily importable as a module, we need to have its logic elsewhere. One of the standard ways of doing that is by creating a __main__.py module, which Python also automatically looks for when instructed to run the project directly from its repo.

[1] https://peps.python.org/pep-0517/

Signed-off-by: Erik Skultety <eskultet@redhat.com>
-
Erik Skultety authored
Everything that we want included will be included by default by means of pyproject.toml, but there are files we don't want to distribute, e.g. CI-related files, hidden repo files, etc. Signed-off-by: Erik Skultety <eskultet@redhat.com>
-
Erik Skultety authored
Replace bare setup.py. The required version of setuptools set by this patch is not a random version number - it is the version where setuptools' pyproject.toml parsing gained support for recursive globbing of package data, without which we would not be able to enumerate package data as simply as "<package>/datadir/**". Signed-off-by: Erik Skultety <eskultet@redhat.com>
-
Erik Skultety authored
Now that we're fully compliant with using pyproject.toml for the build backend, we can simplify the script's logic to a mere import, similarly to what Python would do if the lcitool package was actually installed. This patch includes one important change which could not have been performed before switching to pyproject.toml from setup.py - it always overrides PYTHONPATH, because our bin/lcitool is no longer being installed along with the package: build backends relying on pyproject.toml as build input generate an implicit bin/lcitool script in the package installation location, wrapping whatever API entrypoint we specified. Signed-off-by: Erik Skultety <eskultet@redhat.com>
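One plausible shape for the simplified in-repo wrapper, assuming the checkout root is one directory above the script; the import path is a guess, and prepending to sys.path is used here as the in-process equivalent of overriding PYTHONPATH:

    #!/usr/bin/env python3
    import sys
    from pathlib import Path

    # Make the checked-out sources win over any installed copy of lcitool.
    sys.path.insert(0, str(Path(__file__).resolve().parent.parent))

    from lcitool.__main__ import main

    main()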
-
Erik Skultety authored
Since we moved to pyproject.toml in terms of providing packaging information about the project, we can drop setup.py completely now. The combination of Python 3.8+ and setuptools >62.3 means we don't need any legacy way of building with setup.py. Signed-off-by: Erik Skultety <eskultet@redhat.com>
-
Erik Skultety authored
If one tries to use a class-scoped or a module-scoped fixture that happens to depend on monkeypatch, this is what they'll get: "You tried to access the function scoped fixture monkeypatch with a module scoped request object". Sometimes one wants a monkeypatched override to carry over to further test functions, and that's where fixture scopes come in. To remediate the issue, introduce our own global fixture helpers conforming to the different scopes. At the same time, apply the improvement to test_containers.py. Signed-off-by: Erik Skultety <eskultet@redhat.com>
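A minimal sketch of what such scoped helpers can look like; the fixture names are illustrative, not necessarily the ones added by the patch:

    import pytest

    @pytest.fixture(scope="module")
    def monkeypatch_module():
        # A module-scoped counterpart to the built-in, function-scoped
        # 'monkeypatch' fixture; overrides persist across the module's tests.
        mp = pytest.MonkeyPatch()
        yield mp
        mp.undo()

    @pytest.fixture(scope="class")
    def monkeypatch_class():
        mp = pytest.MonkeyPatch()
        yield mp
        mp.undo()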
-
Martin Kletzander authored
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
-
Erik Skultety authored
Turns out, PEP 517 [1] didn't say anything about editable installs with pyproject.toml based builds. That's what was covered by PEP 660 [2], which, however, isn't supported in setuptools < 64.0.0. Our current setuptools requirement is >62.3, but it'll take time before v64.0.0 is widely available outside of PyPI. Therefore, introduce a simple legacy setup.py which solves this for the time being. This commit partially reverts b1562114.

[1] https://peps.python.org/pep-0517/
[2] https://peps.python.org/pep-0660/

Signed-off-by: Erik Skultety <eskultet@redhat.com>
-
Instead of doing os.environ["HOME"], use pathlib's native way of resolving the home directory: Path.home(). The reason for this is that in test environments "HOME" might not be set, which results in a vague "KeyError: 'HOME'" error. If pathlib fails to resolve home, it raises 'RuntimeError: Can't determine home directory'. The benefit is that instead of looking at $HOME, pathlib tries to use os.path.expanduser() to construct the home path, which is more robust, and the error is also more verbose than a plain "KeyError". Signed-off-by: Erik Skultety <eskultet@redhat.com>
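For illustration (the config path below is made up, not necessarily one lcitool uses):

    from pathlib import Path

    # Raises a descriptive RuntimeError if the home directory cannot be
    # determined, instead of a bare KeyError when $HOME is unset.
    config_dir = Path.home() / ".config" / "lcitool"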
-
Erik Skultety authored
When subclassing and overriding the constructor of a class, unless we're providing a completely new implementation, it's always good practice to call the parent class's constructor as well. In this case, not doing so may lead to some weird behaviour where the "message" attribute isn't picked up properly when printing the exception at the top level (in this case in application.py). Signed-off-by: Erik Skultety <eskultet@redhat.com>
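A small sketch of the pattern being described; the exception class names are illustrative rather than the project's actual hierarchy:

    class LcitoolError(Exception):
        def __init__(self, message):
            super().__init__(message)
            self.message = message

    class InventoryError(LcitoolError):
        def __init__(self, message):
            # Forward to the parent so 'message' is set up consistently;
            # skipping this call is what caused the odd top-level printing.
            super().__init__(f"Inventory error: {message}")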
-
Erik Skultety authored
When installing from a URL, we relied on hardcoded combinations of unattended install schemes and their corresponding targets. The problem with this approach is that even though we now support adding new targets or target overrides using the data dir, one cannot actually install a VM instance of such a target unless the OS name is something we already know about upstream. It may sound like adding the new target to the upstream codebase would be the way to go instead, but consider the target being RHEL which, at least from the unattended install's perspective, cannot simply be added to the list of supported targets; we need a different mechanism to support downstream use cases involving lcitool and these enterprise distros, provided the user has access to the repositories and OS install trees. This patch solves the problem by introducing a new attribute 'install.unattended_scheme' in the target's YAML profile, which then determines the name of the corresponding install config file to be injected. Signed-off-by: Erik Skultety <eskultet@redhat.com>
-
Erik Skultety authored
Trying out tox and building the sdist afterwards resulted in a bunch of Python's runtime compile artifacts being included in the tarball. This is because "recursive-include" is used on tests. Make sure we globally exclude the artifacts before packaging. Signed-off-by: Erik Skultety <eskultet@redhat.com>
-
Daniel P. Berrangé authored
The Cirrus CI jobs are passed the CI_REPOSITORY_URL and CI_COMMIT_REF_NAME variables. The former is the repository against which the job is being run and the latter is the branch name of the code to test. The latter, however, is only valid when testing already merged code, either in upstream or in a fork. If testing a merge request, then CI_REPOSITORY_URL refers to the upstream repo, while CI_COMMIT_REF_NAME refers to the branch in the contributor's fork. Cirrus CI is of course unable to fetch CI_COMMIT_REF_NAME from upstream since it is the wrong repository. Fortunately, the commits associated with any merge request are also copied into upstream and made available at merge-requests/$NUM/head. This ref path is present in the CI_MERGE_REQUEST_REF_PATH env variable, and so this should be exposed to Cirrus CI too. Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
-
Andrea Bolognani authored
Instead of using inline string interpolation, follow the same varmap-based approach that's used for cross_meson. Signed-off-by: Andrea Bolognani <abologna@redhat.com>
-
Andrea Bolognani authored
Portably using echo when escape sequences are involved is more involved than one would expect. In particular, the shell that is used to run the generation step as part of 'docker build' is not guaranteed to be the same that is used to invoke the install_buildenv function. For example, bash's echo builtin doesn't handle escape sequences by default but can be told to do so by passing the -e option; dash's version of echo, on the other hand, defaults to handling escape sequences, but when passed the -e option will simply consider it part of the input and print it back along with the rest. bash's behavior results in broken files, with all contents on a single line, getting generated as part of the $CROSS-$NAME-local-env jobs. To solve the issue, use printf instead of echo. This command works consistently across shells, and does the right thing without needing to pass additional flags. Signed-off-by: Andrea Bolognani <abologna@redhat.com>
-
Andrea Bolognani authored
Short for "environment identifier", it's a combination of the target name and, if applicable, the cross-build architecture. Right now we're only using it to build the image name, so all we're gaining is the removal of a tiny amount of duplication. It will become much more important in a second. Signed-off-by: Andrea Bolognani <abologna@redhat.com>
-
Andrea Bolognani authored
Using $NAME is fine when performing a native build, but when cross-building we need to use $NAME-cross-$CROSS instead, or we will pick up the wrong file and things will not work. This currently affects all $CROSS-$NAME-local-env jobs, which end up performing a native build instead of the expected cross-build. Signed-off-by: Andrea Bolognani <abologna@redhat.com>
-
Signed-off-by: Erik Skultety <eskultet@redhat.com>
-
The idea is to be able to compare all datatypes we produce in a generic manner, interchangeably. In other words, we need to be able to compare lists, dicts, strings, files, or even strings with files, transparently to the caller. How are we going to do that? Since our native serialization format is YAML, we'll serialize everything apart from raw strings to YAML and compare those the same way as if two files were compared using diff. To implement the idea, difflib, which is part of the standard library, is used. The interface to compare the actual and expected outputs was borrowed from Python sets and so follows the "A.diff(B)" mantra, where A is the actual produced output and B is the expected one. Both need to be of type DiffOperand (introduced by this patch) which takes care of any conversion needed. Unlike sets though, we can't use this the other way around because then we'd get difflib's unified_diff confused about which is which. Not that it wouldn't work, it's just that the final diff might be confusing. Sadly, there's no way to enforce this usage though. The nice thing about this patch is that we'll now get something like this if the outputs don't match:

  --- actual
  +++ expected
  @@ -48,7 +48,6 @@
   - firewalld-filesystem
   - flake8
   - flex
  -- foo
   - fuse
   - fuse3
   - fusermount
  @@ -317,7 +316,6 @@
   - xsltproc
   - xz
   - yajl
  -- zfoo
   - zip
   - zlib
   - zlib-static

Signed-off-by: Erik Skultety <eskultet@redhat.com>
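The core of such a comparison can be sketched roughly as follows; the function name is illustrative and the real code uses a DiffOperand class rather than a bare helper:

    import difflib
    import yaml

    def unified_yaml_diff(actual, expected):
        # Serialize anything that isn't already a string to YAML so lists,
        # dicts, strings and file contents can all be compared the same way.
        def to_lines(data):
            text = data if isinstance(data, str) else yaml.safe_dump(data)
            return text.splitlines(keepends=True)

        return "".join(difflib.unified_diff(to_lines(actual), to_lines(expected),
                                            fromfile="actual", tofile="expected"))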
-
So, this function will eventually replace the existing assert_equal function. However, unlike the original, this one is not to be used directly and instead will be wrapped by a global fixture introduced in a future patch. The reason for that is simple. To give the developer the most convenient output possible, we're going to dump both the actual and expected outputs to the disk in order to aid them with debugging problems. Where do we dump the files, though? The natural answer is "wherever pytest would create them". Indeed, pytest has the 'tmp_path_factory' fixture which can create directories on demand, but what is the name for such a directory? Pytest has another fixture for that - request. And so now every single test would have to depend on 2 fixtures just to get the test's name and to create a temporary directory using that name to dump data into on failure. A test writer should not be concerned by this; they should only ever care about comparing the actual and expected outputs, that's it. Signed-off-by: Erik Skultety <eskultet@redhat.com>
-
This fixture is going to be used by every single positive test. Because it's global, it's going to be available everywhere, and because it returns a higher-order function, it only accepts arguments for the actual and expected outputs to be compared; everything else is transparent to the test writer. Signed-off-by: Erik Skultety <eskultet@redhat.com>
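A rough sketch of such a fixture, combining the dump-on-failure helper described above with the higher-order function described here; the fixture name and failure handling are assumptions, not the actual implementation:

    import pytest

    @pytest.fixture
    def assert_matches(request, tmp_path_factory):
        # Tests only ever call the returned comparator; the 'request' and
        # 'tmp_path_factory' dependencies stay hidden inside the fixture.
        def _compare(actual, expected):
            if actual == expected:
                return
            dump_dir = tmp_path_factory.mktemp(request.node.name)
            (dump_dir / "actual").write_text(str(actual))
            (dump_dir / "expected").write_text(str(expected))
            pytest.fail(f"outputs differ, dumped to {dump_dir}")
        return _compare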
-
With all the 'diff' bits in place, we can now convert all tests but one (test_manifest) to using this new fixture. Signed-off-by: Erik Skultety <eskultet@redhat.com>
-
Unlike with the other tests, this wasn't a completely straightforward conversion, as the asserts are wrapped by multiple helpers which naturally don't have access to fixtures, so we need to pass the fixture via kwargs to the very leaf doing the actual equality check. Signed-off-by: Erik Skultety <eskultet@redhat.com>
-
With all tests converted to use the 'diff' style equality check, we don't need the original recursive assert helpers. Signed-off-by: Erik Skultety <eskultet@redhat.com>
-
Erik Skultety authored
We recently switched from a plain setup.py setup to being fully compatible with PEP 517, relying solely on pyproject.toml (with setup.py reserved for legacy reasons). Installation is now only possible with pip, so our docs need an update. Fixes: #20 Signed-off-by: Erik Skultety <eskultet@redhat.com>
-
Erik Skultety authored
Without a section header, this information could be easily overlooked, so put it in a dedicated section. Signed-off-by: Erik Skultety <eskultet@redhat.com>
-
Erik Skultety authored
Signed-off-by: Erik Skultety <eskultet@redhat.com>
-
Peter Krempa authored
'virt-manager' isn't hosted together with the other libvirt-family projects and uses GitHub workflows instead of the CI environment that we try to prepare here. If it were to adopt it, the 'virt-manager' dependencies would need to be updated anyway. Signed-off-by: Peter Krempa <pkrempa@redhat.com>
-
Peter Krempa authored
Starting with OpenSUSE Leap 15.4 the package was renamed to match other distros. Use the old name only for old versions. Signed-off-by: Peter Krempa <pkrempa@redhat.com>
-
Peter Krempa authored
The 'rpm2cpio' program is part of the 'rpm' package, which can be assumed to be present on RPM-based distros. This fixes the build of the OpenSUSE Leap 15.4 container, as they switched to rpm-ndb (using 'ndb' as their database backend) instead of the "original" rpm. Either way, the required rpm2cpio program is present. Signed-off-by: Peter Krempa <pkrempa@redhat.com>
-
Peter Krempa authored
We generate the dockerfiles but don't test them yet. Signed-off-by: Peter Krempa <pkrempa@redhat.com>
-
Erik Skultety authored
When commit f0cc11e3 introduced the module, it was done so that:
1) it complies with PEP 517 [1]
2) lcitool could be executed as a Python module

The implementation is based on having a __main__.py module which Python can automatically import. The problem is that when the main function is imported from this module by the automatically created executable wrapper installed with the package, importing executes the whole module (standard behaviour), and at the end of the module there's an unguarded call to the main() function. Fix this with the '__name__ == "__main__"' idiom.

Fixes: f0cc11e3

[1] https://peps.python.org/pep-0517/

Signed-off-by: Erik Skultety <eskultet@redhat.com>
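The fix boils down to the standard idiom; the body of main() below is elided and purely illustrative:

    # lcitool/__main__.py
    def main():
        ...  # build the argument parser and dispatch to the application

    # The console-script wrapper imports main() from this module; without the
    # guard, the import alone would already run the whole tool.
    if __name__ == "__main__":
        main()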
-
Erik Skultety authored
EPEL 9 has added a couple of packages over time, so we can now drop our masking for these. Signed-off-by: Erik Skultety <eskultet@redhat.com>
-
Commit 84b2d110 did explain why the file was added, but forgot to add a note inside the file itself, so that the note can keep staring at us. Signed-off-by: Erik Skultety <eskultet@redhat.com>
-
OpenSUSE 15.3 is EOL and will be deleted in following patches. Signed-off-by: Peter Krempa <pkrempa@redhat.com>
-
15.3 is EOL and will be removed in upcoming patches. Switch the tests to use 15.4. Signed-off-by: Peter Krempa <pkrempa@redhat.com>
-
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
-
Erik Skultety authored
This option was never used by the 'update' command; it was always specific to the 'build' command. Signed-off-by: Erik Skultety <eskultet@redhat.com>
-
Erik Skultety authored
Although this used to be useful back in the day when most of our listed projects (mostly libvirt, though) were built in VMs, with the rise of git-forge-like services providing containers to build projects from sources this feature has become less useful. Obsolescence aside, the biggest problems with this feature were that:

- the extra Ansible layer in the form of playbooks abstracted away useful bits like real-time stdout logging of the build process, making debugging of a build issue more difficult than if it were done manually, especially considering the JSON-serialized stderr dump it returns

- project dependencies: in order to build project X, project Y likely had to be built before project X so that fresh builddeps were available at project X's build time. Not only did the projects have to be named in the exact order they were to be built, but project Y could not even specify that it could be built against distro-provided builddeps if so desired.

- slight changes to the build procedure or tool chains used in the respective upstream projects. Should a build happen in such a VM, each and every project is responsible for maintaining their own build recipes, which the user can happily execute interactively inside an lcitool-prepared VM.

Signed-off-by: Erik Skultety <eskultet@redhat.com>
-
Erik Skultety authored
With the 'build' command gone, we can now safely drop all Ansible 'build'-related artifacts. Signed-off-by: Erik Skultety <eskultet@redhat.com>
-
Erik Skultety authored
This was a feature only used with the build playbooks using Ansible. Now with the build playbook functionality gone, we can drop this code as well. Signed-off-by: Erik Skultety <eskultet@redhat.com>
-
Erik Skultety authored
The project 'build' support was dropped by previous patches. Signed-off-by: Erik Skultety <eskultet@redhat.com>
-
Paolo Bonzini authored
Just like Alma Linux only has the "9" version number, only use "15" in the target name for OpenSUSE Leap. The minor version is only needed in the FROM line of the dockerfiles and for the installation URLs. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>