Replace run.sh with a Python Script
This represents a possible response to #103: making the run script more ergonomic and usable.
Rather than trying to make too many changes in one go, this MR mostly recreates the existing functionality of run.sh. However, that functionality is now written as a Python package, with classes to represent Applications (clients, servers, assets), the matrix file, and Jobs, which puts us in a strong position to add more functionality in future.
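To make the class structure concrete, here is a minimal sketch of what such a package might look like. The class and field names below are illustrative only, not the actual ones in the MR.

```python
from dataclasses import dataclass
from typing import List, Optional


# Illustrative sketch only: the actual class and field names in the MR may differ.
@dataclass
class Application:
    """A client, server or asset entry from the matrix file."""
    name: str
    template: str  # jinja template used to generate its docker-compose yaml


@dataclass
class Matrix:
    """Parsed contents of matrix.yml."""
    applications: List[Application]

    def find(self, name: str) -> Optional[Application]:
        # Look up an application by name; None if it isn't in the matrix.
        return next((a for a in self.applications if a.name == name), None)


@dataclass
class Job:
    """One test run: a server, a client, and optionally an asset."""
    server: Application
    client: Application
    asset: Optional[Application] = None


matrix = Matrix([Application("bazel", "docker-compose-bazel.yml.j2"),
                 Application("buildbarn", "docker-compose-buildbarn.yml.j2")])
job = Job(server=matrix.find("buildbarn"), client=matrix.find("bazel"))
print(job.server.name, job.client.name)
```

Keeping the matrix data in plain dataclasses like this makes it straightforward for subcommands (run, show-matrix, compose-yaml) to share one parsed representation.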
Usage
From the top level directory:
- Install the package with `python3 -m pip install .`
- Run a test with `remoteAPIsTesting [-m matrix_file] -s server -c client [-a asset] run [-r results/] [-u $CI_JOB_URL]`
- For example: `remoteAPIsTesting -m matrix.yml -s buildbarn -c bazel -a remote_asset run -r results/`
(Note: the script now takes the application name as an argument (e.g. `bazel`) rather than a docker-compose filename (e.g. `docker-compose-bazel.yml`).)
Differences
- The new script reads information from `matrix.yml` and combines it with the docker-compose jinja templates to run the tests. This means it is no longer necessary to generate the `docker-compose-${name}.yml` files in order to run the tests. Indeed, if those files are generated, the script won't do anything with them, and editing them won't have any effect on the test.
- The script doesn't take a job name argument for naming the results file. Instead, the filename is auto-generated from the server name, client name, etc. The user specifies a results directory with `-r`, and the files are saved there. If no directory is specified, then results are not saved.
- The JSON results summary is now always printed to stdout (in addition to being saved to a file if `-r` is used).
- The script does not need to be run in the docker-compose directory. In fact, the default argument for `-m` is `matrix.yml`, implying that the default location to run it from is the top level directory of the repository.
- However, the actual tests are still run from the docker-compose directory (this is used as the working directory when `subprocess` runs docker-compose commands). This working directory is now defined in the matrix file, with a path relative to the matrix file's location.
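The two points above (auto-generated result filenames, and a working directory resolved relative to the matrix file) could be implemented roughly as follows. This is a sketch under assumed naming conventions; the MR's actual scheme may differ.

```python
from pathlib import Path
from typing import Optional


def result_filename(server: str, client: str, asset: Optional[str] = None) -> str:
    # Hypothetical naming scheme: join the application names with dashes.
    parts = [server, client] + ([asset] if asset else [])
    return "-".join(parts) + ".json"


def resolve_workdir(matrix_path: str, workdir_in_matrix: str) -> Path:
    # The matrix file stores the docker-compose working directory as a
    # path relative to its own location, so resolve it against the
    # matrix file's parent directory.
    return (Path(matrix_path).parent / workdir_in_matrix).resolve()


print(result_filename("buildbarn", "bazel", "remote_asset"))
print(resolve_workdir("matrix.yml", "docker-compose"))
```

Resolving the working directory this way means the script behaves the same no matter which directory the user invokes it from.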
Additional commands
In addition to `run`, which runs the test, I've added two more subcommands to demonstrate the sort of extra functionality we could build onto this base. (These could be moved to a separate MR if that helps with review.)
show-matrix
`remoteAPIsTesting [-m matrix.yml] show-matrix [-v]`
Lists all the applications defined in matrix.yml, with a `-v` option to provide more detail on each one. Note that the `-s`, `-c` and `-a` options have no effect on this subcommand.
compose-yaml
`remoteAPIsTesting [-m matrix.yml] [-s server]* [-c client]* [-a asset]* compose-yaml`
Generates the docker-compose yaml for a given set of applications, either printing it to stdout or saving each one to a file.
Note: this command demonstrates that the script doesn't always have to be called with exactly one server and exactly one client. For instance, `remoteAPIsTesting -s buildbarn -s buildfarm -s buildgrid compose-yaml` is a valid command, and will produce the docker-compose yaml files for all three servers.
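A minimal sketch of how the multi-application case above might work internally. The repository uses jinja templates; `string.Template` stands in here so the sketch stays dependency-free, and the template content is purely illustrative.

```python
from string import Template

# Stand-in for the repository's jinja templates; the content is illustrative.
COMPOSE_TEMPLATE = Template("services:\n  $name:\n    image: $name:latest\n")


def compose_yaml(server: str) -> str:
    # Render the docker-compose yaml for one named application.
    return COMPOSE_TEMPLATE.substitute(name=server)


# Roughly what `remoteAPIsTesting -s buildbarn -s buildfarm -s buildgrid
# compose-yaml` would do: render yaml for each requested server in turn.
for server in ["buildbarn", "buildfarm", "buildgrid"]:
    print(f"# docker-compose-{server}.yml")
    print(compose_yaml(server))
```

Because each `-s`/`-c`/`-a` flag simply appends to a list of applications, iterating over that list is all the subcommand needs to handle any number of them.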
Next steps
If we proceed with this approach, there are some obvious paths to continue improving the script:
- Subdivide the applications in matrix.yml into clients, servers and assets. (Currently the groups are clearly separated and labelled by comments, but not by the actual yaml data.)
- We can then use this information programmatically in the script, e.g. `remoteAPIsTesting -c * show-matrix` might be the command to list all clients.
- Support using local docker-compose-yaml files, alongside the applications defined in matrix.yml. (Would help when users are experimenting with a new application, or want to modify the docker-compose yaml for quick testing and improving.)
- Support running multiple tests from one command. This wouldn't be hard to implement if the jobs are run in series.
- Running jobs in parallel. This is a significantly bigger challenge. We would need to be able to spin up multiple instances of each client, and each server, making sure that each one only communicates with the ones it is supposed to.