
Feature/602 create setup performance testing

What does this MR do and why?

See #602 (closed). It does not completely implement the issue, but it is a first step that will allow progress for #595 (closed) and a necessary step for adding performance comparisons in the pipelines.

This merge request adds a series of benchmarks under tests/benchmarks using the pytest-benchmark plugin. These are based on all the jupyter-execute blocks in the rst files in docs/source/examples. I extracted the code with a script and removed all the plotting/printing and otherwise superfluous code. I hope this covers the important parts of the Cython code base.

You can try out the benchmarking locally with:

pytest --benchmark-only --benchmark-autosave tests

Normal test runs now need the --benchmark-skip flag to avoid always running the benchmarks (they take 90 seconds on my laptop).

The benchmarks now run in the pipelines in tests/performance and produce an artifact: a .benchmarks folder with one subfolder per architecture/Python version, each containing a JSON file with the benchmark timings. Currently these benchmark artifacts are collected in the new https://gitlab.com/ifosim/finesse/finesse3-benchmark project for further analysis.
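For the downstream analysis, the saved artifacts can be read back with the standard library alone. A sketch, assuming the JSON follows pytest-benchmark's saved layout (a top-level "benchmarks" list whose entries carry "name" and "stats"); the sample data below is fabricated for illustration:

```python
import json
import tempfile
from pathlib import Path


def summarise(path):
    """Map each benchmark name to its mean runtime in seconds."""
    data = json.loads(Path(path).read_text())
    return {b["name"]: b["stats"]["mean"] for b in data["benchmarks"]}


# Tiny stand-in for one saved artifact, mirroring the assumed schema:
sample = {"benchmarks": [{"name": "test_cavity", "stats": {"mean": 0.012}}]}
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as fh:
    json.dump(sample, fh)

result = summarise(fh.name)
```

Comparing two pipeline runs then reduces to diffing two such dictionaries, which is the kind of processing the finesse3-benchmark project can build on.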

Edited by Miron van der Kolk
