refactor: improve code quality tools
## Explanation of changes
Improving our general code quality pipeline/tooling.
### Changes in tooling
- Greatly expanded the number of `ruff` rules included
- Removed per-file-ignores that were no longer applicable
- Removed unnecessary/outdated `# noqa` statements (automatic)
- Fixed (often accidental) blanket `# noqa` comments, i.e. those without a specific rule
- Performed several "modernisation" changes, e.g. f-strings with conversion fields (`!`) (automatic)
- Added more parameter names to the ignore list of `pep8-naming` and removed the corresponding `# noqa` comments
- Sorted the entries in `__all__` alphabetically (automatic)
- Introduced the `typos` spell checker tool to catch misspelled identifiers
  - Replaced all spelling errors reported by the tool
  - Added several exceptions to the `typos` ignore list for technical terms
- Introduced the `commitizen` commit message checker (a `.pre-commit-config.yaml` sketch follows this list)
  - Can be enabled by running the command: `pre-commit install --hook-type commit-msg`
  - Commit messages should now follow the Conventional Commits standard, or `git commit` will fail
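As a rough illustration, the two new hooks could be wired into `.pre-commit-config.yaml` along these lines (the `rev` values are placeholders, not the revisions pinned in this MR):

```yaml
repos:
  # Spell checker for identifiers, comments, and documentation.
  - repo: https://github.com/crate-ci/typos
    rev: v1.0.0  # placeholder revision
    hooks:
      - id: typos

  # Conventional Commits enforcement; runs on the commit-msg stage,
  # hence the extra `pre-commit install --hook-type commit-msg`.
  - repo: https://github.com/commitizen-tools/commitizen
    rev: v3.0.0  # placeholder revision
    hooks:
      - id: commitizen
        stages: [commit-msg]
```

Incidentally, this MR's own title (`refactor: improve code quality tools`) already follows the Conventional Commits format.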
### New pipeline stage: Code quality report
There are some ruff rules that suggest good practices but are either too opinionated or too difficult to address in the current codebase; one example is the McCabe complexity index. Unfortunately, ruff itself does not allow selectively failing on some rules while merely "warning" on others: it is fail for all, or fail for none (`--exit-zero`).
Thus, I have devised a way to get around this using the `--config` CLI parameter to extend the rules specified in `pyproject.toml` with a set of rules placed under a custom section called `[tool.ruff-warnings.lint]` (it is not possible to reuse the `ruff` namespace). Inside `.gitlab-ci.yml` I wrote my own stage with a small inline Python script that loads the "warning" rules into an environment variable. Then I run `ruff` with the default rules extended by the extra set; the job fails when violations are found, but the pipeline is allowed to continue.
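For illustration, the custom section could look like the sketch below; the key name mirrors ruff's own `extend-select`, and the rule codes are examples, not the actual selection:

```toml
# Rules to report as "warnings" rather than enforce (codes are illustrative).
[tool.ruff-warnings.lint]
extend-select = [
    "C901",    # McCabe complexity
    "PLR0912", # too many branches
]
```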
The output is set to the `gitlab` format and saved to a JSON file, which is then uploaded as a code quality artifact. This makes all reported violations visible both on the summary tab (below the Approve button) and on the diff tab next to the affected line number, by clicking on the small coloured shape (e.g. a triangle pointing down). These findings are to be read as "you should eventually tackle this at some point" and are, of course, open to discussion.
ruff doesn't seem to have an option to give these findings a severity other than "Major" at this time.
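Putting it together, the stage could look roughly like the following sketch. Job name, stage name, and file names are illustrative, `--extend-select` stands in here for the `--config`-based extension described above, and Python >= 3.11 is assumed for `tomllib`:

```yaml
ruff-warnings:
  stage: analyze        # placeholder stage name
  allow_failure: true   # the job may fail without blocking the pipeline
  script:
    # Collect the "warning" rule codes from the custom pyproject.toml section.
    - |
      WARN_RULES=$(python - <<'EOF'
      import tomllib
      with open("pyproject.toml", "rb") as f:
          config = tomllib.load(f)
      print(",".join(config["tool"]["ruff-warnings"]["lint"]["extend-select"]))
      EOF
      )
    # Run ruff with the default rules extended by the warning set.
    - ruff check --extend-select "$WARN_RULES" --output-format gitlab . > ruff-warnings.json
  artifacts:
    when: always        # upload the report even when the job fails
    reports:
      codequality: ruff-warnings.json
```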
### Miscellaneous
- Reformatted the `pyproject.toml` file with two spaces per "tab"
- Added the `**zhinst**` glob to the linting ignores to avoid touching anything there, since it is "frozen" (see the sketch below)
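One way to express that exclusion is via ruff's `extend-exclude` option; whether the actual change uses this option or a per-file ignore is a detail of the diff:

```toml
[tool.ruff]
# Keep the frozen zhinst code out of linting entirely.
extend-exclude = ["**zhinst**"]
```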
### Potential improvements (could do after merge)
- Add a parse-and-replace step prior to uploading the code quality artifact, to customise the severity of violations (see the sketch after this list).
- Add a script and/or manual pre-commit hook for `ruff` runs with the "warnings" enabled.
- Make the "warnings" violations visible as log output, not just as an artifact?
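A possible shape for that severity step, assuming the artifact file name from the sketch above (GitLab code quality reports are a JSON array of findings, each carrying a `severity` field):

```python
import json
from pathlib import Path

# Placeholder file name; should match whatever the CI job writes.
report = Path("ruff-warnings.json")

findings = json.loads(report.read_text())
for finding in findings:
    # ruff currently reports everything as "major"; downgrade as desired.
    finding["severity"] = "minor"

report.write_text(json.dumps(findings, indent=2))
```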
## Motivation of changes
Discussed in a Quantify meeting, tracked by QTFY-702.
## Merge checklist
See also the merge request guidelines.
- [ ] Merge request has been reviewed (in-depth by a knowledgeable contributor), and is approved by a project maintainer.
- [ ] New code is covered by unit tests (or N/A).
- [ ] New code is documented and docstrings use numpydoc format (or N/A).
- [ ] New functionality: considered making private instead of extending public API (or N/A).
- [ ] Public API changed: added `@deprecated` and an entry in deprecated code suggestions (or N/A).
- [ ] Newly added/adjusted documentation and docstrings render properly (or N/A).
- [ ] Pipeline fix or dependency update: post in the `#software-for-developers` channel to merge `main` back in or update local packages (or N/A).
- [ ] Tested on hardware (or N/A).
- [ ] `CHANGELOG.md` for breaking changes and `AUTHORS.md` have been updated (or N/A).
- [ ] Update Hardware backends documentation if the backend interface changed (or N/A).
- [ ] Check whether performance is significantly affected by looking at the Performance metrics results.
- [ ] Windows tests in CI pipeline pass (manually triggered by maintainers before merging).
  - Maintainers: do not hit Auto-merge; we need to check actively, as manual tests do not block the pipeline.
For reference, the issues workflow is described in the contribution guidelines.