Allow GitLab Runner Helm Chart to specify a per-runner tag list
Description
When deploying GitLab Runner via a Helm chart, tags can currently only be specified through the `runners.tags` value.
However, it is possible, and even suggested, to use configurations with "one runner and multiple workers".
As the distinction between "runner" and "worker" is not entirely clear, I'll add that I read the above link as "one install of the gitlab-runner product (the runner), and multiple runners as registered by the GitLab instance (the workers)".
When registering workers one at a time, this is not a problem, as each of them gets added with its own set of tags.
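For reference, registering workers one at a time outside the chart lets each one carry its own tag list. A sketch using the standard `gitlab-runner register` flags, where the URL, token, names, and tags are placeholders:

```shell
# Each `gitlab-runner register` invocation registers one worker with its
# own tags and run-untagged setting (placeholder URL, token, and tags).
gitlab-runner register \
  --non-interactive \
  --url "https://gitlab.my-company.com" \
  --registration-token "my-instance-token" \
  --executor "kubernetes" \
  --name "my-first-worker" \
  --tag-list "kubernetes,high-memory" \
  --run-untagged="false"
```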
Let's now consider a `values.yaml` file with the following content:
```yaml
gitlabUrl: "gitlab.my-company.com"
runnerRegistrationToken: "my-instance-token"

runners:
  name: "gitlab-runners-kubernetes"
  tags: "kubernetes"
  runUntagged: false
  config: |
    [[runners]]
      name = "my-first-worker"
      # Worker-specific memory limits, and other configuration
      [runners.kubernetes]
        privileged = true

    [[runners]]
      name = "my-second-worker"
      # Worker-specific memory limits, and other configuration
      [runners.kubernetes]
        privileged = true
```
We encounter the following problems:
- It is impossible to set the `runners.tags` and `runners.runUntagged` values on a per-worker basis. This makes it impossible to register, say, 3 workers with different memory limits and different tags, all available at the instance level for projects to pick between.
- The `runners.name` value (which is really the worker name) and the `runnerRegistrationToken` value act as duplicates of, respectively, `runners[].name` and `runners[].token` in the inline `config.toml` file. Other duplicates such as these have been deprecated, yet these 2 have not, causing undocumented behavior when both are set (what is the order of precedence?).
Available workarounds
As far as I know, there are only 2 available workarounds:
- Install multiple releases of the GitLab Runner Helm chart (as it is the only way to register new workers one at a time using Helm). This is obviously unwanted, the whole idea being to have a single point of management for our different runner configurations.
- Tag the workers after the install. This is doable, though unnecessarily complicated: if we do not add tags and set the workers not to run untagged jobs, they are essentially stale until tagged. This also adds a step to the configuration and reduces the overall ease of automatically deploying and managing a runner fleet.
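The second workaround can be scripted against GitLab's REST API, which accepts a `tag_list` parameter on `PUT /runners/:id`. A minimal sketch, where the URL, admin token, and runner id are hypothetical placeholders:

```python
# Sketch of the "tag the workers after the install" workaround, driving
# GitLab's REST API (PUT /runners/:id accepts a tag_list parameter).
import json
import urllib.parse
import urllib.request


def build_tag_update(gitlab_url, runner_id, tags):
    """Return the (url, form_body) pair for GitLab's PUT /runners/:id call."""
    url = f"{gitlab_url}/api/v4/runners/{runner_id}"
    body = urllib.parse.urlencode({"tag_list": ",".join(tags)})
    return url, body


def tag_runner(gitlab_url, admin_token, runner_id, tags):
    """Update an already-registered worker's tag list after the Helm install."""
    url, body = build_tag_update(gitlab_url, runner_id, tags)
    req = urllib.request.Request(
        url,
        data=body.encode(),
        headers={"PRIVATE-TOKEN": admin_token},
        method="PUT",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

This still has to run as a post-install step (e.g. a Helm hook or CI job), which is exactly the extra management burden described above.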
Possible fixes
- Make `runners.tags` and `runners.runUntagged` per-worker. This is not trivial: as per the Helm values deprecation issue, the YAML part is unaware of how many workers are configured in the inlined `config.toml`.
- Essentially go the complete opposite way of the previously-mentioned deprecation, and map all the `config.toml` values to new entries in the `values.yaml` file. This solves the problem of the above fix, as the `values.yaml` file would now be aware of how many workers are configured; however, it makes configuration via Secret or ConfigMap harder.
- Something in between, such as removing the `runners.name` YAML value, keeping only the one in the inline `config.toml` file, and providing a way to map names to tags in the `values.yaml` file. This is, from an outside perspective, the easiest to implement and the most conceptually in line with the deprecation issue.
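A minimal sketch of what the third option could look like, assuming a hypothetical `workerTags` map (this key does not exist in the chart today; its keys would have to match the `[[runners]]` names in the inline `config.toml`):

```yaml
runners:
  config: |
    [[runners]]
      name = "my-first-worker"
      [runners.kubernetes]
        privileged = true

    [[runners]]
      name = "my-second-worker"
      [runners.kubernetes]
        privileged = true

  # Hypothetical key, not part of the chart today: maps each worker
  # name from the inline config.toml to its own tags and runUntagged.
  workerTags:
    my-first-worker:
      tags: "kubernetes,high-memory"
      runUntagged: false
    my-second-worker:
      tags: "kubernetes"
      runUntagged: true
```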
Edited by Clovis Dugué