Support multiple runners sections in the configuration template
Status Update: 2025-07-21
We recommend using the GitLab Runner Operator for your current runner installation needs. We have recently resolved several Operator issues and remain committed to its ongoing maintenance and improvement. Our confidence comes from using Operator internally to manage our own Runner CI/CD workloads. Operator also fully supports the new authentication tokens: our own Runner Managers are deployed through Operator using auth tokens, which gives us direct insight into both the platform's performance and the auth token implementation.
Operator will deploy a separate pod for each manager instance, which effectively supports separate configurations as previously requested. The only consideration is that this approach doesn't minimize the number of deployments—which was the original goal outlined in this issue—since each manager requires its own pod.
This means we will not implement a solution to the problem as initially outlined, but instead recommend that customers use the Operator and our example deployment patterns to address their use cases.
Overview
Customers using GitLab Runner on Kubernetes need to be able to easily install multiple runners on the K8s coordinator pod to use cluster resources efficiently.
Context
"The gitlab-runner is able to manage multiple [[runners]] at once. Having a single instance of gitlab-runner for many [[runners]] will help to optimize the usage of the K8S cluster: a single gitlab-runner knows exactly how many jobs are currently running, even across multiple registrations.
Here is the scenario: I have a single K8S cluster and I wish to offer a gitlab-runner for 3 groups of projects.
Manually, I would run a single gitlab-runner, with a concurrency=10 (for any reason, I know that 10 jobs max is good for the cluster). And then, the gitlab-runner will pick the jobs from the 3 groups, up to 10 jobs in parallel. It could be 6 of the first group, 3 of the second and one of the third.
Using Gitlab Runner Operator or the current Helm Chart, I cannot optimize like this. I have to choose how to split the "10 jobs" limit.
If I instantiate 3 gitlab-runner with concurrency=10 each, it could be possible that 30 jobs are picked by the runners, and the cluster will not be able to support them.
If I instantiate 3 gitlab-runner with concurrency=3 each, if only one group has more than 3 jobs to process at a given time, they will be queued while the cluster has remaining slots.
PS: For such a design, think of the ingress-nginx controller: a controller with a single nginx instance, managing multiple Kind=Ingress resources in order to generate a single config file. I think it is clearly possible to do the same thing here."
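The scenario above corresponds to a single runner manager whose `config.toml` has one global `concurrent` limit and one `[[runners]]` section per group. A minimal sketch follows; the names, URL, and tokens are placeholders, not real values:

```toml
# Global limit: at most 10 jobs run concurrently across all [[runners]] sections.
concurrent = 10

[[runners]]
  name = "group-a"                        # placeholder
  url = "https://gitlab.example.com"      # placeholder
  token = "TOKEN_GROUP_A"                 # placeholder
  executor = "kubernetes"

[[runners]]
  name = "group-b"
  url = "https://gitlab.example.com"
  token = "TOKEN_GROUP_B"
  executor = "kubernetes"

[[runners]]
  name = "group-c"
  url = "https://gitlab.example.com"
  token = "TOKEN_GROUP_C"
  executor = "kubernetes"
```

With this layout, the manager picks up to 10 jobs in total, in whatever split the three groups happen to need at the time (e.g. 6 + 3 + 1).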
Proposal
Today, with either Helm or Operator, we only support installing one Runner coordinator per K8s coordinator pod. With this feature, users could define multiple runner sections in the configuration template.
Details for co-create - revised (2024-10-21)
Open technical implementation questions
- Do we need to support one or many authentication tokens? The current thinking is to support multiple authentication tokens, as the registration process does not allow using the same token on the same runner host for multiple runner entries.
- Do we extend the current runner registration command, `gitlab-runner register`, with the `--template-config` flag by improving configuration template management? Right now we explicitly generate a failure if there is more than one runner defined in the template.
- Assuming we extend the current runner registration command, which global parameters need to remain configurable, and how do we handle conflicting values?
- We need to validate that the Mergo library that we use for merging can support the following options:
  - Override the current values with the newly added values.
  - No override.

  Note: the reason for both override and no-override capabilities is to let users configure the merge as they prefer. The idea is that we offer a default value, so a user either accepts the default or has a simple option to override the default values set in the template.
- Matching: which fields do we use to match the unique sections to be updated?
- Do we want to support multiple executors? The current thinking is that we only support one executor defined in the config.toml when using this feature.
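The override versus no-override question above can be made concrete with a small sketch. This is not the Mergo API itself, just a stdlib illustration of the two behaviors being asked about (Mergo's default fills only missing values, while its `WithOverride` option lets the source replace existing ones); `mergeConfig` and the sample keys are hypothetical:

```go
package main

import "fmt"

// mergeConfig merges src into dst and returns the result.
// override=true  : values in src replace existing values in dst
//                  (analogous to Mergo's WithOverride behavior).
// override=false : dst keeps its existing values; only keys missing
//                  from dst are filled in (analogous to Mergo's default).
func mergeConfig(dst, src map[string]string, override bool) map[string]string {
	out := make(map[string]string, len(dst)+len(src))
	for k, v := range dst {
		out[k] = v
	}
	for k, v := range src {
		if _, exists := out[k]; !exists || override {
			out[k] = v
		}
	}
	return out
}

func main() {
	defaults := map[string]string{"executor": "kubernetes", "limit": "10"}
	template := map[string]string{"limit": "5", "environment": "CI=true"}

	fmt.Println(mergeConfig(defaults, template, true))  // template's "limit" wins
	fmt.Println(mergeConfig(defaults, template, false)) // defaults' "limit" is kept
}
```

Offering a sensible default plus a single switch between these two modes would give users the simple override choice described in the note above.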
The idea behind this solution is that, by implementing it in Runner core, it is automatically supported by both Helm and Operator.
Implementation details
Note - the first task is to define the implementation details. The open questions listed above are meant to guide a developer in defining the solution. For reference, there is a related technical discussion here, where several of these questions are discussed.
- {placeholder}
Estimated timeline for co-create
| Task | Estimated duration | Owner |
|---|---|---|
| Write implementation proposal. | 2 wks | Assigned customer developer |
| Review and approve implementation proposal. | 4 wks | Runner core engineers |
| Write code + review and approval of MRs. | 8-12 wks | Assigned customer developer + Runner core engineers |