Specify different nodeSelectors for the gitlab-runner pod and the spawned worker pods
As a DevOps engineer, I want only the worker pods to run on specific nodes, so that I can scale the node pool down to 0 when no builds are running.
Scenario
We have a Kubernetes cluster with two node pools. The first has lower-priced machines, runs 24/7, and scales down to a minimum of 1 node. The second has expensive machines with the resources required for faster builds; it should be used for GitLab Runner jobs only, and it should scale down to 0 nodes when no gitlab-runner worker pod is active.
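For illustration, the two pools could be distinguished by a node label such as the hypothetical type key used in the proposal below. On setups where the pool does not apply such a label automatically, it could be added manually (node names here are placeholders):

# Hypothetical label key/values; managed node pools usually set labels via their own config
kubectl label nodes small-node-1 type=small-node
kubectl label nodes expensive-node-1 type=expensive-node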
Problem
The initial gitlab-runner pod, which is registered with GitLab and spawns the worker pods, is scheduled using the same nodeSelector. It therefore runs on an expensive node and prevents that node from being scaled down.
Motivation
Saving money (> $100 / month)
Proposal
Currently there's a config property nodeSelector:
## Node labels for pod assignment
## ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/
##
nodeSelector: {}
# Example: The gitlab runner manager should not run on spot instances so you can assign
# them to the regular worker nodes only.
# node-role.kubernetes.io/worker: "true"
All pods of this chart are deployed to nodes carrying the configured label.
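For context, a value set here ends up as the standard Kubernetes nodeSelector in the manager pod's spec, roughly like this (a minimal sketch, not the chart's actual rendered template):

apiVersion: v1
kind: Pod
metadata:
  name: gitlab-runner
spec:
  # The scheduler only places this pod on nodes that carry all of the listed labels
  nodeSelector:
    type: "small-node"
  containers:
    - name: gitlab-runner
      image: gitlab/gitlab-runner:alpine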
There should be a second property, e.g. nodeSelectorWorker: {}. In our scenario I could then configure:
nodeSelector:
  type: "small-node"
nodeSelectorWorker:
  type: "expensive-node"
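Until such a property exists, a possible workaround is to keep the chart's nodeSelector pointing at the cheap pool and set the Kubernetes executor's node_selector in the runner configuration, which applies only to the spawned job pods. A sketch, assuming the chart version in use supports passing a config.toml template via runners.config:

# values.yaml (sketch)
nodeSelector:
  type: "small-node"          # manager pod stays on the cheap 24/7 pool

runners:
  config: |
    [[runners]]
      [runners.kubernetes]
        # Applies to the spawned worker (job) pods only
        [runners.kubernetes.node_selector]
          type = "expensive-node"

With this split, the manager keeps running on the small pool while the cluster autoscaler can remove all expensive nodes once no job pods remain.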