Should installing a runner for a project disable shared runners?
Problem to solve
When installing a runner for a project, especially from a Kubernetes cluster, people expect the project to only use that runner, but if shared runners are configured for the instance, then jobs may run on either the project runner or the shared runner.
Further details
While a plain k8s runner won't behave any differently from a shared runner, shared runners do have some differences, such as memory limits and variable queue delays. The queue delays are presumably a non-issue: if the shared runners are delayed, the project runner should pick up jobs first. But even if there's no real downside to leaving shared runners enabled, it feels wrong and unexpected. Especially when giving a demo, if you show a job log and notice that a shared runner picked up the job, it leads to confusion.
Proposal
- Automatically disable shared runners when adding a project runner, at least from the cluster configuration page.
Note: a better option might be to prioritize the project runner over the shared runner, but leave the shared runner enabled just in case. Prioritizing runners might be really hard with our architecture.
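As a rough sketch of what the automatic step would do, a project's shared runners can already be disabled through the GitLab REST API via the project's `shared_runners_enabled` attribute. The host, token variable, and project ID below are placeholders:

```shell
# Sketch: disable shared runners for a single project via the GitLab API.
# "gitlab.example.com", $GITLAB_TOKEN, and project ID 42 are placeholders.
curl --request PUT \
  --header "PRIVATE-TOKEN: $GITLAB_TOKEN" \
  "https://gitlab.example.com/api/v4/projects/42?shared_runners_enabled=false"
```

The cluster-integration code could issue the equivalent internal update after it registers the project runner, so the behavior users expect happens in one step.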
What does success look like, and how can we measure that?
Users are no longer confused about which runner picks up a project's jobs after they install a project runner; fewer support requests and demo surprises about shared runners unexpectedly running jobs.