Shard CI job VMs into different projects
What?
Configure the docker+machine CI runner managers to provision job VMs in a different Google project from the runner managers, with the job VMs themselves no longer all living in the same project either.
That statement doesn't prescribe exactly how the sharding should be done: per runner-manager, per class (private, shared, gitlab-shared), or something else entirely.
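For illustration, here's a sketch of what pointing a runner manager's job VMs at a separate project might look like in the runner's config.toml. The project, zone, network, and subnet names are hypothetical placeholders, not our actual values; the machine option flags are the standard docker-machine Google driver options.

```toml
# Sketch of a docker+machine runner manager config (not our actual config).
# "gitlab-ci-jobs-private", "ci-jobs", and "private-runners" are placeholder names.
[[runners]]
  name = "private-runners-manager-X"
  executor = "docker+machine"
  [runners.machine]
    MachineDriver = "google"
    MachineOptions = [
      "google-project=gitlab-ci-jobs-private",  # job VM project, separate from the manager's own project
      "google-zone=us-east1-c",
      "google-network=ci-jobs",
      "google-subnetwork=private-runners",
      "google-use-internal-ip=true",
    ]
```

The key point is that `google-project` is just another machine option, so per-manager sharding is largely a config change, plus the cross-project IAM and networking setup to go with it.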
Why?
Anecdotes from those who've spoken to Google support about CI quotas indicate that Google would prefer we shard our usage across several projects. If we keep delaying this, we might end up in a tight spot regarding these quotas.
This might be a poorly-remembered anecdote, let me know if this reasoning is bogus.
The GCE quotas in question are regional, so we could get the same benefit by sharding into different regions. However, that has its own set of trade-offs, such as inter-region network charges and higher latency on CI<->application traffic. It might be worth discussing though.
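For anyone wanting to see where we stand against those regional quotas, they can be inspected per project from the CLI (assuming gcloud is authenticated and `PROJECT_ID` is replaced with the CI project in question):

```shell
# Dump the regional compute quotas (CPUS, IN_USE_ADDRESSES, etc.) for us-east1.
# The output includes a "quotas" list with limit and current usage per metric.
gcloud compute regions describe us-east1 --project=PROJECT_ID
```

Since the limits are per project and per region, either axis (more projects or more regions) buys headroom; the project axis avoids the inter-region charges mentioned above.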
Background
There is some back-of-a-napkin implementation plan in https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/9051#note_282690623.
It's worth noting that this interacts poorly with Cloud NAT, as we'd have to ask Google to move our existing unused contiguous IPs from the "main" CI project to any others if we want machines there to make use of NAT. However, the use of NAT in CI is indefinitely shelved (https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/9021), so perhaps we shouldn't worry about that.
Private, shared, and gitlab-shared runners are already provisioned into different subnets as part of this work, but they're all in the same project and in us-east1.
After gathering thoughts, it might make sense to promote this to an epic and create smaller issues before beginning work.
RFC caches,ci,queues team (@dawsmith @mwasilewski-gitlab @cmcfarland @ahanselka @msmiley @igorwwwwwwwwwwwwwwwwwwww @ggillies), @tmaczukin @erushton, @ansdval