Introduce generic metric measurement (credits) for CI runners usage
Problem to solve
Today we have a single metric for Runner Minutes: wall-clock minutes.
With different machine types we might want different tiers to accommodate the different infrastructure costs of running builds.
Intended users
Proposal
I was thinking about runner minutes and different machines. We might offer different types of machines, like Windows/Linux/OSX, or different sizes. We currently use minutes to model the quotas. One simple way to provide different cost tiers within the current model is to say that a minute is an abstract metric, a "dollar minute", similar to CPU time: you can use more or less CPU time depending on the number of available cores or the quotas granted, and that CPU time is disconnected from real time. We could achieve the same with our minutes.
For example:
- Linux: 1 minute on a Linux machine costs 1 dollar minute of quota,
- Windows: 1 minute on a Windows machine costs 2 dollar minutes of quota,
- OSX: 1 minute on an OSX machine costs 3 dollar minutes of quota,
- High-mem: 1 minute on a high-mem Linux machine costs 2 dollar minutes of quota.
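The tiering above can be sketched as a simple lookup that converts wall-clock minutes into quota "dollar minutes". This is only an illustration of the proposal; the function name and the type keys are hypothetical, and the scale values are the ones listed above.

```python
# Hypothetical cost scales per runner type, taken from the proposal's list.
SCALE = {
    "linux": 1,
    "windows": 2,
    "osx": 3,
    "linux-high-mem": 2,
}

def quota_cost(runner_type: str, wall_clock_minutes: float) -> float:
    """Convert wall-clock minutes on a runner into dollar minutes of quota."""
    return SCALE[runner_type] * wall_clock_minutes
```

Under these scales, a 10-minute build on an OSX runner would consume 30 dollar minutes of quota, while the same build on a Linux runner would consume only 10.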
With that we could easily implement different tiers based on the type of machine used, by adding ci_runners.minute_scale to define how much quota each minute costs for the given runner, and maybe renaming the minutes so they no longer refer to wall-clock minutes, to something like "minute dollars". Simple and yet effective, with minimal effort, to achieve tiering. We could also allow minute_scale=0 to say that a machine is free to use and doesn't consume any quota.
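A minimal sketch of how the proposed ci_runners.minute_scale could drive quota deduction, including the minute_scale=0 free case. The Runner and Namespace classes and the charge function are hypothetical stand-ins, not existing GitLab code.

```python
from dataclasses import dataclass

@dataclass
class Runner:
    # Hypothetical counterpart of the proposed ci_runners.minute_scale column.
    # 0 means the runner is free to use and consumes no quota.
    minute_scale: float

@dataclass
class Namespace:
    remaining_quota: float  # measured in "minute dollars", not wall-clock minutes

def charge(namespace: Namespace, runner: Runner, wall_clock_minutes: float) -> None:
    """Deduct the scaled cost of a build from the namespace's quota."""
    namespace.remaining_quota -= runner.minute_scale * wall_clock_minutes
```

With minute_scale=0 the deduction is always zero, so no special-casing is needed for free runners.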