Cost estimation for CI runs
This idea arose out of the Build FGU on 1/31, where @marin presented his finding that each Omnibus build we create costs us €2, with the Docker container build incurring additional costs. This was part of the impetus to optimize our builds, both to reduce the time they take and the costs they incur.
I wonder how many other companies are really aware of the costs of their CI:
- If they have shared infrastructure, backing out individual project usage is probably non-trivial
- While build minutes may be an approximation within a project, jobs that run on different machine types, or that use more than one machine, would throw these numbers off.
- A more accurate estimation would factor in the actual CPU/Memory utilization of jobs, as a lot of these are likely running in shared environments (Docker, VM, Kubernetes) and therefore actual usage matters. (e.g. a job running a small microservice would consume less CPU/Memory than a job building Omnibus GitLab.)
Given that GitLab has integrations with Kubernetes, performance metrics, and its own CI system, we may be uniquely positioned to build a feature around this that automatically generates pretty good cost estimates for each CI job.
For example, we could track the CPU/Memory used during a particular job, tie that back to the hardware it ran on (e.g. the node type), and then generate a cost estimate. This could then be rolled up based on the frequency each job is run, to provide summary-level statistics.
- Are there CI jobs consuming significant resources that could stand to be optimized? Or perhaps be run more judiciously, or flagged as manual?
- For shared CI pools, which projects consume the most resources and cost?
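The estimation and roll-up described above could be sketched roughly as follows. This is a minimal illustration, not a proposed implementation: the node types, hourly rates, utilization figures, and job data are all made-up assumptions standing in for whatever the Kubernetes/metrics integration would actually report.

```python
from collections import defaultdict

# Assumed hourly rates per node type, for illustration only.
NODE_HOURLY_RATE_EUR = {
    "standard-4": 0.19,
    "highmem-8": 0.47,
}

def estimate_job_cost(node_type: str, duration_s: float, avg_utilization: float) -> float:
    """Estimate the cost of one job run.

    On shared infrastructure a job only accounts for the fraction of the
    node it actually used, so we scale the node's hourly rate by the job's
    average CPU/Memory utilization (0.0-1.0).
    """
    hourly_rate = NODE_HOURLY_RATE_EUR[node_type]
    return hourly_rate * (duration_s / 3600) * avg_utilization

def rollup_costs(job_runs):
    """Sum estimated cost per job name across many runs.

    job_runs: iterable of (job_name, node_type, duration_s, avg_utilization).
    Running this over a period's worth of job data gives the summary-level
    statistics mentioned above, which could then be ranked per job or per
    project to spot the big consumers.
    """
    totals = defaultdict(float)
    for name, node_type, duration_s, utilization in job_runs:
        totals[name] += estimate_job_cost(node_type, duration_s, utilization)
    return dict(totals)

# Example with fabricated numbers: two heavy Omnibus-style builds and one
# light microservice job.
runs = [
    ("omnibus-build", "highmem-8", 1800, 0.8),
    ("omnibus-build", "highmem-8", 1800, 0.8),
    ("microservice-test", "standard-4", 300, 0.3),
]
costs = rollup_costs(runs)
```

The utilization scaling is the key design choice: billing by wall-clock minutes alone would charge the small microservice job as if it saturated its node, which is exactly the inaccuracy noted in the bullets above.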