Add a new ResourceGroup process_mode: "newest_ready_first"
What does this MR do and why?
Add a new ResourceGroup process_mode: "newest_ready_first"
The problem with newest_first in situations with frequent merges and continuous deployment is that the "newest" pipeline job can end up forever waiting on the next job to be added, so nothing ever deploys.
This new mode will still grab the newest job, but it will not wait for created or scheduled jobs; it only considers jobs in the waiting_for_resource state. This means that even in the face of lots of new jobs that are (for example) still in the created state, you will still get deploys. When multiple pipelines reach waiting_for_resource while another job holds the resource, only the newest one actually runs.
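To make the difference concrete, here is a minimal sketch (not GitLab's actual implementation; `Job`, the method names, and the status strings used for `created`/`scheduled` are illustrative stand-ins) of how the two selection strategies pick the next job:

```ruby
# Minimal stand-in for a CI job; higher id means newer.
Job = Struct.new(:id, :status)

# newest_first: picks the newest job among everything still upcoming,
# including jobs that have not yet reached waiting_for_resource. A
# newly created job therefore keeps displacing the current candidate,
# and the resource group can wait indefinitely under frequent merges.
def newest_first(jobs)
  upcoming = jobs.select { |j| %w[created scheduled waiting_for_resource].include?(j.status) }
  upcoming.max_by(&:id)
end

# newest_ready_first (proposed): only considers jobs that are actually
# ready, i.e. already in waiting_for_resource; newer created/scheduled
# jobs no longer block the pick.
def newest_ready_first(jobs)
  ready = jobs.select { |j| j.status == "waiting_for_resource" }
  ready.max_by(&:id)
end

jobs = [
  Job.new(1, "waiting_for_resource"),
  Job.new(2, "waiting_for_resource"),
  Job.new(3, "created"), # newest, but earlier stages still running
]

newest_first(jobs).id       # => 3 (waits on a job that is not ready yet)
newest_ready_first(jobs).id # => 2 (newest job that can start right now)
```

Under newest_first the group sits on job 3 until it becomes ready (by which time job 4 may exist); under the proposed mode, job 2 runs now and jobs 1 and 3 are handled per the usual skipping rules.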
I decided to add a new process_mode rather than modify the existing newest_first one, as I believe that would be a breaking change, and there are almost certainly use cases for both modes. For example, imagine a scheduled job that publishes the documentation every 3 hours. There, the intended behaviour could well be to pick up any merges completed before the job is triggered, regardless of whether their own pipelines have finished. I'm sure there are plenty more examples; some are hinted at in these linked issues (thanks @mfanGitLab for finding them):
- #450708 (closed)
- #362844
- https://stackoverflow.com/questions/75287660/why-do-jobs-using-gitlab-resource-groups-not-get-started-when-they-are-ready
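For context, a resource group is declared per job in `.gitlab-ci.yml`, and the process mode is then switched per resource group via the existing resource group REST API. The `newest_ready_first` value in the sketch below is the mode proposed by this MR, not something available upstream today; the project ID and token are placeholders.

```yaml
# .gitlab-ci.yml: deploy jobs across pipelines share one resource group
deploy:
  stage: deploy
  script: ./deploy.sh
  resource_group: production

# The process mode is then set per resource group through the API, e.g.:
#   curl --request PUT --header "PRIVATE-TOKEN: <token>" \
#        --data "process_mode=newest_ready_first" \
#        "https://gitlab.example.com/api/v4/projects/<id>/resource_groups/production"
```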
How to set up and validate locally
I'm honestly not sure how best to validate this locally and would appreciate advice.
MR acceptance checklist
Evaluate this MR against the MR acceptance checklist. It helps you analyze changes to reduce risks in quality, performance, reliability, security, and maintainability.
Related to #450708 (closed): this partially addresses the problem I raised in the comments on that issue, though not the original issue itself.