Allow CI jobs to be marked as utilizing external resources to avoid overlap
Problem to solve
In our deployment CI pipeline, we have separate CI jobs that build, test, and deploy the application. Part of our testing uses an external server that our app communicates with. At the beginning of the test job, the job resets the external server to a known state, then runs the tests and reports the results.
The issue is that there is only one external server (it's a closed-source application server, so we can't Dockerize or clone it). If multiple pipelines run at once in GitLab, then when a second instance of the "test" job starts, it resets the external server, which may be mid-test for a different pipeline.
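As a concrete illustration, the pipeline looks roughly like the sketch below (job names, stage names, and the reset script are hypothetical stand-ins, not taken from the actual project):

```yaml
stages:
  - build
  - test
  - deploy

build:
  stage: build
  script:
    - make build

# Resets the single shared external server before running tests.
# If two pipelines reach this job at once, the second job's reset
# wipes the state that the first job's in-flight tests depend on.
test:
  stage: test
  script:
    - ./reset-external-server.sh   # hypothetical reset script
    - make test

deploy:
  stage: deploy
  script:
    - make deploy
```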
What I believe would solve this issue is a way to indicate in the CI configuration file that a given job must run as a singleton. Then, if any runner is mid-execution of that job, a second run of the same job from a different pipeline would queue up (just as it does when no runners are available) until the first run completes.
The current workaround is manual: if the pipeline fails on that job, go look at the project's overall "Pipelines" list, check whether another pipeline was running at the same time, and if so, re-trigger the pipeline and hope that no one else re-triggers theirs at the same moment.
Target audience
Sasha (Software Developer)
Further details
Not likely an issue for small teams, but it becomes a bigger problem as the development team grows (more developers working on feature branches simultaneously, so more concurrent pipelines).
Proposal
Add a CI job configuration option meaning "this job cannot run concurrently across pipelines in the same project". Update the logic that hands jobs out to runners so that, when an upcoming job has this flag set and a job with the same name is already executing, the new job is queued until the running instance finishes.
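A sketch of how such an option might look in `.gitlab-ci.yml` (the keyword `singleton` is hypothetical, proposed here for illustration, not an existing GitLab CI keyword; the script lines are placeholders):

```yaml
test:
  stage: test
  # Hypothetical flag: only one instance of this job may run at a
  # time across all pipelines in the project. A second pipeline's
  # "test" job waits in the queue until this one finishes.
  singleton: true
  script:
    - ./reset-external-server.sh
    - make test
```

With this in place, concurrent pipelines would serialize on the "test" job while their other jobs (build, deploy) remain free to run in parallel.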
What does success look like, and how can we measure that?
Pipeline runs are deterministic; re-running a failed pipeline with no code changes should not "magically" turn into a success.