Add status checking behaviors to pipeline triggers
Problem to Solve
Our current implementation of multi-project pipelines is quite limited. One of the most important missing pieces is a feedback mechanism: we can trigger an external pipeline using the multi-project pipelines feature, but the triggering pipeline does not wait for the external pipeline's status, so the first pipeline gets no feedback about whether the external pipeline succeeded or failed.
Right now we need custom scripts that implement a polling loop against the API, which is not a great solution, especially when we want to build slightly more complex multi-project pipelines with upstream feedback.
There are use cases where you want a pipeline to wait for a sub-pipeline to finish, and others where you don't. Even when you do wait for it to finish, there are cases where you want the parent pipeline to fail if the sub-pipeline failed, and others where you don't.
Failure attribution, mentioned in "MVP" above, is a pretty critical feature. It should really have gone hand-in-hand with triggered builds. From what I remember, Bamboo has supported triggered builds, and more importantly, evaluating the success of triggered builds, for years.
I'm specifically interested in failure attribution for gitlab-ee#39640. I even wrote a small tool called pipeline-commander to trigger the build from within a pipeline. That seems to be far more complicated than letting GitLab's job executor handle it though.
Ideally, we would have a first-class YAML field for triggering builds from a job, accepting:
- some kind of authorization token (mandatory)
- a project number or namespace/path encoding (mandatory)
- a git reference, e.g. tag, branch, or commit (optional, defaulting to the latest commit on master)

In all of those cases it should be possible to use environment variables to fill in values.
Also, to be very clear, the absolute best feature that this would enable is automatic tagging of (candidate?) releases after all testing (including integration) had passed successfully!
That's kind of the Holy Grail of DevOps, no?
- execution of the current job should be suspended, allowing the current runner to execute other jobs
- the triggered pipeline begins execution using any available runner (not necessarily the one that was suspended)
- once the triggered pipeline has finished running, another GitLab runner resumes execution of the suspended job
- if all jobs in the triggered pipeline were successful, then the pipeline status should report success
- otherwise, the pipeline status should report failure (as with any POSIX process, a failing job exits with a non-zero return value; it would be incredibly useful for GitLab to report that exit code as well, potentially on a per-job basis)
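The first-class YAML field requested in the comment above (token, project, git reference, with environment-variable interpolation) could be sketched roughly as follows. This is only an illustration of the request, not an agreed syntax; the field names and job name are assumptions:

```yaml
# Hypothetical sketch of a first-class trigger field.
# Field names (token, project, ref) and the job name are illustrative only.
trigger_downstream:
  trigger:
    token: $DOWNSTREAM_TRIGGER_TOKEN  # authorization token (mandatory)
    project: $DOWNSTREAM_PROJECT      # project ID or namespace/path (mandatory)
    ref: $DOWNSTREAM_REF              # tag, branch, or commit (optional)
```

Each value is filled from an environment variable, as the comment requests.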
Implement first-class multi-project pipeline triggers with upstream feedback.
The trigger job is the key element here. It is a job that acts as a pointer to another project's pipeline (the downstream pipeline).
A trigger job can behave in 2 ways:
- no strategy has been set
- the depend strategy has been set
Additionally, this job can either get the status success or failed. If it fails because the downstream pipeline failed, it should receive a dedicated failure state similar to script failure, called downstream pipeline failure. This behaves like a script failure in that it does not need an additional explanation called out on the job detail page.
No strategy has been set: the job will start the downstream pipeline, but will not wait for it, and will never act on its status (this is the existing behavior). The job does not represent the status of the downstream pipeline; it simply succeeds immediately after starting it. This is the default behavior if no strategy has been set.
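A fire-and-forget trigger job under this default behavior could look like the following sketch (the job name is illustrative):

```yaml
# No strategy: the job succeeds immediately after starting the
# downstream pipeline and never reflects that pipeline's status.
notify_downstream:
  trigger:
    project: my/project
```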
The depend strategy has been set: the job will start the downstream pipeline, wait for it to finish, and represent the status of the downstream pipeline. The current stage will therefore wait for this job before the pipeline proceeds to the next stage, and the pipeline continues only if the job succeeds, just like any other job.
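A blocking trigger under the proposed depend strategy could be sketched as (the job name is illustrative):

```yaml
# depend strategy: the stage waits for my/project's pipeline to finish,
# and this job mirrors its final status.
run_integration_tests:
  trigger:
    project: my/project
    strategy: depend
```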
allow_failure: true: if the trigger job also has this option set, the current pipeline can proceed regardless of the downstream status. The job will still represent the status of the downstream pipeline, but a failure will be shown as "failed, but allowed to fail". If no strategy has been set, this option has no effect, since the trigger job always succeeds once it has triggered the downstream pipeline.
```yaml
first_job:
  allow_failure: true
  trigger:
    project: my/project
    strategy: depend

second_job:
  allow_failure: false
  trigger:
    project: my/project
    strategy: depend

third_job:
  trigger:
    project: my/project
    strategy: none

fourth_job:
  trigger:
    project: my/project
```
What does success look like, and how can we measure that?
Upstream pipelines can now wait for downstream pipelines to finish and change their own status accordingly.