Problem to solve
As pipelines grow more complex, a few related problems start to emerge:
- The staged structure, where all steps in a stage must complete before the first job in the next stage begins, causes arbitrary waits and slows things down
- Configuration for the single global pipeline becomes very long and convoluted, making it hard to understand
- Imports exacerbate the above item, and create the potential for namespace collisions where jobs are unintentionally duplicated
- Pipeline UX can become unwieldy with so many jobs and stages to work with
Additionally, sometimes the behavior of a pipeline needs to be more dynamic. Being able to choose whether or not to start sub-pipelines is powerful, especially when the YAML can be generated dynamically on the fly.
Navigation between child/parent pipelines must use our existing upstream/downstream visualization approach.
If a parent (originating) pipeline was able to trigger a set of concurrently running child pipelines, you could solve each of these problems:
- Child pipelines would execute each of their jobs still according to a stage sequence, but would be free to continue forward through their stages without waiting for unrelated jobs to finish.
- Configuration would be distributed out into each of the child pipeline configurations, reducing the cognitive load required to understand everything.
- Imports would be done at the child pipeline level, reducing the likelihood of collisions.
- Each pipeline would have only the relevant steps, making it easier to understand what's going on.
You also get some nice benefits from doing things this way:
- By using existing triggering functionality, you can take advantage of `only: changes`-type keywords to trigger pipelines only when certain files change (this is valuable for monorepos, for example; see the sketch after this list).
- By keeping the base (parent) `.gitlab-ci.yml` as a normal pipeline, it can have its own behaviors and sequencing in relation to triggers.
- Also by keeping it a normal pipeline, if someone doesn't use this feature, it just works exactly as you'd expect. No special configuration.
- By taking advantage of status attribution (https://gitlab.com/gitlab-org/gitlab-ee/issues/11238), the pipeline can wait for the child's success without any other special code or configuration, wait for it to complete without caring about the result, or just trigger it and not follow it at all.
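As a concrete illustration of the `only: changes` point above, here is a minimal sketch using existing GitLab CI syntax (the job name, command, and paths are illustrative):

```yaml
# Runs only when files under frontend/ change, e.g. in a monorepo.
build_frontend:
  script:
    - echo "building frontend"  # placeholder for a real build command
  only:
    changes:
      - frontend/**/*
```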
All of this will work with includes, so you can retain composability within the configuration.
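For example, a child pipeline's configuration could itself be composed with `include` (a minimal sketch; the file paths and job names are illustrative):

```yaml
# config/microservice_a.yml (hypothetical child pipeline configuration)
include:
  - local: ci/templates/common-jobs.yml  # assumed shared job definitions in this repo

test_microservice_a:
  stage: test
  script:
    - ./microservice_a/run_tests.sh  # placeholder test command
```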
This first issue will allow for one level of child pipelines (i.e., one parent, n pipelines) and child pipelines will not be able to trigger further downstream pipelines. We will address this in a fast-follow issue (https://gitlab.com/gitlab-org/gitlab-ce/issues/63566) where we will allow for some additional levels, with some limit to prevent infinite recursion.
Introduce a new syntax for triggering a child pipeline by pointing to a configuration YAML file:
```yaml
microservice_a:
  trigger:
    yaml_from_repo: config/microservice_a.yml
    strategy: depend
  only:
    changes:
      - microservice_a/*
```
For this example, a trigger job called `microservice_a` in the parent pipeline would use the YAML in `config/microservice_a.yml` as the child pipeline's configuration. It would only run if there are changes in the `microservice_a` folder, and the `depend` strategy treats the child as a dependency (i.e., the trigger job fails if the child pipeline fails; alternatively, the `wait` strategy would wait for it to finish, but not care whether it passes or fails).
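For comparison, the `wait` variant of the same trigger job might look like this (a sketch under the proposed syntax):

```yaml
microservice_a:
  trigger:
    yaml_from_repo: config/microservice_a.yml
    strategy: wait  # wait for the child pipeline to finish, but ignore its result
  only:
    changes:
      - microservice_a/*
```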
Important note: this syntax must also be able to point to a built artifact, not just a file in the repository. This will enable workflows with dynamic generation of the child pipeline contents, as follows:
```yaml
generate_yml:
  script:
    - generate_config > custom/config.yml
  artifacts:
    gitlab_ci_yml: custom/config.yml

run_custom_yml:
  dependencies:
    - generate_yml
  trigger:
    yaml_from_job: generate_yml
    strategy: depend
```
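The generated file is just an ordinary pipeline definition; for instance, `generate_config` might emit something like this (purely illustrative):

```yaml
# custom/config.yml (hypothetical output of generate_config)
test_shard_1:
  stage: test
  script:
    - ./run_tests.sh --shard 1

test_shard_2:
  stage: test
  script:
    - ./run_tests.sh --shard 2
```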
A pipeline triggered in this way would start running independently, but would have status attribution to a parent pipeline (via https://gitlab.com/gitlab-org/gitlab-ee/issues/11238). If you look at the parent pipeline's pipeline page, you must see that the child pipeline is running or completed, and if you click on it, it must take you to the child pipeline's pipeline page.
By implementing things in this way, we do not need to radically change the user experience within GitLab and it works largely in the way you'd expect. There is always a parent pipeline since child pipelines can only be triggered from a parent.
- The `latest` tag will not show on child pipelines
- Child pipelines will receive a new, dedicated tag indicating that they are child pipelines
- GitLab QA and gitlab-qa#6 (closed), where we would like to trigger a pipeline for GitLab CE or GitLab EE checks depending on the project that the triggering MR belongs to. This may also be an interesting feature for GitLab Omnibus, to build EE/CE images more easily. This is no longer the case with GitLab since we merged our projects, but the case is still illustrative.
- Child pipelines must not contribute to
- Child pipelines must not be used as the Merge Request status (again, only the parent pipeline matters, but since it has status attribution this must be fine),
- Likely we must show child pipelines as part of the parent pipeline's pipeline page,
- `variables:` of the trigger job must be passed to the child pipeline, allowing it to be fine-tuned (see the sketch after this list),
- The child pipeline must inherit all settings of the parent pipeline when running.
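For instance, fine-tuning a child pipeline via the trigger job's `variables:` might look like this (a sketch under the proposed syntax; the variable name is illustrative):

```yaml
microservice_a:
  variables:
    DEPLOY_ENVIRONMENT: staging  # hypothetical variable forwarded to the child pipeline
  trigger:
    yaml_from_repo: config/microservice_a.yml
    strategy: depend
```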
Permissions and Security
Permissions and user context of the child pipeline will follow the parent pipeline, so there is no new security implication with this feature.
What does success look like, and how can we measure that?
Links / references
- `pipeline` to include w/ local variables: https://gitlab.com/gitlab-org/gitlab-ce/issues/56214
- Slack Channel (Internal Link only) #f_child_par_pipelines has been defined.