Option to report a pipeline as passed when it is blocked by subsequent manual stages
### Problem to solve
<!-- What problem do we solve? -->
It is not possible to mark a pipeline as `success` when it is blocked by later stages that must be triggered manually (e.g. deploys). That is misleading for developers, because from their point of view the pipeline _is_ successful.
### Intended users
<!-- Who will use this feature? If known, include any of the following: types of users (e.g. Developer), personas, or specific company roles (e.g. Release Manager). It's okay to write "Unknown" and fill this field in later.
Personas can be found at https://about.gitlab.com/handbook/marketing/product-marketing/roles-personas/ -->
Developers, DevOps engineers
### The problem
<!-- Include use cases, benefits, and/or goals (contributes to our vision?) -->
Our pipeline looks like this:
- Build
- Test
  - Unit
  - Static check
  - Code quality...
- Test cleanup
- Deploy to staging
- Staging cleanup
- Deploy to prod
- Prod cleanup
Jobs from the first three stages run automatically. However, for obvious reasons, the deploys need to be triggered manually. When a deploy (staging or prod) is triggered, it should run to completion, and only after it finishes should the corresponding cleanup stage run.
Therefore I added this to the deploy jobs' definitions:
```yaml
when: manual
allow_failure: false
```
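For reference, a minimal sketch of how the pipeline is wired together (stage and job names follow the list above; the scripts are placeholders, not my actual configuration):

```yaml
stages:
  - build
  - test
  - test-cleanup
  - deploy-staging
  - staging-cleanup
  - deploy-prod
  - prod-cleanup

build:
  stage: build
  script: make build

unit-tests:
  stage: test
  script: make test

deploy-staging:
  stage: deploy-staging
  script: ./deploy.sh staging   # placeholder
  when: manual
  allow_failure: false          # blocks the pipeline until triggered

staging-cleanup:
  stage: staging-cleanup
  script: ./cleanup.sh staging  # placeholder
```

The `deploy-prod` / `prod-cleanup` pair is analogous.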
The problem is that when the automatic part of the pipeline finishes successfully, its status is `blocked`. That is confusing for developers, because for them "tests passed = successful pipeline".
If I did not set `allow_failure: false`, the pipeline would report `passed`, but then `Staging cleanup` and `Prod cleanup` would run automatically instead of waiting for their "parent" deploy job. That is also not the desired behavior.
~~The reason why the `Deploy to staging` and `Staging cleanup` stages are different is because I need to run the cleanup regardless of whether the deploy succeeds or fails. Having this logic in the same job would be cumbersome because I would have to check the status of a pipeline before every `script` line.~~ I found out that I could use `after_script`, so this point is moot.
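For completeness, the `after_script` approach mentioned above looks like this (a sketch with placeholder script names; `after_script` runs whether the `script` section succeeds or fails):

```yaml
deploy-staging:
  stage: deploy-staging
  script: ./deploy.sh staging    # placeholder
  # after_script runs even if the deploy fails, so the cleanup
  # no longer needs to be a separate stage
  after_script:
    - ./cleanup.sh staging       # placeholder
  when: manual
  allow_failure: false
```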
### The solutions
I can think of two solutions to this problem.
#### Report pipeline status from a job
A new configuration parameter (or pipeline API endpoint?) could be introduced that would allow setting the pipeline's status.
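As a rough sketch of what such a parameter could look like (this is a hypothetical proposal, not existing GitLab CI syntax):

```yaml
test-cleanup:
  stage: test-cleanup
  script: ./cleanup.sh test       # placeholder
  # hypothetical keyword: once this job succeeds, report the whole
  # pipeline as `success`, even though later manual stages are blocked
  report_pipeline_status: success
```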
#### Wait for manual jobs
A configuration parameter could be introduced that would behave similarly to `dependencies` with the difference that in this case the job would _wait_ for the upstream job, _even if that upstream job is `manual`_.
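A hypothetical sketch of that syntax (the `wait_for` keyword is invented here purely for illustration):

```yaml
staging-cleanup:
  stage: staging-cleanup
  script: ./cleanup.sh staging    # placeholder
  # hypothetical keyword: like `dependencies`, but waits for the
  # upstream job to actually finish, even though it is `manual`,
  # without marking the pipeline as `blocked` in the meantime
  wait_for:
    - deploy-staging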