Allow blocking manual stages (allow_failure: false) to block pipeline without flagging the pipeline as 'blocked'
My dream pipeline would work like this:
- test (using a prebuilt lean image for speed)
- build app image (manual; the pipeline shows passed, indicating the commit passed testing)
- deploy script (triggered after stage 2 passes)
- clean-up script/Slack ping (`when: on_failure`, so it runs only if the deploy script fails)

In a perfect world, stage 3 would only run after I trigger stage 2, and stage 4 would only run if stage 3 fails.
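The closest I can get to this with the current keywords is something like the sketch below (image names and scripts are placeholders). The problem is the manual `build` job: with `allow_failure: false` it blocks `deploy` as intended, but it also flags the whole pipeline as blocked.

```yaml
stages:
  - test
  - build
  - deploy
  - cleanup

test:
  stage: test
  image: myorg/lean-test-image    # placeholder prebuilt lean image
  script: ./run_tests.sh

build:
  stage: build
  when: manual
  allow_failure: false            # blocks deploy, but marks the pipeline 'blocked'
  script: docker build -t myapp .

deploy:
  stage: deploy
  script: ./deploy.sh

cleanup:
  stage: cleanup
  when: on_failure                # runs only if an earlier job failed
  script: ./notify_slack.sh "deploy failed"
```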
Devs only care about stage 1 passing to confirm their commit. If I put a blocker (`allow_failure: false`) on stage 2, it flags the branch as blocked (GitHub check), and they have to go into GitLab to see that their tests did in fact pass. Obviously leaving the stage non-blocking doesn't work either, because the pipeline would skip building and go straight to deploy.
What I'm doing right now is this:
- deploy (manual stage that builds, then deploys)
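The workaround above collapses build and deploy into one manual, non-blocking job, roughly like this (script names are placeholders):

```yaml
stages:
  - test
  - deploy

test:
  stage: test
  script: ./run_tests.sh

deploy:
  stage: deploy
  when: manual          # devs see 'passed' once tests finish; deploy is opt-in
  allow_failure: true   # keeps the pipeline from being flagged as blocked
  script:
    - docker build -t myapp .
    - ./deploy.sh
```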
All I can do right now is rely on the absence of a successful-deploy message in Slack, which is obviously not very effective. I do have the Slack notification integration, but it alerts on any failed pipeline (99% of the time a test failure), and there's no way to have it alert only on failed deploys, so it's useless. When I'm lucky, I get pinged by someone who notices the failure; other times it goes unnoticed, and we have people merging their branches into master thinking the deploy went out and looked good.
A few ideas on how to get around this:
- Adding another option like `when: manual-wait` that actually halts the pipeline but still allows the previous stages to be in a passed/green state, i.e. an `allow_failure: false` that silently blocks without flagging the pipeline.
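With that option, the build job might be written like this (`manual-wait` is the hypothetical keyword proposed here, not real GitLab syntax):

```yaml
build:
  stage: build
  when: manual-wait               # hypothetical: halts the pipeline here,
                                  # but earlier stages stay passed/green
  script: docker build -t myapp .
```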
- Being able to add a tag to `when: manual` so that you could group several manual stages to run together one after the other, such as `when: manual &deploy` assigned to stages 2, 3, and 4: you click play on any of them and it executes them in order.
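That grouping idea might look like the following sketch (the `&deploy` tag syntax is hypothetical, and the job scripts are placeholders). Playing any one of the tagged jobs would run all of them in stage order:

```yaml
build:
  stage: build
  when: manual &deploy   # hypothetical tag: part of the 'deploy' manual group
  script: docker build -t myapp .

deploy:
  stage: deploy
  when: manual &deploy
  script: ./deploy.sh

cleanup:
  stage: cleanup
  when: manual &deploy
  script: ./notify_slack.sh
```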