Allow Failures and Stop Pipeline
Summary
I would like to specify that a job can fail with allow_failure, but I also want to stop the pipeline when that "allowed failure" occurs. I was surprised to find this is not how it behaves: if job1 has allow_failure enabled, and job2 needs job1, then job2 will still run even if job1 fails.
In short, I think the issue is that allow_failure bundles two different behaviors: marking a job failure as a "warning" instead of an "error", and keeping the pipeline running after the failure. I want the former, but not the latter. Is this functionality supported?
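A minimal configuration illustrating the behavior (job names and scripts are placeholders, not my real setup):

```yaml
job1:
  script: exit 1        # simulate a failure
  allow_failure: true   # the failure is reported as a warning, not an error

job2:
  needs: [job1]
  script: echo "this still runs even though job1 failed"
```

Here I would expect (or at least want the option for) job2 to be skipped when job1 fails.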
Motivation
I have two jobs: a package release job, which checks whether the package version should be released, and a tag job, which creates a Git tag if there is a new release. I would like these two jobs to be separate because they are enabled by different CI flags. Unfortunately, due to Python tooling, I cannot know whether the package version is new until the release job runs.
If the package version is not new, the Python tool I am using (twine) returns an error code. I can disable this error code with a flag, but then I'll have no indication that the package version is not new.
Right now, I've configured the release job with allow_failure. This makes sense to me: if the twine upload call fails because there is no new package version, that doesn't really count as a "failure", and I want the CI to note that no package was released. But when the first job fails with allow_failure enabled, the pipeline keeps running. I'd like to stop the pipeline instead.
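For reference, a sketch of my current configuration (job names, scripts, and the VERSION variable are simplified placeholders):

```yaml
release:
  script:
    - twine upload dist/*   # exits non-zero when there is no new version to upload
  allow_failure: true       # so a "no new version" failure is shown only as a warning

tag:
  needs: [release]          # still runs when release fails, which is what I want to avoid
  script:
    - git tag "v${VERSION}"
    - git push origin "v${VERSION}"
```

With this setup, the warning on the release job is exactly what I want, but the tag job creating a tag for an unreleased version is not.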