GitLab.org / GitLab FOSS · Issues · #52908
Closed (moved)
Issue created Oct 19, 2018 by Dan Gordon (@dangordon)

Identify tests which are ok to fail

Problem to solve

In some use cases it's OK for a specific test to fail, but we don't want the pipeline to stop because of that failure. For example:

  • In test-driven development, we might define all the tests up front and keep building (and failing) until we've written enough code for every test to pass. It would be helpful to mark the tests we know aren't implemented yet as "OK to fail".
  • A test might be failing for a reason unrelated to the code (e.g. an external service is unavailable), but we still want the pipeline to complete so the rest of the code is tested.
  • A failing test might not be relevant to a change that needs the pipeline to complete (e.g. a typo fix for the website needs to go out, but the absolute-links test is failing).

It is already possible to mark an entire job with `allow_failure`, which lets the pipeline continue even if that job fails (regardless of which test within the job caused the failure). However, this is all-or-nothing: it no longer requires the rest of the test suite in that job to pass.
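For reference, a minimal sketch of the job-level workaround as it exists today (job names and test commands are illustrative; `allow_failure` is the only relevant keyword):

```yaml
# .gitlab-ci.yml — job-level allow_failure, the current workaround
unit-tests:
  stage: test
  script:
    - bundle exec rspec spec/stable     # illustrative: the tests that must pass

quarantined-tests:
  stage: test
  script:
    - bundle exec rspec spec/flaky      # illustrative: the known-flaky subset
  allow_failure: true                   # pipeline continues even if this job fails
```

Splitting the quarantined tests into a separate job like this is the closest approximation today, but it means maintaining that split in the test command rather than flagging individual tests.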

Proposal

Identify and track the results of individual tests (perhaps just in the "test" job?) and allow users to mark individual tests as "allowed_failure" (quarantined). In subsequent runs, don't fail the job because of failures in those marked tests. Allow the flag to be removed so that the test can affect the outcome of the job again.
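Purely as an illustration of the proposal (the `quarantine` key and the test names below are hypothetical and do not exist in GitLab CI; the JUnit report artifact is the existing mechanism that already gives GitLab per-test results):

```yaml
# Hypothetical sketch — not valid GitLab CI syntax today
test:
  stage: test
  script:
    - bundle exec rspec
  artifacts:
    reports:
      junit: rspec.xml                       # existing: per-test results reported to GitLab
  quarantine:                                # hypothetical key for this proposal
    - "UserLogin handles expired sessions"   # hypothetical test names to ignore
    - "Sitemap contains no absolute links"
```

Removing a name from the list would let that test affect the job result again.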

Links / references

https://confluence.atlassian.com/bamboo/quarantining-failing-tests-289276886.html
