Test case management and test tracking in a Native Continuous Delivery way
Proposal
After brainstorming with @victorwu we came up with the following design as an MVC.
Test planning (part 1)
Test case
An object to store a test scenario.
- Title (GFM)
- Description (GFM, so that you can use task lists)
- Required fields: Test type (API/UI/Visual), Test Status (Manual/Automated, etc)
- Nice to have: the ID of the issue that implemented the feature you are testing.
- Nice to have: native steps (execution step / expected step)
- These steps typically involve: Set up data, environment, variables, click here, press that, verify this, verify that.
- There will be a pass/fail result for the entire test case when it is executed, but the test case itself is just a class/blueprint.
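To make the shape of this object concrete, here is a rough data-model sketch based on the list above; the field and type names are illustrative assumptions, not a committed schema.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Optional


class TestType(Enum):
    API = "api"
    UI = "ui"
    VISUAL = "visual"


class TestStatus(Enum):
    MANUAL = "manual"
    AUTOMATED = "automated"


@dataclass
class TestStep:
    # Optional native steps: what to do and what to expect.
    execution: str
    expected: str


@dataclass
class TestCase:
    # Title and description are GFM, so task lists work in the description.
    title: str
    description: str
    test_type: TestType
    status: TestStatus
    # Nice to have: the issue that implemented the feature under test.
    implementing_issue_id: Optional[int] = None
    # Nice to have: native steps (set up data, click here, verify that, ...).
    steps: list[TestStep] = field(default_factory=list)
```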
Test suite
A group of test cases mapping to a feature area. So you will probably have at least one test suite per GitLab stage, e.g. Plan: Issues test suite, Plan: Epics test suite, Create test suite, etc.
- Test suite 1
  - Test 1
  - Test 2
- Test suite 2
  - Test 3
  - Test 4
Test run (part 2)
Test session
An instantiation of a test suite. For GitLab, this is when you have a test run, which can be a manual test run by a test engineer. This can happen during development of a feature, but for us it will likely happen at least once per release as part of QA/FA and similar processes, or as a nightly run.
For other customers, a test session will likely be part of a development-release process. A test session passes or fails based on the pass/fail results of its individual test cases.
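A minimal sketch of how a session could roll up pass/fail from its individual case results, assuming the suite/session split described above; the names and result model are placeholders rather than a defined API.

```python
from dataclasses import dataclass, field
from enum import Enum


class Result(Enum):
    PASSED = "passed"
    FAILED = "failed"


@dataclass
class TestSuite:
    # A group of test case titles mapped to a feature area, e.g. "Plan: Issues".
    feature_area: str
    case_titles: list[str] = field(default_factory=list)


@dataclass
class TestSession:
    # An instantiation of a suite: one result per test case in the suite.
    suite: TestSuite
    results: dict[str, Result] = field(default_factory=dict)

    def passed(self) -> bool:
        # The session passes only if every case in the suite has a passing result.
        return all(
            self.results.get(title) == Result.PASSED
            for title in self.suite.case_titles
        )
```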
Fitting this together
Do test management / test planning per project.
- Test suites and test cases: adding and editing test cases and grouping them into suites.
- Test sessions: creating and managing test sessions, tracking their status, etc.
Test cases and test suites are new objects to GitLab.
The gitlab-ce~2278656 team is currently iterating using Google Sheets to track test coverage at the end-to-end layer.
There are also multiple conversation threads on how we should track this going forward, and several things to consider when tracking tests and test automation coverage.
Conversations:
- https://gitlab.slack.com/archives/C3JJET4Q6/p1537408187000100
- https://gitlab.slack.com/archives/C3JJET4Q6/p1536594859000100
- https://gitlab.slack.com/archives/C7S4KUEPN/p1526665067000455?thread_ts=1526664503.000508
Key points:
- If we are doing continuous delivery the right way, there should be little to no manual testing. Manual tests, if any, should be minimal and exploratory.
- There is little need for detailed test steps in the CD world. Test tracking is very different from legacy systems that hardcode step 1, step 2, and so on.
  - Exploratory testing has a free-form emphasis which removes the importance of hardcoded test steps. We describe what we want to achieve from the test, with clear inputs and outputs, but the way to get there is not set in stone.
  - Detailed hardcoded test steps are a high-maintenance document. We don't want to update steps when a button changes color or position.
- For manual click-throughs, we would like to keep things free form. The disadvantage of going into too much detail with test cases and adding clicking steps is that we cannot catch bugs outside the hardcoded flow in the document; it is also a high-maintenance document that needs to be updated whenever a button moves, a workflow changes, a color changes, etc. This is a page taken from the book *How Google Tests Software*: just enough description to tell people what to test fuzzes up the workflow, and we end up catching more critical bugs that way.
- An emphasis on risk-based test planning using the ACC framework.
- Most tests should be automated, and there should be an emphasis on importing results from a given test run.
- We should be able to categorize test automation types (API, UI, Visual).
- We should be able to group tests to link them to a feature area.
  - Currently done using sub-indent levels and separate sheets in the spreadsheet.
Existing players in the market
- Quality Center https://software.microfocus.com/en-us/products/quality-center-quality-management/overview
- TestRail https://www.gurock.com/testrail
Requirements
- A place to store test cases and group tests to map to a feature and product area.
- A way to track the configuration of tests (or groups of tests) to be run for a given stage in the CI pipeline.
- From @at.ramya: Just like how the development/design/review stages are captured, the development of test automation code and the configuring of tests for an issue should be captured in that issue itself. (This falls in line with the "moving to CD" model as well.)
- Based on the choice of tests made in gitlab-ce#2, a "Test Session" is created to store the results.
- When the CI/CD pipeline job is triggered, the specified tests should run and their results should be recorded in the test session.
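As a rough illustration of that last step, this sketch parses a JUnit-style report produced by a pipeline job and records each case result against a session. The report path, session ID, and `record_result` destination are assumptions; GitLab does not expose a test session API today.

```python
# Sketch only: parse a JUnit-style report produced by a CI job and record
# each case result against a test session.
import xml.etree.ElementTree as ET

REPORT_PATH = "rspec.xml"  # assumed artifact path from the CI job
SESSION_ID = 42            # hypothetical test session identifier


def record_result(session_id: int, case_name: str, passed: bool) -> None:
    # Placeholder for whatever mechanism eventually stores results
    # (API call, database write, ...). Printing keeps the sketch runnable.
    print(f"session {session_id}: {case_name}: {'passed' if passed else 'failed'}")


def main() -> None:
    root = ET.parse(REPORT_PATH).getroot()
    # JUnit reports nest <testcase> elements; a case failed if it contains
    # a <failure> or <error> child element.
    for case in root.iter("testcase"):
        failed = case.find("failure") is not None or case.find("error") is not None
        record_result(SESSION_ID, case.get("name", "unnamed"), not failed)


if __name__ == "__main__":
    main()
```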
What we have currently
This is the Test planning (part 1) piece of the proposal. End-to-end test coverage tracking: https://docs.google.com/spreadsheets/d/1RlLfXGboJmNVIPP9jgFV5sXIACGfdcFq1tKd7xnlb74/edit#gid=1686687030
What each area means
- Red: test area, feature group, and test name
- Blue: test status metadata
  - What type of test?
  - Is the end-to-end test covered at lower levels of the pyramid? This tells us the weak areas.
  - Is the test automated? If manual, we need a brief description of the workflow.
- Green: test execution configuration in the CD pipeline
  - Does the test run on feature environments / review apps?
  - Does the test run on staging / canary / production?
  - Test setup parameters
We also currently use individual sheets, one per product area.
Notes on what we would choose to build as an MVC:
From @meks
There is an important piece I want to highlight here. Do we want to chase after the legacy way of doing things, e.g. Quality Center/TestRail, or do we want to look ahead and do test management in a Continuous Delivery native way?
What we are doing here is the latter. If teams are doing real CD, there is very little to no manual testing, so tracking is very different from legacy systems that have hardcoded step 1, step 2, and so on.
From @mlapierre
I think the legacy way would provide a lot of value for some customers, but almost none for us. I think the greatest value for us would come from a replacement for that spreadsheet, plus a way to track test execution and results. What I'd like to see is a way to tag test code as tests. For example, in the blob viewer we could select a line of code that is the start of a test case, and somehow tag that as a test and give it a name and other metadata (maybe via a test case issue type, but fairly sparse because most of the information is in the test code). And then when that test is executed we capture data on the execution and results.
Session-based tracking could work well for manual tests. As for automated tests, I expect work to enable tracking would be done in a later phase. But yes, since they're source-controlled and automatically executed, all of the information about their execution is available; we just have to track it. One way could be to capture rspec logs and parse them.
So essentially, I think the MVC for tracking test execution would be to capture and display the test results in a more user-friendly format than the text logs we currently have to interpret. I guess it's a separate issue from test management, but ultimately I'd like to see test cases and test execution results linked.
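As one possible shape of the "capture rspec logs and parse them" idea above, here is a small sketch that reads the JSON produced by `rspec --format json --out rspec.json` and lists per-example results; how these would link back to test case objects is left open, and the report path is an assumption.

```python
# Sketch of parsing an rspec JSON report (rspec --format json --out rspec.json)
# into per-example results that could later be linked to test case objects.
import json

REPORT_PATH = "rspec.json"  # assumed output path from the CI job


def main() -> None:
    with open(REPORT_PATH) as f:
        report = json.load(f)

    # Each entry in "examples" describes one executed spec example.
    for example in report.get("examples", []):
        name = example.get("full_description", "unnamed example")
        status = example.get("status", "unknown")  # passed / failed / pending
        location = f'{example.get("file_path")}:{example.get("line_number")}'
        print(f"{status:>7}  {name}  ({location})")


if __name__ == "__main__":
    main()
```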