Solution Validation: Quality Management MVC
Currently within GitLab, there is no easy way to understand the status of the testing being performed. For most users, this status consists of the following information:
- Test coverage - amount of product that is covered by test
- Test completion - has the testing in question been run against the current iteration of the product
- Test status - did the testing pass or fail
- Test type - what type of test was run (e.g. GitLab would use `api-test` for a REST API test, `ui-test` for a Selenium UI test, etc.)
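To make the four status items concrete, here is a minimal Python sketch of what a single test status record could look like. The class and field names are illustrative assumptions, not GitLab's actual schema.

```python
from dataclasses import dataclass


@dataclass
class TestStatus:
    """Illustrative model of the four status items above (names are assumptions)."""
    coverage: float   # Test coverage: fraction of the product covered by tests
    completed: bool   # Test completion: run against the current product iteration?
    passed: bool      # Test status: did the testing pass?
    test_type: str    # Test type: e.g. "api-test" or "ui-test"


# Example: an API test suite that ran against the current iteration and failed.
status = TestStatus(coverage=0.87, completed=True, passed=False, test_type="api-test")
```

A view answering the questions above would simply surface these fields per test run.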
User Personas / Stories
Software Engineer in Test - SET
As a test engineer, I need to understand test coverage and test status around areas of the product I'm responsible for testing.
Release Manager
As a release manager, I need to know how much testing has been completed against the current product iteration so I can make an informed decision as to whether or not I can release the product.
Dependency on Requirements Management
As the Requirements Management category gains maturity, we are moving from simply documenting requirements to actually tracing them to tests. The solution validation for the MVC of tracing requirements to tests outlines the current plans.
At a high level, this solution will create a status for each pipeline run called a Test Report (naming discussed here). From the perspective of the Requirements Management category, the currently proposed method for viewing these test reports is an additional data item on each requirement showing the time the requirement was last verified by a test (discussion here).
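A minimal sketch of the proposed behavior, using hypothetical `TestReport` and `Requirement` records (the names and the verification rule are assumptions for illustration): when a pipeline's test report fully passes, the requirement's last-verified timestamp is updated.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional


@dataclass
class TestReport:
    passed: int              # number of passing tests in this pipeline run
    failed: int              # number of failing tests in this pipeline run
    finished_at: datetime    # when the pipeline run finished


@dataclass
class Requirement:
    title: str
    last_verified_at: Optional[datetime] = None  # set once verified by a passing report


def apply_test_report(req: Requirement, report: TestReport) -> None:
    # Assumption: only a fully passing, non-empty report verifies the requirement.
    if report.failed == 0 and report.passed > 0:
        req.last_verified_at = report.finished_at


req = Requirement("Login works")
apply_test_report(
    req,
    TestReport(passed=12, failed=0,
               finished_at=datetime(2020, 1, 1, tzinfo=timezone.utc)),
)
```

A report containing any failures would leave `last_verified_at` unchanged, preserving the last time the requirement was actually verified.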
Quality management is very much interrelated with Requirements Management.
What is the purpose of Quality Management?
I've been working to define what value the Quality Management category brings to the GitLab product offering.
My current understanding, after talking with numerous customers about what quality management means to their organizations, is as follows. Different industries think very differently about quality management, as noted below:
- Provide the ability to trace tests to requirements
- Provide an easy method of tracking which requirements have been fully verified with passing tests.
- Provide the ability to view and triage (open issues against) failing tests within the pipeline.
- Provide the ability to define and organize tests into logical groupings that can be configured to be run during pipelines.
- Track test results (pass / fail) over time to show maturity
- Ability to track defects in deployments (from other stages) across releases (see what release introduced the defect). Ideally, be able to trace back to the Issue / MR where the change was introduced.
One common theme across industries is the desire for an interface for viewing the tests that have been run, and the results of those tests (pass / fail).
Given that the team is attempting to release test reports for requirements management in a near-term release, it makes the most sense to build Quality Management on top of this functionality.
In effect, this means building Quality Management from the bottom up.
We propose the following MVC for Quality Management:
- Start with the idea of test reports and create a view that shows these test reports with the vital information (number of passing tests, number of failing tests).
- Build on top of that by allowing for history to be viewed (trendlines)
- Continue moving up the stack to provide the ability to group tests together.
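The first two MVC steps above can be sketched as a small aggregation: summarize each pipeline's test report into the vital information (passing and failing counts), then derive a pass-rate history across pipelines for a trendline. All names and shapes here are hypothetical illustrations, not a proposed implementation.

```python
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class TestReport:
    pipeline_id: int
    passed: int
    failed: int


def summarize(report: TestReport) -> Dict[str, int]:
    # Step 1: the vital information for a single test-report view.
    return {
        "pipeline": report.pipeline_id,
        "passed": report.passed,
        "failed": report.failed,
        "total": report.passed + report.failed,
    }


def pass_rate_trend(reports: List[TestReport]) -> List[float]:
    # Step 2: history across pipeline runs, suitable for a trendline.
    return [r.passed / (r.passed + r.failed) for r in reports]


history = [TestReport(1, 8, 2), TestReport(2, 9, 1), TestReport(3, 10, 0)]
trend = pass_rate_trend(history)  # rising pass rate -> increasing maturity
```

Step 3 (grouping tests) would layer on top of this, e.g. by keying reports on a group label as well as a pipeline id.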
Our proposed solution allows for test case management in a native Continuous Delivery way. The current proposal provides a simple view of test reports at the project level, which would allow future iterations to have test cases exist in the same project as the product under test.
The following section is a work in progress while we perform the solution validation.
TestRail & Quality Center - Both TestRail and Quality Center offer a way to view test reports over time, which indicates that our MVC will fill a known market need.
Our competitive advantage comes from GitLab being a single DevOps platform: as this category matures, we'll see the added benefits of fully integrating our quality management solution with requirements, tests, and pipelines, all visible within a single tool.
- Both TestRail and Quality Center offer a more familiar approach to quality management, supporting many different test platforms, manual testing, and hardcoded test steps tied to specific goals. This can be considered a legacy approach; the Continuous Delivery approach favors more freeform and exploratory testing. The Continuous Delivery approach requires less maintenance and overhead, while the legacy approach requires detailed test steps in a high-maintenance document.
Questions to Answer
Is moving toward a Continuous Delivery approach for Quality Management going to satisfy our customers?
- GitLab favors the Continuous Delivery approach for internal use / dogfooding.
- Organizations in regulated industries will still need a way to link specific functional coverage to test steps / test cases.
Do DevOps focused organizations have a need for manual testing?
What are the three most important pieces of information our user personas desire related to Quality Management?