Verify - Highest-level categories are too test-process-specific

Existing Verify Product categories:

We're headed in the right direction with what Verify offers, but we should consider adjusting our messaging so that we:

  1. don't limit our options, and
  2. communicate our solutions in terms of the higher-level problem: quality.

Quality is a difficult topic to address, but it can be broken down into a number of characteristics. Those characteristics are high level and unlikely to change. To investigate quality problems, we have developed many different kinds of testing. Since those are typically specific processes tied to specific kinds of tooling, the types of testing will likely continue to change.

Updating our message

I suggest we not target specific test tooling or techniques in our top-level Verify messaging, but instead the overall quality characteristics Verify will help our users address. For example, we might modify our product categories from the existing listing to something like the following list. Note that no 'Testing' processes are specifically called out; those items have been replaced by the corresponding quality criteria:

  • CI (no change)
  • Code Quality (no change)
    • This one was already good: it's a high-level category that can be addressed by many different kinds of static analysis, as we can see from the range of plugins for Code Climate. It's the model we should follow for the other categories.
  • Performance
  • Capability
  • Usability
  • Security
  • Compatibility

I chose those quality criteria based on quality characteristics common in the context-driven testing world, which have been widely embraced by communities like the Ministry of Testing and by many software testing experts. With a structure like that, we can fold specific types of testing underneath these general categories, and even mix and match them, which is what customers actually do to understand the quality of their solutions.

Test Process sub-categories??

So the next natural tendency will be to nest specific test processes under specific quality criteria, but the truth is that one test process can often give feedback in several quality areas. For example, under Capability, a customer may have a mix of manual test processes, plus some UI automation with Selenium, plus API test automation with Insomnia, topped off with Visual Testing. At the same time, exploratory (manual) and visual tests may feed Usability, Selenium and Insomnia tests may feed Performance, and other Selenium tests may deliver Compatibility data (see the sketch below).
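To make that many-to-many relationship concrete, here's a rough sketch in Python. It's purely illustrative: the suite names and the mapping are made up, and this is not a proposed implementation, just a way to show that a suite can feed several criteria at once.

```python
# Hypothetical example: a single test suite can report into more than one
# quality criterion, so the relationship is many-to-many rather than a tree.
suite_criteria = {
    "selenium-ui-journeys":   ["Capability", "Compatibility"],
    "selenium-cross-browser": ["Compatibility"],
    "insomnia-api-suite":     ["Capability", "Performance"],
    "exploratory-sessions":   ["Capability", "Usability"],
    "visual-regression":      ["Capability", "Usability"],
}

# Invert the mapping to see which suites feed a given quality criterion.
criterion_suites: dict[str, list[str]] = {}
for suite, criteria in suite_criteria.items():
    for criterion in criteria:
        criterion_suites.setdefault(criterion, []).append(suite)

print(criterion_suites["Usability"])
# ['exploratory-sessions', 'visual-regression']
```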

So my personal preference would be to enable customers to pick and choose which test tools (even specific tests or suites) feed each quality criterion, treating these high-level quality criteria as dashboards that aggregate the relevant data, something like the rollup sketched below. That would enable a test team to say with more confidence, "we're meeting our functional requirements well, however, performance remains a problem ... here's where we're deficient in that area."
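And here's a sketch of what that dashboard-style rollup could look like. Again, this is purely hypothetical: assume each suite reports a simple pass rate, and each criterion just aggregates the suites that feed it.

```python
# Hypothetical rollup: which suites feed each criterion, plus a pass rate per
# suite. In reality both would come from whatever tooling the customer wires in.
criterion_suites = {
    "Capability":  ["selenium-ui-journeys", "insomnia-api-suite", "exploratory-sessions"],
    "Performance": ["insomnia-api-suite"],
    "Usability":   ["exploratory-sessions", "visual-regression"],
}

suite_pass_rate = {
    "selenium-ui-journeys": 0.98,
    "insomnia-api-suite":   0.64,
    "exploratory-sessions": 0.95,
    "visual-regression":    0.99,
}

def criterion_summary(criterion: str) -> float:
    """Average the pass rates of every suite that feeds this criterion."""
    suites = criterion_suites[criterion]
    return sum(suite_pass_rate[s] for s in suites) / len(suites)

for criterion in criterion_suites:
    print(f"{criterion}: {criterion_summary(criterion):.0%}")
```

With that kind of rollup, "performance remains a problem" stops being a gut feeling and becomes a number the team can point at.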

I know this seems like a big divergence from our high-level vision, but it's really just repackaging the tooling we already support, and intend to support, in a way that gives us more flexibility to handle changes in the testing world. And it's a step toward addressing one of the biggest problems in software testing - meaningful metrics.

/cc @meks
