Presently we cannot distinguish paid features from core features in our RSpec feature specs.
This prevents us from accurately measuring our integration test coverage of paid features.
Proposal
We have a couple of ways to do this:
1. Introduce an RSpec tag to distinguish EE features, e.g. it 'is an ee feature', :ee do
2. Refactor all feature specs so EE specs live in their own directory, such as spec/features/ee/...
Option 1 would be the easiest to implement and the least intrusive; a rough sketch follows below.
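A minimal sketch of what option 1 could look like, assuming a shared spec_helper; the derived-metadata rule, tag name, and file paths are illustrative, not taken from the codebase:

```ruby
# spec/spec_helper.rb (hypothetical excerpt)
RSpec.configure do |config|
  # Treat any spec file living under an ee/ directory as an EE spec, so
  # individual examples don't have to repeat the :ee metadata by hand.
  config.define_derived_metadata(file_path: %r{/ee/}) do |metadata|
    metadata[:ee] = true
  end
end

# spec/features/some_paid_feature_spec.rb (hypothetical)
RSpec.describe 'Some paid feature', :ee do
  it 'is an ee feature' do
    # ...
  end
end
```

With the tag in place, bin/rspec --tag ee runs only the EE-tagged specs and --tag ~ee excludes them, which is what would let us split the coverage picture between paid and core features.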
What does success look like, and how can we measure that?
We can accurately distinguish paid GitLab features from Core GitLab features in our integration test coverage.
I am not really sure this is what we're looking for here though, since we seem to want to know the "coverage" of features, not coverage of lines of code.
By aggregate, do we mean a single report for both EE and the others?
It doesn't have to be an aggregate. I am OK if we have two reports, one for EE and one for Core. That way it's clear which part of the product is broken if a test fails.
Thanks @rymai. I don't think the intent here is code coverage. We would want a list of the integration tests under spec/features for Enterprise features, in report form: what passes and what fails.
The context here is the EE test gap work: some tests are not a good fit for the end-to-end layer, so we add them as integration tests instead. As a result, we should display the results of these tests alongside the end-to-end tests.
@meks I see. FYI we currently have 15936 EE-specific tests, and you can generate an HTML report for all these tests as a dry-run with bin/rspec --dry-run -Ispec -f html -- ee/spec > dry-run-ee-specs.html.
I figured that we'd end up using the RSpec HTML formatter, at least as a first iteration. It leaves a lot to be desired, but it's a good starting point; a sketch of how it could fit with the :ee tag is below.
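For illustration, a rough sketch of how the HTML formatter could be combined with the :ee tag from option 1 to produce the two separate reports discussed above; the report file names and the RSPEC_HTML_REPORT variable are hypothetical:

```ruby
# Hypothetical two-report invocation, assuming feature specs carry the :ee tag:
#
#   bin/rspec --tag ee  -f html --out rspec-ee-report.html   spec/features
#   bin/rspec --tag ~ee -f html --out rspec-core-report.html spec/features
#
# Alternatively, the formatter can be wired up in spec/spec_helper.rb so a CI job
# only needs to set an environment variable plus the tag filter:
RSpec.configure do |config|
  config.add_formatter('html', ENV['RSPEC_HTML_REPORT']) if ENV['RSPEC_HTML_REPORT']
end
```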