# Add 'Metrics' report type to merge requests
## Problem to solve
GitLab (~Verify) provides a number of great reporting tools in the merge request: JUnit reports, Code Quality, performance tests, and so on. While JUnit is a great open framework for tests that pass or fail, it is also important to see certain numeric metrics for a given change. It would be great to have a generic, open way to report these.
## Initial proposal
Add another merge request widget that shows data from a `metrics.txt` file, similar to how we show performance or test results.
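For illustration only, such a `metrics.txt` might contain one metric per line in a simple name/value text format; whether we adopt the Prometheus/OpenMetrics text format or JSON is an open question in the technical user stories below:

```text
# hypothetical metrics.txt contents, for illustration only
memory_usage_bytes 524288000
benchmark_runtime_seconds 91.3
```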
## Intended users
- [Delaney, Development Team Lead](https://design.gitlab.com/research/personas#persona-delaney)
- [Devon, DevOps Engineer](https://design.gitlab.com/research/personas#persona-devon)
## Further details
TBD
## Solution

### User stories
| # | User Story | Persona | Scope |
|---|---|---|---|
| 1 | As a user, I want to be able to record results myself, over time (e.g. memory usage of GitLab, runtime of a benchmark suite, memory usage of a new Omnibus installation) | Systems Administrator | Current |
| 2 | As a user, I want to be able to track changes in memory usage on a merge request | DevOps Engineer | Current |
| 3 | As a user, I want to be able to track changes to load testing results (such as for GitLab in https://gitlab.com/andrewn/gitlab-load-kit) | Systems Administrator | Current |
| 4 | As a user, I want to be able to provide built-in Auto DevOps results | DevOps Engineer | Future |
| 5 | As a user, I want to be able to push other code metrics, such as code complexity or code coverage stats | Software Developer | Future |
### Technical user stories
| # | User Story | Scope |
|---|---|---|
| 1 | As a developer, I want to keep the scope to a minimum | Current |
| 2 | As a product manager, I want to see if we can align with a widely used format such as OpenMetrics (essentially the Prometheus text format) | Current |
| 3 | As a developer, I haven't dealt with OpenMetrics before and would prefer JSON (a minimal parsing sketch follows this table) | Current |
| 4 | As a designer, I want to be able to reuse existing components and designs from other merge request reports as much as possible, so that scope stays minimal and future updates to similar components benefit this feature as well | Current |
| 5 | As a FE developer, I want the comparison to happen on the backend, not the frontend, similar to the existing test reports | Current |
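To make technical user stories 2 and 3 concrete, here is a minimal sketch of turning simple name/value `metrics.txt` lines into the JSON shape proposed in the schema below; the function name and input format are illustrative assumptions, not a committed design:

```python
# Hypothetical sketch: parse simple "name value" lines (as in the
# metrics.txt illustration above) into the list-of-objects shape
# used by the proposed JSON schema below.
def parse_metrics(text):
    metrics = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):  # skip blanks and comments
            continue
        name, _, value = line.partition(" ")
        metrics.append({"name": name, "value": value.strip()})
    return metrics


print(parse_metrics("memory_usage_bytes 524288000\n# a comment\n"))
# => [{'name': 'memory_usage_bytes', 'value': '524288000'}]
```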
### Acceptance criteria
- Feature will be called "generic metrics"
- Positioned at a similar merge request widget information hierarchy level as Code Quality and Browser performance testing
- Metric output will just have a before and after string value output (this enables comparison, but no bias)
- Format of metric output: `Name 1: Value 1 (Previous Value 1)`
- Metrics will be shown regardless of whether there is a difference.
- When there are no changes, there is no icon change and the metric output will state `Name 1: Value 1 (No changes)` (a minimal formatting sketch follows this list)
- We'll do no additional processing of value outputs (think link pattern recognition). This might be an opportunity for iteration, though.
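A minimal sketch of the per-metric copy described above, assuming the previous value is rendered in the parentheses; this is an illustration, not the actual widget code:

```python
# Hypothetical sketch of one metric line in the widget, per the
# acceptance criteria: unchanged metrics say "(No changes)", changed
# metrics show the previous value in parentheses.
def format_metric(name, value, previous_value):
    if previous_value == value:
        return f"{name}: {value} (No changes)"
    return f"{name}: {value} ({previous_value})"


print(format_metric("memory_usage_bytes", "524288000", "498073600"))
# => memory_usage_bytes: 524288000 (498073600)
print(format_metric("benchmark_runtime_seconds", "91.3", "91.3"))
# => benchmark_runtime_seconds: 91.3 (No changes)
```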
### Merge request widget information hierarchy
- Merge source and target indicator
- Pre-merge pipeline, coverage, and deployment information
- Approval actions/information
- Code Quality section
- Browser performance testing
- Generic metrics
- License management
- Security scanning section
  - SAST scanning
  - DAST scanning
  - Dependency scanning
  - Container scanning
- Test summary for JUnit test reports
  - Individual JUnit test reports
- Merge status/action/consequence
- Post-merge pipeline, coverage, and deployment information
### Schema

JSON Schema:

```json
{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "definitions": {
    "metrics": {
      "type": "array",
      "items": {
        "type": "object",
        "required": ["name", "value"],
        "properties": {
          "name": {
            "type": "string",
            "examples": ["extra_metric_name"]
          },
          "value": {
            "type": "string",
            "examples": ["metric_value"]
          }
        }
      }
    }
  },
  "type": "object",
  "required": ["new_metrics", "existing_metrics", "removed_metrics"],
  "properties": {
    "new_metrics": { "$ref": "#/definitions/metrics" },
    "existing_metrics": { "$ref": "#/definitions/metrics" },
    "removed_metrics": { "$ref": "#/definitions/metrics" }
  }
}
```
Sample payload:

```json
{
  "new_metrics": [
    {
      "name": "extra_metric_name",
      "value": "metric_value"
    }
  ],
  "existing_metrics": [
    {
      "name": "metric_name",
      "value": "metric_value"
    }
  ],
  "removed_metrics": [
    {
      "name": "second_metric_name",
      "value": "metric_value"
    }
  ]
}
```
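To make the backend comparison from technical user story 5 concrete, here is a minimal sketch that buckets metrics from the target (base) and source (head) pipelines into the three keys of the payload above; it assumes flat name-to-value mappings and is not the actual implementation:

```python
# Hypothetical sketch of the backend comparison producing the
# new/existing/removed buckets from the proposed schema.
def compare_metrics(base, head):
    def as_list(names, source):
        return [{"name": n, "value": source[n]} for n in sorted(names)]

    return {
        "new_metrics": as_list(head.keys() - base.keys(), head),
        "existing_metrics": as_list(head.keys() & base.keys(), head),
        "removed_metrics": as_list(base.keys() - head.keys(), base),
    }


base = {"metric_name": "metric_value", "second_metric_name": "metric_value"}
head = {"metric_name": "metric_value", "extra_metric_name": "metric_value"}
print(compare_metrics(base, head))
# => buckets matching the sample payload above
```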
## Designs

### UI requirements
In terms of specs, we will be able to rely on existing measurements already in the application; these will mostly be copy changes. Other required details will be written down here:
- New icon used: `status_created_borderless`
- Color used for the no-bias icon: `#2e2e2e`
## Permissions and Security
TBD
## Documentation
TBD
## What does success look like, and how can we measure that?
- Implemented solution aligns with the acceptance criteria
- Track usage of this new feature, including how many different custom report metrics are submitted
## What is the type of buyer?
## Links / references