Create an open and shareable Geo benchmark dataset
### Problem to solve
We currently have no way to benchmark Geo functionality in a controlled fashion against a standardised dataset. As a result, we cannot easily measure replication speed or find performance regressions. We are also unable to verify that replication is complete in a more realistic scenario with a mixed GitLab project and dataset, which is relevant for disaster recovery (DR). Finally, customers can't easily benchmark performance improvements when deciding to adopt Geo.
I suggest we create one or several open benchmark datasets that can be used for various testing and performance-evaluation tasks inside group::geo.
- Create an open benchmark dataset (or several) to validate performance and allow for more realistic testing.
- Document the datasets and how to run the benchmarks against them.
- Incorporate the benchmarks into automated testing procedures.
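As a rough illustration of what an automated benchmark harness could look like, the sketch below times how long a secondary takes to fully replicate a benchmark dataset. The `poll_fn` callback is a placeholder assumption (in practice it might query the secondary's sync status); none of these names are existing GitLab tooling:

```python
import time

def wait_until_synced(poll_fn, timeout=3600.0, interval=5.0):
    """Poll the secondary until poll_fn() reports a full sync.

    poll_fn is assumed to return True once everything in the benchmark
    dataset (repositories, LFS objects, uploads, ...) is replicated.
    Returns the elapsed wall-clock seconds, i.e. the measured
    replication time for the dataset.
    """
    start = time.monotonic()
    while time.monotonic() - start < timeout:
        if poll_fn():
            return time.monotonic() - start
        time.sleep(interval)
    raise TimeoutError(f"secondary not fully synced after {timeout}s")

# Demo with a stubbed poller that reports "synced" on the third check.
if __name__ == "__main__":
    checks = iter([False, False, True])
    elapsed = wait_until_synced(lambda: next(checks), interval=0.01)
    print(f"replicated in {elapsed:.2f}s")
```

Running this against the same open dataset before and after a change would make replication-speed regressions directly comparable across releases.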
### What does success look like, and how can we measure that?
- Created one or several open benchmark datasets
- Created an automated GitLab Geo performance benchmark
- Published Geo benchmarks for deployments on GCP and AWS
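For the published GCP/AWS numbers to be meaningful, each benchmark would likely be run several times and summarized rather than reported as a single noisy measurement. A minimal sketch of such a summary, using only the standard library (the function name and output fields are illustrative):

```python
from statistics import median, quantiles

def summarize_runs(seconds):
    """Summarize repeated replication-time measurements (in seconds).

    Reports the median and an estimated 95th percentile so published
    figures are robust to individual outlier runs.
    """
    if len(seconds) < 2:
        raise ValueError("need at least two runs to summarize")
    p95 = quantiles(seconds, n=20)[-1]  # last cut point ~ 95th percentile
    return {
        "runs": len(seconds),
        "median_s": median(seconds),
        "p95_s": p95,
    }

print(summarize_runs([118.0, 121.5, 119.2, 140.3, 120.1]))
```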
### What is the type of buyer?