Switch to opinionated data and a "master" data generation script for performance tests
After identifying that our Groups and Projects tests currently run only against the target project and group, we saw, once a customer started running them, that results can be inconsistent across environments when there are additional subgroups and projects.
These tests are meant to check that the endpoints perform consistently, without any wildcards, so results can also be compared across environments. As such, we should explore a way to ensure the tests always run under the same conditions.
After exploring several possible solutions, the aim now is to create a "master" data script that sets up a specific data set for the GPT tests to run against. This data is to be locked down and opinionated so test results are truly comparable across environments. We're also looking to make the script idempotent, the aim being that users only need to run it once per GPT release, with their existing data updated with any new required entries.
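A minimal sketch of the idempotent "check then create" pattern such a script could use. The GitLab REST API endpoints referenced (`GET /groups/:id`, `POST /groups`) are real, but the instance URL, group names, and the in-memory `FakeSession` (used here so the sketch runs offline; a real script would pass an authenticated `requests.Session`) are illustrative assumptions, not the actual implementation.

```python
import urllib.parse

API = "https://gitlab.example.com/api/v4"  # hypothetical target instance


def ensure_group(session, path, name):
    """Create the group only if it is not already present (idempotent)."""
    encoded = urllib.parse.quote(path, safe="")
    resp = session.get(f"{API}/groups/{encoded}")
    if resp.status_code == 200:
        return resp.json()  # already exists: reuse rather than duplicate
    resp = session.post(f"{API}/groups", data={"name": name, "path": path})
    resp.raise_for_status()
    return resp.json()


# In-memory stand-in for requests.Session, for demonstration only.
class FakeResponse:
    def __init__(self, status_code, payload):
        self.status_code = status_code
        self._payload = payload

    def json(self):
        return self._payload

    def raise_for_status(self):
        if self.status_code >= 400:
            raise RuntimeError(f"HTTP {self.status_code}")


class FakeSession:
    def __init__(self):
        self.groups = {}

    def get(self, url):
        path = urllib.parse.unquote(url.rsplit("/", 1)[-1])
        if path in self.groups:
            return FakeResponse(200, self.groups[path])
        return FakeResponse(404, {"message": "404 Group Not Found"})

    def post(self, url, data):
        group = {"id": len(self.groups) + 1, **data}
        self.groups[data["path"]] = group
        return FakeResponse(201, group)


session = FakeSession()
first = ensure_group(session, "gpt", "GPT Test Group")
second = ensure_group(session, "gpt", "GPT Test Group")
assert first["id"] == second["id"]  # second run is a no-op
```

Running the script a second time finds the existing entries and leaves them untouched, which is what lets users re-run it safely each GPT release to pick up any newly required data.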