Benchmark data-loss-mitigating postgres cluster configurations
Depends on https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/8594, a sub-issue of https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/7282.
As per the handbook, preventing data loss takes priority over availability.
Using the configurations produced in https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/8594, perform performance benchmarks to determine the likely impact of a production rollout of each config.
I'm not an expert on postgres benchmarking, so I'll defer to @gitlab-org/database-team, @Finotto, @abrandl, @emanuel_ongres on how we currently do this / should do this. One approach I've seen used in a previous job is to replay traffic derived from real production logs against a sandbox cluster using pgreplay (https://github.com/laurenz/pgreplay) or pgreplay-go (https://github.com/gocardless/pgreplay-go), then analyse the resultant logs from the sandbox with pgbadger to derive a benchmark.
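If we went the pgreplay route, the workflow might look roughly like the sketch below. All hostnames, paths, and settings here are illustrative assumptions, not our actual config; the logging requirements are paraphrased from pgreplay's README and should be double-checked there before capturing anything in production.

```shell
# 1. On the SOURCE cluster: pgreplay can only replay traffic if the logs
#    capture every statement plus connects/disconnects. Roughly (per the
#    pgreplay README -- verify before use), postgresql.conf needs:
#
#      log_destination = 'stderr'          # or 'csvlog' (then replay with -c)
#      log_statement = 'all'
#      log_min_messages = error
#      log_min_error_statement = log
#      log_connections = on
#      log_disconnections = on
#      log_line_prefix = '%m|%u|%d|%c|'    # required for stderr-format logs
#
# 2. Replay the captured log against the sandbox cluster at real-time speed
#    (-s can scale replay speed). Host and paths are hypothetical.
pgreplay -h sandbox-pg.internal -p 5432 /var/log/postgresql/postgresql.log

# 3. Analyse the sandbox cluster's OWN logs from the replay window with
#    pgbadger to produce the benchmark report for that configuration.
pgbadger /var/log/postgresql/sandbox.log -o benchmark-report.html
```

Repeating step 2-3 against a sandbox running each candidate configuration from #8594 would give comparable pgbadger reports per config.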