Execute synthetic TPC-B-like database benchmarking with pgbench
Objective
Perform TPC-B-like benchmarking with pgbench at different concurrency levels to evaluate how well each hardware configuration scales, and how saturation impacts response time/latency and TPS (transactions per second) on the hardware under evaluation.
Workload:
Benchmarking hardware nodes:
- n2d-standard-224: patroni-main-machine-test-01-db-db-benchmarking.c.gitlab-db-benchmarking.internal
- n1-highmem-96: patroni-main-machine-test-a-01-db-db-benchmarking.c.gitlab-db-benchmarking.internal
- n2-highmem-128: patroni-main-machine-test-b-01-db-db-benchmarking.c.gitlab-db-benchmarking.internal
- n2d-highmem-96: patroni-main-machine-test-c-01-db-db-benchmarking.c.gitlab-db-benchmarking.internal
The -s (scale factor) should be large enough that the dataset does not fit completely in memory, so that we observe some degree of I/O reads. Suggested dataset size: 1.3 TB, which is 1.5 times the RAM of the n2d-standard-224.
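To translate the target dataset size into a pgbench scale factor, a rough rule of thumb is that each scale unit adds about 16 MB of data (100,000 rows in pgbench_accounts). A minimal sketch of the calculation, assuming that approximation and a hypothetical database name:

```shell
# Estimate the pgbench scale factor for a ~1.3 TB dataset.
# Assumption: ~16 MB of data per scale unit (approximate).
TARGET_GB=1300
MB_PER_SCALE=16
SCALE=$(( TARGET_GB * 1024 / MB_PER_SCALE ))
echo "suggested scale factor: $SCALE"

# Initialize the test database with that scale (database name is a placeholder):
# pgbench -i -s "$SCALE" benchmark_db
```

The exact on-disk size per scale unit varies with PostgreSQL version and fillfactor, so verify the resulting database size after initialization.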
To simulate concurrency, we start with 50 clients and increment by 50 on each cycle until (CPU or I/O) saturation is reached on all nodes, in order to find the concurrency limit of each hardware configuration.
Concurrent client connections cycles should be:
- 50
- 100
- 150
- 200
- 250
- 300 (optional; run only if saturation has not yet been reached)
- 350 (optional; run only if saturation has not yet been reached)
- 400 (optional; run only if saturation has not yet been reached)
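The cycles above can be sketched as a dry-run loop that prints the pgbench invocation for each client count. The database name, run duration, and thread cap are illustrative placeholders, not decided values:

```shell
# Dry-run sketch of the concurrency sweep: print one pgbench command
# per cycle (50..400 clients, step 50). Stop early once CPU or I/O
# saturation is observed on the node under test.
DB=benchmark_db        # hypothetical database name
DURATION=1800          # seconds per cycle (30 min), adjust as needed
for CLIENTS in 50 100 150 200 250 300 350 400; do
  JOBS=$(( CLIENTS < 64 ? CLIENTS : 64 ))   # cap pgbench worker threads
  echo "pgbench --client=$CLIENTS --jobs=$JOBS --time=$DURATION --progress=60 $DB"
done
```

Using --progress=60 makes pgbench report interim TPS and latency every 60 seconds, which helps pinpoint when saturation sets in during a cycle.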
Parameters:
PostgreSQL parameter settings should be customised for each hardware configuration:
- max_connections: TBD
- shared_buffers: TBD
- effective_cache_size: TBD
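While the values above are TBD, a common starting point is shared_buffers at roughly 25% of RAM and effective_cache_size at roughly 75% of RAM, with max_connections set above the highest client count tested plus headroom for superuser and monitoring sessions. An illustrative postgresql.conf sketch for the n2d-standard-224 (assuming 896 GB RAM); these are not the final values:

```
# Illustrative starting values for n2d-standard-224 (~896 GB RAM);
# final values per hardware are still TBD.
max_connections = 500          # > 400 test clients, plus headroom
shared_buffers = 224GB         # ~25% of RAM (rule of thumb)
effective_cache_size = 672GB   # ~75% of RAM (rule of thumb)
```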