Review Postgres `max_connections` for larger environments
When testing against the Reference Architectures, specifically our largest (the 50k), we found that at our chosen test throughput of 1000 RPS some endpoints would cause connection exhaustion in Postgres at its default `max_connections` value of 200. Through some trial and error we found that Postgres required just over 300 connections to handle the throughput.
At the time this was not considered abnormal. As such, we should take this pain away from customers and handle it more gracefully. Options are to simply increase the connection count, or to pin it dynamically based on the available CPU count.
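A minimal sketch of the first option, assuming an Omnibus-managed Postgres node: `postgresql['max_connections']` is the standard Omnibus setting, and the value 500 here is purely illustrative headroom over the ~300 connections observed, not a tested recommendation.

```ruby
# /etc/gitlab/gitlab.rb
# Raise the Postgres connection limit above the ~300 connections
# observed under 1000 RPS on the 50k Reference Architecture.
# 500 is an illustrative value chosen to leave headroom.
postgresql['max_connections'] = 500
```

Applied with a `gitlab-ctl reconfigure` on the Postgres node.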
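The second option could look something like the sketch below: scale the limit with CPU count but never drop below the observed ~300 floor. The per-CPU multiplier (10) is an assumption for illustration only, not a measured sizing rule.

```shell
# Hypothetical sizing rule: scale max_connections with CPU count,
# clamped to a floor of the ~300 connections observed in testing.
# The multiplier of 10 per CPU is an assumption, not a recommendation.
cpus=$(nproc)
floor=300
candidate=$(( cpus * 10 ))
max_conn=$(( candidate > floor ? candidate : floor ))
echo "max_connections = ${max_conn}"
```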
Edited by Grant Young