2022-08-16: Increase max_client_conn for pgbouncer primary

Production Change

Change Summary

The number of client connections to the pgbouncers sitting in front of the primary (writable) database is again approaching saturation.

Here we increase pgbouncer's max_client_conn setting from 10K to 12K.
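For illustration, the rendered pgbouncer.ini ends up with roughly the following fragment (the surrounding settings shown here are assumed, not taken from the real config):

```ini
[pgbouncer]
; before this change: max_client_conn = 10000
max_client_conn = 12000
```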

Context

One of the simplest options to mitigate incident #7565 (closed) is to again grow the max client connections limit on the nearly saturated pgbouncer processes. We did this 2 weeks ago in #7536 (closed), and we are repeating that limit increase now.

Judging from the pgbouncer processes' CPU utilization, we still have some headroom to increase that client connection count. But we should consider adding additional pgbouncer instances in the near future, as a capacity planning activity.
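As a rough sketch of the headroom math (the connection counts below are hypothetical, not measured values), a small helper can show how far the pool sits from its limit before and after the change:

```shell
#!/bin/sh
# Hypothetical helper: given current client connections and the configured
# max_client_conn limit, print percent utilization of the client pool.
pool_utilization() {
  current=$1
  limit=$2
  awk -v c="$current" -v l="$limit" 'BEGIN { printf "%.1f\n", 100 * c / l }'
}

pool_utilization 9500 10000   # → 95.0 (near saturation at the old 10K limit)
pool_utilization 9500 12000   # → 79.2 (same load against the new 12K limit)
```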

Caveat: On future occasions when we lose a pgbouncer or need to do rolling restarts for maintenance, the last pgbouncer to get restarted will be at risk of CPU saturation due to the imbalanced number of client connections temporarily being handled by that instance. To mitigate that imbalance, we need to add more instances to the pool of pgbouncers. Apart from that exception, today's tuning adjustment should be safe and forestall the current risk of saturating the client connection pool.
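The imbalance risk can be sketched with back-of-envelope arithmetic (instance and connection counts here are hypothetical): as pgbouncers drop out of the pool during a rolling restart, the survivors temporarily absorb a larger share of the client connections.

```shell
#!/bin/sh
# Hypothetical: total client connections divided across a shrinking pool
# of pgbouncer instances during a rolling restart.
per_instance_load() {
  total=$1
  instances=$2
  echo $(( total / instances ))
}

per_instance_load 9000 3   # → 3000 per instance at steady state
per_instance_load 9000 1   # → 9000 on the last instance still up
```

This is why adding instances, not just raising max_client_conn, is the durable fix for restart-time CPU saturation.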

Our goal is to avoid this saturation of the client connection pool:

(Screenshot, 2022-08-16: pgbouncer client connection pool saturation dashboard)

Without also driving the CPU utilization of pgbouncer processes above 50% or so:

(Screenshot, 2022-08-16: pgbouncer process CPU utilization dashboard)

Change Details

  1. Services Impacted - ~"Service::Pgbouncer"
  2. Change Technician - @msmiley
  3. Change Reviewer - @T4cC0re
  4. Time tracking - 30 minutes
  5. Downtime Component - none

Detailed steps for the change

Change Steps - steps to take to execute the change

Estimated Time to Complete (mins) - 25

  • Set label ~change::in-progress: /label ~change::in-progress
  • Disable chef-client on all nodes: knife ssh "roles:gprd-base-db-pgbouncer-pool" -- sudo chef-client-disable 'https://gitlab.com/gitlab-com/gl-infra/production/-/issues/7607'
  • Merge chef MR: https://gitlab.com/gitlab-com/gl-infra/chef-repo/-/merge_requests/2196
  • Run on sidekiq:
    • knife ssh "roles:gprd-base-db-pgbouncer-sidekiq" -- sudo chef-client-enable
    • knife ssh "roles:gprd-base-db-pgbouncer-sidekiq" -- sudo chef-client
    • knife ssh 'roles:gprd-base-db-pgbouncer-sidekiq' -- sudo grep 'max_client_conn' /var/opt/gitlab/pgbouncer/pgbouncer.ini (output should show max_client_conn = 12000)
  • Monitor for 10 minutes
  • Run on ci:
    • knife ssh "roles:gprd-base-db-pgbouncer-ci" -- sudo chef-client-enable
    • knife ssh "roles:gprd-base-db-pgbouncer-ci" -- sudo chef-client
    • knife ssh 'roles:gprd-base-db-pgbouncer-ci' -- sudo grep 'max_client_conn' /var/opt/gitlab/pgbouncer/pgbouncer.ini (output should show max_client_conn = 12000)
  • Monitor for 10 minutes
  • Slowly rollout on the rest of the fleet:
    • knife ssh "roles:gprd-base-db-pgbouncer-pool" -- sudo chef-client-enable
    • knife ssh -C 1 "roles:gprd-base-db-pgbouncer-pool" -- sudo chef-client
  • Merge corresponding runbooks MR to update the saturation metrics: gitlab-com/runbooks!4901 (merged)
  • Set label ~change::complete: /label ~change::complete
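The grep-based verification in the steps above can be sketched locally (the file contents below are a stand-in, since the real fleet is only reachable via knife ssh):

```shell
#!/bin/sh
# Sketch of the verification step against a sample pgbouncer.ini;
# on the real hosts the grep runs via knife ssh instead.
cat > /tmp/pgbouncer.ini <<'EOF'
[pgbouncer]
max_client_conn = 12000
default_pool_size = 100
EOF

grep 'max_client_conn' /tmp/pgbouncer.ini   # → max_client_conn = 12000
```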

Rollback

Rollback steps - steps to be taken in the event of a need to rollback this change

Estimated Time to Complete (mins) - 5

  • Revert chef MR https://gitlab.com/gitlab-com/gl-infra/chef-repo/-/merge_requests/2196 (restoring max_client_conn to 10000) and re-run chef-client across the fleet: knife ssh "roles:gprd-base-db-pgbouncer-pool" -- sudo chef-client

Monitoring

Key metrics to observe

  • pgbouncer client connection pool saturation (should move away from the limit once max_client_conn is raised to 12000)
  • pgbouncer process CPU utilization (should remain below roughly 50%)

Change Reviewer checklist

C4 C3 C2 C1:

  • Check if the following applies:
    • The scheduled day and time of execution of the change is appropriate.
    • The change plan is technically accurate.
    • The change plan includes estimated timing values based on previous testing.
    • The change plan includes a viable rollback plan.
    • The specified metrics/monitoring dashboards provide sufficient visibility for the change.

C2 C1:

  • Check if the following applies:
    • The complexity of the plan is appropriate for the corresponding risk of the change. (i.e. the plan contains clear details).
    • The change plan includes success measures for all steps/milestones during the execution.
    • The change adequately minimizes risk within the environment/service.
    • The performance implications of executing the change are well-understood and documented.
    • The specified metrics/monitoring dashboards provide sufficient visibility for the change.
      • If not, is it possible (or necessary) to make changes to observability platforms for added visibility?
    • The change has a primary and secondary SRE with knowledge of the details available during the change window.
    • The labels ~"blocks deployments" and/or ~"blocks feature-flags" are applied as necessary

Change Technician checklist

  • Check if all items below are complete:
    • The change plan is technically accurate.
    • This Change Issue is linked to the appropriate Issue and/or Epic
    • Change has been tested in staging and results noted in a comment on this issue.
    • A dry-run has been conducted and results noted in a comment on this issue.
    • For C1 and C2 change issues, the change event is added to the GitLab Production calendar.
    • For C1 and C2 change issues, the SRE on-call has been informed prior to change being rolled out. (In #production channel, mention @sre-oncall and this issue and await their acknowledgement.)
    • For C1 and C2 change issues, the SRE on-call provided approval with the eoc_approved label on the issue.
    • For C1 and C2 change issues, the Infrastructure Manager provided approval with the manager_approved label on the issue.
    • Release managers have been informed (If needed! Cases include DB change) prior to change being rolled out. (In #production channel, mention @release-managers and this issue and await their acknowledgment.)
    • There are currently no active incidents that are ~severity::1 or ~severity::2
    • If the change involves doing maintenance on a database host, an appropriate silence targeting the host(s) should be added for the duration of the change.
Edited by Matt Smiley