
[production] Increase pgbouncer connection pool size for sidekiq read requests to database read replicas from `2` to `15` per replica node

Production Change

Change Summary

Increase pgbouncer connection pool size for sidekiq read requests to database read replicas from 2 to 15 per replica node.
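For context, pool_size in the pgbouncer databases.ini entry caps how many backend PostgreSQL connections each replica node's pgbouncer will open for the sidekiq database entry, so the fleet-wide ceiling for sidekiq read traffic scales with the number of replica nodes. As a rough, hedged sketch only (the node search pattern mirrors the verification command later in this issue, and the approximate count it returns may include the primary), the new ceiling could be estimated with:

    # Hedged sketch: approximate the fleet-wide ceiling of backend connections
    # available to sidekiq read traffic after this change.
    export GITLAB_ENVIRONMENT='gprd'
    NODE_COUNT=$(bundle exec knife search node "fqdn:patroni-*-db-${GITLAB_ENVIRONMENT}*" -i | wc -l)
    echo "ceiling: $(( 15 * NODE_COUNT )) connections (previously $(( 2 * NODE_COUNT )))"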

Change Details

  1. Services Impacted - [ pgbouncer ]
  2. Change Technician - @nnelson
  3. Change Criticality - C1
  4. Change Type - changescheduled
  5. Change Reviewer - @Finotto
  6. Due Date - 2021-04-05 21:00 UTC
  7. Time tracking - 30 minutes
  8. Downtime Component - no downtime expected/required

Detailed steps for the change

Pre-Change Steps - steps to be completed before execution of the change

Estimated Time to Complete (mins) - Completed

Change Steps - steps to take to execute the change

Estimated Time to Complete (mins) - 1 minute
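  • The individual change steps are not itemized above. As a hedged sketch only, assuming the new pool_size is delivered through a chef-repo merge request (as the rollback section implies), applying it to the patroni fleet could amount to converging chef-client with the same node search pattern used in the post-change verification below:
    # Hedged sketch, not the confirmed procedure: converge chef-client so the
    # updated pgbouncer databases.ini attribute is rendered on each node.
    export GITLAB_ENVIRONMENT='gprd'
    bundle exec knife ssh "fqdn:patroni-*-db-${GITLAB_ENVIRONMENT}*" 'sudo chef-client' --concurrency 1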

Post-Change Steps - steps to take to verify the change

Estimated Time to Complete (mins) - 10 minutes

  • Execute the following command:
    export GITLAB_ENVIRONMENT='gprd'
    bundle exec knife ssh "fqdn:patroni-*-db-${GITLAB_ENVIRONMENT}*" 'sudo grep "gitlabhq_production_sidekiq" /var/opt/gitlab/pgbouncer/databases.ini' --concurrency 1
  • Verify that the output for all hosts is:
    gitlabhq_production_sidekiq = host=127.0.0.1 port=5432 pool_size=15 auth_user=pgbouncer dbname=gitlabhq_production
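  • Optionally, as a hedged extra check (the admin console port of 6432, the psql path, and the authentication setup are assumptions and may differ in this environment), confirm the runtime value rather than only the rendered file by querying pgbouncer's SHOW DATABASES output on a replica:
    # The pgbouncer admin console reports the effective pool_size per database entry.
    sudo /opt/gitlab/embedded/bin/psql -h 127.0.0.1 -p 6432 -U pgbouncer -d pgbouncer \
      -c 'SHOW DATABASES;' | grep gitlabhq_production_sidekiq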

Rollback

Rollback steps - steps to be taken in the event of a need to roll back this change

Estimated Time to Complete (mins) - 10 minutes

  • Revert the merge request above and add a link to the reversion merge request here: https://ops.gitlab.net/gitlab-cookbooks/chef-repo/-/merge_requests/0000
  • Have the reversion MR reviewed by a colleague.
  • Apply the reversion MR to production by repeating the change steps from above.
  • Repeat the post-change steps from above, but ensure that the output for all hosts is instead:
    gitlabhq_production_sidekiq = host=127.0.0.1 port=5432 pool_size=2 auth_user=pgbouncer dbname=gitlabhq_production

Monitoring

Key metrics to observe
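  • No specific metrics are itemized in this issue. As a hedged suggestion, the most direct signals for this change are saturation and queueing on the sidekiq pool of the replica pgbouncers: in the admin console's SHOW POOLS output, sv_active (server connections in use, now capped at 15), cl_waiting (clients queued for a server connection), and maxwait (age of the longest-waiting client request). A simple watch loop on a replica (same path, port, and authentication assumptions as the post-change sketch) might look like:
    # Hedged sketch: sv_active should settle at or below the new pool_size of 15;
    # cl_waiting and maxwait should stay near zero if the pool is sized adequately.
    watch -n 5 "sudo /opt/gitlab/embedded/bin/psql -h 127.0.0.1 -p 6432 -U pgbouncer -d pgbouncer -c 'SHOW POOLS;' | grep gitlabhq_production_sidekiq"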

Summary of infrastructure changes

  • Does this change introduce new compute instances?
    • No
  • Does this change re-size any existing compute instances?
    • No
  • Does this change introduce any additional usage of tooling like Elasticsearch, CDNs, Cloudflare, etc.?
    • No

Summary of the above

Changes checklist

  • This issue has a criticality label (e.g. C1, C2, C3, C4) and a change-type label (e.g. changeunscheduled, changescheduled) based on the Change Management Criticalities.
  • This issue has the change technician as the assignee.
  • Pre-Change, Change, Post-Change, and Rollback steps have been filled out and reviewed.
  • Necessary approvals have been completed based on the Change Management Workflow.
  • Change has been tested in staging and results noted in a comment on this issue.
  • A dry-run has been conducted and results noted in a comment on this issue.
  • SRE on-call has been informed prior to change being rolled out. (In #production channel, mention @sre-oncall and this issue and await their acknowledgement.)
  • There are currently no active incidents.