2024-10-14: Migrate latency-sensitive Sidekiq workloads to dedicated urgent pgbouncer pools

Production Change

Change Summary

Currently, we only have separate connection pools for Puma and for Sidekiq. All of the Sidekiq pods (irrespective of their urgency) use the same Sidekiq pgbouncer pools. This means that if a shard with high concurrency has a lot of jobs to process, it can easily use up most of the pgbouncer connections and starve shards with lower concurrency, which can lead to incidents like #6880 (closed).

Some Sidekiq workloads are also latency-sensitive: the urgent shards process jobs that users may be actively waiting for. If these jobs slow down, it can manifest as Git pushes not showing up in the interface, CI jobs not starting, and so on.

This CR deploys a separate connection pool on the Sidekiq pgbouncers for latency-sensitive workloads (i.e. the urgent-* shards). With the split in place, extremely busy non-urgent shards can only exhaust their own pgbouncer connections, which should improve the user experience by leaving the latency-sensitive Sidekiq workloads enough connections to finish their work in a timely manner.

We rolled this change out to staging last week with no abnormalities reported.
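
As a reference for the intended end state, here is a minimal sketch for confirming that a Sidekiq pgbouncer exposes both the existing and the new urgent database. The connection details (local admin console on port 6432, user pgbouncer) are assumptions; adjust them to the node's pgbouncer configuration.

```shell
# On a Sidekiq pgbouncer node; port/user are assumptions, adjust as needed.
psql -h 127.0.0.1 -p 6432 -U pgbouncer -d pgbouncer -c 'SHOW DATABASES;' \
  | grep -E 'gitlabhq_production_sidekiq'
# After the rollout, both gitlabhq_production_sidekiq and
# gitlabhq_production_sidekiq_urgent should be listed, each with the
# pool_size defined in the chef-repo MR.
```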


Diagram

NOTE: This diagram was accurate at the time this CR was written; current pool sizes may not match what's shown in the diagram.

Issue: scalability#1682

Change Details

  1. Services Impacted - ~"Service::Sidekiq"
  2. Change Technician - @gsgl
  3. Change Reviewer - @schin1
  4. Time tracking - 420-720 mins (depending on the amount of rollback required)
  5. Downtime Component - none
  6. Deployment block - up to 5 hours on 2024-10-14 (01:30 UTC to 06:30 UTC)

Set Maintenance Mode in GitLab

If your change involves scheduled maintenance, add a step to set and unset maintenance mode per our runbooks. This will make sure SLA calculations adjust for the maintenance period.
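
This change has no downtime component, so maintenance mode is not needed here; for reference only, here is a minimal sketch of toggling it via the application settings API (the host and admin token are placeholders):

```shell
# Placeholders: substitute a real admin token and the target GitLab host.
# Enable maintenance mode ahead of the window:
curl --request PUT --header "PRIVATE-TOKEN: $ADMIN_TOKEN" \
  "https://gitlab.example.com/api/v4/application/settings?maintenance_mode=true"

# Disable it again once the change is complete:
curl --request PUT --header "PRIVATE-TOKEN: $ADMIN_TOKEN" \
  "https://gitlab.example.com/api/v4/application/settings?maintenance_mode=false"
```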

Detailed steps for the change

Change Steps - steps to take to execute the change

Estimated Time to Complete (mins) - 300

  • Set label ~change::in-progress: /label ~change::in-progress
  • Ensure there are no ongoing deployments and that deployments are stopped (@release-managers)
  • Merge the chef-repo MR that adds the new urgent database to the read/write pgbouncers with the desired pool sizes
  • Wait ~30 mins for the MR to roll out to all pgbouncers
    • This query shows the database pool sizes for the current & new release. Once rolled out, you should see the following numbers in the URGENT column, as defined in https://gitlab.com/gitlab-com/gl-infra/chef-repo/-/merge_requests/5157 (the pgbouncer console sketch in the Change Summary can also be used to spot-check pool_size per node). Currently:
      • There are 10x pgbouncers * 9x patroni-main* hosts, and we defined the pool size as 10 so 10*9*10 = 900
      • There are 6x pgbouncers * 5x patroni-ci* hosts, and we defined the pool size as 10 so 10*5*6 = 300
      • There are 3x pgbouncer-sidekiq-0* (main) hosts, and we defined the pool size as 30 so 30*3 = 90
      • There are 3x pgbouncer-sidekiq-ci-* (ci) hosts, and we defined the pool size as 35 so 35*3 = 105
  • Merge the MR that adds the urgent release to production with min/max replicas for urgent-other and urgent-cpu-bound set to 1, and monitor the deployment
    • The only other urgent shard is urgent-authorized-projects, which can be set to its desired values immediately as it's a low-traffic shard.
  • Ensure new pods are processing work: https://dashboards.gitlab.net/goto/SErWMUzNR?orgId=1
    • You should see some activity for release urgent in shards urgent-cpu-bound and urgent-other, but possibly not in urgent-authorized-projects as it's a low-RPS shard
  • Now shuffle the min/max replica values between the current and new releases to increase the number of pods in the new release and decrease the pods in the old release (replica counts can be spot-checked with the kubectl sketch after this list)
    • Take note of the max replicas value for urgent-cpu-bound and urgent-other. Currently:
      • urgent-cpu-bound: min: 50 / max: 300
      • urgent-other: min: 100 / max: 255
    • File/review/merge an MR to update:
      • current release (gitlab):
        • urgent-cpu-bound: max: 300 -> 150
        • urgent-other: min: 100 -> 50, max: 255 -> 120
      • new release (urgent):
        • urgent-cpu-bound: min: 1 -> 50, max: 1 -> 150
        • urgent-other: min: 1 -> 100, max: 1 -> 135
    • File/review/merge an MR to update:
      • current release (gitlab):
        • urgent-cpu-bound: min: 50 -> 1, max: 150 -> 1
        • urgent-other: min: 50 -> 1, max: 120 -> 1
        • urgent-authorized-projects: min: 12 -> 1, max: 26 -> 1
      • new release (urgent):
        • urgent-cpu-bound: max: 150 -> 300
        • urgent-other: max: 135 -> 255
  • Merge MR to remove urgent-cpu-bound, urgent-other and urgent-authorized-projects from the current "gitlab" release
  • Check that traffic to the gitlabhq_production_sidekiq database has been significantly reduced (see also the pgbouncer console sketch after this list): https://dashboards.gitlab.net/goto/1l40M8kHR?orgId=1
  • Notify @release-managers that deployments can be re-enabled
  • Reduce gitlabhq_production_sidekiq pool sizes:
    • patroni-main* pgbouncers pool size should be reduced to 5
    • patroni-ci* pgbouncers pool size should be reduced to 5
    • pgbouncer-sidekiq-0* (main) pool size should be reduced to 15
    • pgbouncer-sidekiq-ci-* (ci) pool size should be reduced to 15
  • Ensure graphs are now showing metrics for gitlabhq_production_sidekiq_urgent
  • Set label ~change::complete: /label ~change::complete
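
The replica shuffle above can be spot-checked from kubectl. This is a sketch only: the cluster context and namespace are placeholders for whatever k8s-workloads/gitlab-com uses for the Sidekiq clusters, and the grep simply matches anything with "urgent" in its name rather than assuming exact HPA/Deployment names.

```shell
# Placeholders -- substitute the real cluster context and namespace.
CONTEXT="<zonal-cluster>"
NAMESPACE="gitlab"

# HPA min/max and current replica counts for the urgent shards in both releases.
kubectl --context "$CONTEXT" -n "$NAMESPACE" get hpa | grep urgent

# Deployment replica counts, to confirm pods are shifting from the old
# "gitlab" release to the new "urgent" release after each MR is merged.
kubectl --context "$CONTEXT" -n "$NAMESPACE" get deployments | grep urgent
```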
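
The traffic drain away from gitlabhq_production_sidekiq can also be cross-checked directly on a Sidekiq pgbouncer node via the admin console. As above, the port and user are assumptions; adjust to the node's pgbouncer configuration.

```shell
# On a pgbouncer-sidekiq* node; connection details are assumptions.
psql -h 127.0.0.1 -p 6432 -U pgbouncer -d pgbouncer -c 'SHOW POOLS;' \
  | grep -E 'gitlabhq_production_sidekiq'
# cl_active / sv_active for gitlabhq_production_sidekiq should drop towards
# zero, while gitlabhq_production_sidekiq_urgent picks up the urgent shards'
# traffic.
```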

Rollback

Rollback steps - steps to be taken in the event of a need to rollback this change

Estimated Time to Complete (mins) - 60-300

This CR is a little tricky to revert, as the steps depend on the point at which we decide to roll back. In order of importance:

Monitoring

Key metrics to observe

Change Reviewer checklist

C4 C3 C2 C1:

  • Check if the following applies:
    • The scheduled day and time of execution of the change is appropriate.
    • The change plan is technically accurate.
    • The change plan includes estimated timing values based on previous testing.
    • The change plan includes a viable rollback plan.
    • The specified metrics/monitoring dashboards provide sufficient visibility for the change.

C2 C1:

  • Check if the following applies:
    • The complexity of the plan is appropriate for the corresponding risk of the change. (i.e. the plan contains clear details).
    • The change plan includes success measures for all steps/milestones during the execution.
    • The change adequately minimizes risk within the environment/service.
    • The performance implications of executing the change are well-understood and documented.
    • The specified metrics/monitoring dashboards provide sufficient visibility for the change.
      • If not, is it possible (or necessary) to make changes to observability platforms for added visibility?
    • The change has a primary and secondary SRE with knowledge of the details available during the change window.
    • The change window has been agreed with Release Managers in advance of the change. If the change is planned for APAC hours, this issue has an agreed pre-change approval.
    • The labels blocks deployments and/or blocks feature-flags are applied as necessary.

Change Technician checklist

  • Check if all items below are complete:
    • The change plan is technically accurate.
    • This Change Issue is linked to the appropriate Issue and/or Epic
    • Change has been tested in staging and results noted in a comment on this issue.
    • A dry-run has been conducted and results noted in a comment on this issue.
    • The change execution window respects the Production Change Lock periods.
    • For C1 and C2 change issues, the change event is added to the GitLab Production calendar.
    • For C1 and C2 change issues, the SRE on-call has been informed prior to change being rolled out. (In #production channel, mention @sre-oncall and this issue and await their acknowledgement.)
    • For C1 and C2 change issues, the SRE on-call provided approval with the eoc_approved label on the issue.
    • For C1 and C2 change issues, the Infrastructure Manager provided approval with the manager_approved label on the issue.
    • Release managers have been informed prior to any C1, C2, or blocks deployments change being rolled out. (In #production channel, mention @release-managers and this issue and await their acknowledgment.)
    • There are currently no active incidents that are ~severity::1 or ~severity::2
    • If the change involves doing maintenance on a database host, an appropriate silence targeting the host(s) should be added for the duration of the change.