2024-10-14: Migrate latency-sensitive Sidekiq workloads to dedicated urgent pgbouncer pools
Production Change
## Change Summary

Currently, we only have separate connection pools for Puma and for Sidekiq. All of the Sidekiq pods (irrespective of their urgency) use the same Sidekiq pgbouncer pools. This means that if we have a lot of jobs to process in a shard with high concurrency, that shard can easily use up a large share of the pgbouncer connections and starve shards with lower concurrency, which leads to incidents like #6880 (closed).

But some Sidekiq workloads are also latency-sensitive: the urgent shards process jobs that users might be waiting for. If these jobs slow down, it can manifest as Git pushes not showing up in the interface, CI jobs not starting, etc.

This CR deploys a separate connection pool on the Sidekiq pgbouncers for latency-sensitive workloads (i.e. the `urgent-*` shards). If the non-urgent shards are extremely busy, they can then only exhaust their own pgbouncer connections, which should improve the user experience by leaving the latency-sensitive Sidekiq workloads enough pgbouncer connections to finish their work in a timely manner.
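As a minimal illustration of the intended isolation (the hostname is an assumption; the pool names follow the naming used in this CR): once both pools exist, a pgbouncer admin console lists them as independent entries, so a saturated non-urgent pool no longer implies waiting urgent clients.

```shell
# Connect to a Sidekiq pgbouncer admin console (host is illustrative).
# SHOW POOLS reports connections per database/user pool; saturation in one
# pool shows up as cl_waiting > 0 without affecting the other pool.
psql -h pgbouncer-sidekiq-01 -p 6432 -U pgbouncer -d pgbouncer -c 'SHOW POOLS;'

# Expected shape of the output (values elided):
#  database                           | cl_active | cl_waiting | sv_active | ...
#  gitlabhq_production_sidekiq        |    ...    |    ...     |    ...    |
#  gitlabhq_production_sidekiq_urgent |    ...    |    ...     |    ...    |
```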
We successfully rolled out this change to staging last week with no abnormalities reported.
NOTE: The pool-topology diagram in this CR was accurate at the time of writing, and current pool sizes likely no longer match what's on the diagram.
Issue: scalability#1682
## Change Details

- Services Impacted - `~"Service::Sidekiq"`
- Change Technician - @gsgl
- Change Reviewer - @schin1
- Time tracking - 420-720 mins (depending on the amount of rollback)
- Downtime Component - none
- Deployment block - up to 5 hours on 2024-10-14 (01:30 UTC up to 06:30 UTC)
## Set Maintenance Mode in GitLab
If your change involves scheduled maintenance, add a step to set and unset maintenance mode per our runbooks. This will make sure SLA calculations adjust for the maintenance period.
## Detailed steps for the change

### Change Steps - steps to take to execute the change
Estimated Time to Complete (mins) - 300
- [ ] Set label `change::in-progress`: `/label ~change::in-progress`
- [ ] Ensure there are no ongoing deployments and deployments are stopped (@release-managers)
- [ ] Merge chef-repo MR to add the new urgent database to read/write pgbouncers with the desired pool sizes
- [ ] Wait ~30 mins so that the MR gets rolled out to all pgbouncers
- [ ] This query shows the database pool sizes for the current & new release. Once rolled out, you should see the following numbers in the `URGENT` column as defined in https://gitlab.com/gitlab-com/gl-infra/chef-repo/-/merge_requests/5157. Currently:
  - There are 10x pgbouncers * 9x `patroni-main*` hosts, and we defined the pool size as 10, so `10*9*10=900`
  - There are 6x pgbouncers * 5x `patroni-ci*` hosts, and we defined the pool size as 10, so `10*5*6=300`
  - There are 3x `pgbouncer-sidekiq-0*` (main) hosts, and we defined the pool size as 30, so `30*3=90`
  - There are 3x `pgbouncer-sidekiq-ci-*` (ci) hosts, and we defined the pool size as 35, so `35*3=105`
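  A hedged cross-check, in case you want to confirm a node directly rather than via the dashboard query: the configured pool size is also visible from a pgbouncer admin console (hostname illustrative).

  ```shell
  # SHOW DATABASES includes the configured pool_size for each database entry;
  # the gitlabhq_production_sidekiq_urgent rows should match the sizes in the MR.
  psql -h patroni-main-01 -p 6432 -U pgbouncer -d pgbouncer \
    -c 'SHOW DATABASES;' | grep sidekiq
  ```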
- [ ] Merge MR and monitor the deployment that adds the `urgent` release to production with min/max replicas for `urgent-other` and `urgent-cpu-bound` set to 1
  - The only other urgent shard is `urgent-authorized-projects`, and that can be set to the desired value as it's a low-traffic shard.
- [ ] Ensure new pods are processing work: https://dashboards.gitlab.net/goto/SErWMUzNR?orgId=1
  - You should see some activity for release `urgent` in shards `urgent-cpu-bound` and `urgent-other`, but maybe not `urgent-authorized-projects` as it's a low-RPS shard.
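  As a cluster-side cross-check of the dashboard, a minimal sketch assuming standard kubectl access to the gprd cluster (namespace and label selector are assumptions):

  ```shell
  # Pods for the new release should be Running in all three urgent-* shards.
  kubectl -n gitlab get pods -l release=urgent

  # Spot-check the logs to confirm jobs are being picked up.
  kubectl -n gitlab logs -l release=urgent --tail=20
  ```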
- Now we start shuffling min/max values in the current & new release to increase the number of pods in the new release and decrease the pods in the old release (the applied values can be sanity-checked after each MR with the `kubectl` sketch below).
- [ ] Take note of the max replicas values for `urgent-cpu-bound` and `urgent-other`. Currently:
  - `urgent-cpu-bound`: min: 50 / max: 300
  - `urgent-other`: min: 100 / max: 255
- [ ] File/review/merge an MR to update:
  - current release (`gitlab`):
    - `urgent-cpu-bound`: max: 300 -> 150
    - `urgent-other`: min: 100 -> 50, max: 255 -> 120
  - new release (`urgent`):
    - `urgent-cpu-bound`: min: 1 -> 50, max: 1 -> 150
    - `urgent-other`: min: 1 -> 100, max: 1 -> 135
- [ ] File/review/merge an MR to update:
  - current release (`gitlab`):
    - `urgent-cpu-bound`: min: 50 -> 1, max: 150 -> 1
    - `urgent-other`: min: 50 -> 1, max: 120 -> 1
    - `urgent-authorized-projects`: min: 12 -> 1, max: 26 -> 1
  - new release (`urgent`):
    - `urgent-cpu-bound`: max: 150 -> 300
    - `urgent-other`: max: 135 -> 255
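  Between MRs, the applied min/max values can be confirmed from the cluster itself; a minimal sketch, assuming the shards are scaled via HPAs in a `gitlab` namespace (names are assumptions):

  ```shell
  # MINPODS/MAXPODS should match the values from the MR that was just merged.
  kubectl -n gitlab get hpa | grep -E 'NAME|urgent'
  ```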
- [ ] Merge MR to remove `urgent-cpu-bound`, `urgent-other` and `urgent-authorized-projects` from the current `gitlab` release
- [ ] Check that traffic to the `gitlabhq_production_sidekiq` database has been significantly reduced: https://dashboards.gitlab.net/goto/1l40M8kHR?orgId=1
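  To corroborate the dashboard from a pgbouncer node itself (hostname illustrative), the per-database counters can be sampled a few minutes apart:

  ```shell
  # SHOW STATS reports cumulative per-database counters; after the migration,
  # gitlabhq_production_sidekiq should barely grow between two samples, with
  # gitlabhq_production_sidekiq_urgent absorbing the urgent traffic instead.
  psql -h pgbouncer-sidekiq-01 -p 6432 -U pgbouncer -d pgbouncer \
    -c 'SHOW STATS;' | grep sidekiq
  ```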
- [ ] Notify @release-managers that deployments can be re-enabled
- [ ] Reduce `gitlabhq_production_sidekiq` pool sizes:
  - `patroni-main*` pgbouncers pool size should be reduced to 5
  - `patroni-ci*` pgbouncers pool size should be reduced to 5
  - `pgbouncer-sidekiq-0*` (main) pool size should be reduced to 15
  - `pgbouncer-sidekiq-ci-*` (ci) pool size should be reduced to 15
- [ ] Ensure graphs are now showing metrics for `gitlabhq_production_sidekiq_urgent`
- [ ] Set label `change::complete`: `/label ~change::complete`
## Rollback

### Rollback steps - steps to be taken in the event of a need to roll back this change

Estimated Time to Complete (mins) - 60-300

This CR is a little tricky to revert, as the procedure depends on the point at which we decide to roll back. In order of importance:
- [ ] If the `gitlab` release has been reduced (min/max replicas), then revert those changes first. Be careful not to overwhelm the database with load if the new `urgent` release is processing urgent workloads (hopefully this won't happen, as the CR is meant to be executed during off-peak times).
  - You'll probably want to do the inverse of the rollout and slowly shift the min/max replicas in the opposite direction (`urgent` to `gitlab`).
- [ ] If the `urgent` release has been deployed, remove it by deleting the `additional_sidekiq_shards` key from https://gitlab.com/gitlab-com/gl-infra/k8s-workloads/gitlab-com/-/blob/master/bases/gprd.yaml.
- [ ] Finally, revert https://gitlab.com/gitlab-com/gl-infra/chef-repo/-/merge_requests/5157 to remove the `gitlabhq_production_sidekiq_urgent` pgbouncer pools (see the sketch below).
- [ ] Set label `change::aborted`: `/label ~change::aborted`
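If a revert has to be done from the command line rather than via the MR's Revert button, the usual flow is a sketch like this (SHA and branch name are placeholders):

```shell
# In a chef-repo checkout: revert the merge commit of the pool-creation MR,
# then push and open a revert MR through the normal review flow.
git revert -m 1 <merge-commit-sha>
git push origin revert-urgent-pgbouncer-pools
```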
## Monitoring

### Key metrics to observe
- Metric: patroni's `pg_active_db_connections_primary` component saturation
  - Location: https://dashboards.gitlab.net/d/patroni-main/patroni3a-overview?orgId=1&viewPanel=1217942947
  - What are we looking for? Server backends will likely increase during the migration, but by the end of the CR we should be back at roughly the same level as before the CR started.
- Metric: patroni-ci's `pg_active_db_connections_primary` component saturation
  - Location: https://dashboards.gitlab.net/d/patroni-ci-main/patroni-ci3a-overview?orgId=1&viewPanel=1217942947
  - What are we looking for? Server backends will likely increase during the migration, but by the end of the CR we should be back at roughly the same level as before the CR started.
- Metric: all graphs
  - Location: https://dashboards.gitlab.net/goto/8tNzS8kHR?orgId=1
  - What are we looking for? Ensure graphs look healthy (no noticeable/concerning spikes or drops) and that we don't have growing queues or a drop in throughput.
- Metric: patroni main leader `node_cpu_utilization`
  - Location: https://dashboards.gitlab.net/goto/HhCMmCkNR?orgId=1
  - What are we looking for? No abnormal jumps in CPU utilization.
- Metric: patroni main leader `node_load1/node_cpu_count`
  - Location: https://dashboards.gitlab.net/goto/SZU2mjzNg?orgId=1
  - What are we looking for? No abnormal jumps in load.
- Metric: patroni CI leader `node_cpu_utilization`
  - Location: https://dashboards.gitlab.net/goto/oUmJmjkHg?orgId=1
  - What are we looking for? No abnormal jumps in CPU utilization.
- Metric: patroni CI leader `node_load1/node_cpu_count`
  - Location: https://dashboards.gitlab.net/goto/Dh9CmCkHR?orgId=1
  - What are we looking for? No abnormal jumps in load.
- Metric: pgbouncer pools saturation on writers (main)
  - Location: https://dashboards.gitlab.net/goto/ZdnHISmHg?orgId=1
  - What are we looking for? Ensuring that the changes to the DB pool sizes aren't showing signs of saturation.
- Metric: pgbouncer pools saturation on writers (CI)
  - Location: https://dashboards.gitlab.net/goto/EcU0SSiNg?orgId=1
  - What are we looking for? Ensuring that the changes to the DB pool sizes aren't showing signs of saturation.
- Metric: pgbouncer pools saturation on readers (main)
  - Location: https://dashboards.gitlab.net/goto/5NjtISiHR?orgId=1
  - What are we looking for? Ensuring that the changes to the DB pool sizes aren't showing signs of saturation.
- Metric: pgbouncer pools saturation on readers (CI)
  - Location: https://dashboards.gitlab.net/goto/iX4oISiHR?orgId=1
  - What are we looking for? Ensuring that the changes to the DB pool sizes aren't showing signs of saturation.
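Should the dashboards be unavailable, the same saturation signal can be spot-checked on any pgbouncer node (hostname illustrative): a sustained non-zero `cl_waiting` or a growing `maxwait` on a pool is exactly what these panels visualize.

```shell
# Watch the pools every 5 seconds; persistent cl_waiting/maxwait means the
# pool is saturated and clients are queueing.
watch -n 5 "psql -h pgbouncer-sidekiq-01 -p 6432 -U pgbouncer -d pgbouncer -c 'SHOW POOLS;'"
```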
## Change Reviewer checklist

- [ ] Check if the following applies:
  - The scheduled day and time of execution of the change is appropriate.
  - The change plan is technically accurate.
  - The change plan includes estimated timing values based on previous testing.
  - The change plan includes a viable rollback plan.
  - The specified metrics/monitoring dashboards provide sufficient visibility for the change.
- [ ] Check if the following applies:
  - The complexity of the plan is appropriate for the corresponding risk of the change (i.e. the plan contains clear details).
  - The change plan includes success measures for all steps/milestones during the execution.
  - The change adequately minimizes risk within the environment/service.
  - The performance implications of executing the change are well-understood and documented.
  - The specified metrics/monitoring dashboards provide sufficient visibility for the change.
    - If not, is it possible (or necessary) to make changes to observability platforms for added visibility?
  - The change has a primary and secondary SRE with knowledge of the details available during the change window.
  - The change window has been agreed with Release Managers in advance of the change. If the change is planned for APAC hours, this issue has an agreed pre-change approval.
  - The labels `blocks deployments` and/or `blocks feature-flags` are applied as necessary.
## Change Technician checklist

- [ ] Check if all items below are complete:
  - The change plan is technically accurate.
  - This Change Issue is linked to the appropriate Issue and/or Epic.
  - Change has been tested in staging and results noted in a comment on this issue.
  - A dry-run has been conducted and results noted in a comment on this issue.
  - The change execution window respects the Production Change Lock periods.
  - For C1 and C2 change issues, the change event is added to the GitLab Production calendar.
  - For C1 and C2 change issues, the SRE on-call has been informed prior to the change being rolled out. (In the #production channel, mention `@sre-oncall` and this issue and await their acknowledgement.)
  - For C1 and C2 change issues, the SRE on-call provided approval with the eoc_approved label on the issue.
  - For C1 and C2 change issues, the Infrastructure Manager provided approval with the manager_approved label on the issue.
  - Release managers have been informed prior to any C1, C2, or `blocks deployments` change being rolled out. (In the #production channel, mention `@release-managers` and this issue and await their acknowledgment.)
  - There are currently no active incidents that are severity1 or severity2.
  - If the change involves doing maintenance on a database host, an appropriate silence targeting the host(s) should be added for the duration of the change.
