
[GPRD] Decrease the non-urgent sidekiq pool size on gprd sidekiq pgbouncer instances

Production Change

Change Summary

In https://gitlab.com/gitlab-com/gl-infra/production/-/issues/19984 we experienced a site-wide outage caused by non-urgent Sidekiq jobs overwhelming the main database primary.

The non-urgent Sidekiq pool is intended to act as a "bottleneck" that keeps non-urgent jobs from overwhelming the primary. Since the outage occurred with a pool_size of 30, the decision was made to reduce the pool_size by 50% to keep the combined urgent + non-urgent Sidekiq connection count below 176, which is the vCPU count of the primary node - https://gitlab.com/gitlab-com/gl-infra/production-engineering/-/issues/26920#note_2743747948

This MR drops the non-urgent Sidekiq pool_size from 30 to 15 in an effort to avoid site-wide outages when non-urgent jobs get backed up.

The pool reduction is implemented in three steps, with 24 hours of observation between each; a back-of-envelope sketch of the resulting connection ceiling follows the list:

  1. 1st day -> pool = 25
  2. 2nd day -> pool = 20
  3. 3rd day -> pool = 15
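
For a rough sense of the ceiling each step sets, a minimal sketch, assuming one non-urgent pool per pgbouncer instance (so the maximum number of non-urgent server connections to the primary is 3 x pool_size across the three instances):

    # Maximum non-urgent server connections to the primary at each step
    # (assumes one non-urgent pool per pgbouncer instance; 3 instances total)
    for pool in 25 20 15; do
      echo "pool_size=${pool} -> max $((3 * pool)) non-urgent connections"
    done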

The following pgbouncer instances are affected by this configuration change:

pgbouncer-sidekiq-01-db-gprd.c.gitlab-production.internal
pgbouncer-sidekiq-02-db-gprd.c.gitlab-production.internal
pgbouncer-sidekiq-03-db-gprd.c.gitlab-production.internal
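
To confirm what is currently rendered on those hosts, a hedged sketch reusing the knife pattern from the change steps (the pgbouncer.ini path is an assumption; adjust to wherever Chef renders the config):

    # Inspect the currently rendered pool_size on each affected host
    # (config path is an assumption)
    knife ssh "roles:gprd-base-db-pgbouncer-sidekiq" \
      "sudo grep -n 'pool_size' /etc/pgbouncer/pgbouncer.ini"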

Risk analysis

With this reduction there is a risk of Sidekiq pool saturation. As discussed at https://gitlab.com/gitlab-com/gl-infra/production-engineering/-/issues/26920#note_2743747948, saturation of the non-urgent Sidekiq workload should not be a problem (in fact, it is intended), as long as the resulting delays in Sidekiq job latency stay at a level that does not cause problems for users.

Note: we still don't have a threshold SLO for non-urgent Sidekiq job latency; @reprazent might help us with that.
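
Until such an SLO exists, queueing latency can still be spot-checked. A hedged sketch querying Prometheus over its HTTP API (the Prometheus URL, the metric name sidekiq_jobs_queue_duration_seconds, and the urgency label are assumptions; substitute whatever the Grafana dashboard uses):

    # p95 queueing latency for low-urgency Sidekiq jobs over the last 5 minutes
    # (Prometheus URL, metric name, and label names are assumptions)
    curl -sG 'http://prometheus.example.internal/api/v1/query' \
      --data-urlencode 'query=histogram_quantile(0.95, sum by (le) (rate(sidekiq_jobs_queue_duration_seconds_bucket{urgency="low"}[5m])))'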

Change Details

  1. Services Impacted - Database, Service::Pgbouncer
  2. Change Technician - @vporalla , @rhenchen.gitlab
  3. Change Reviewer - @rhenchen.gitlab, @alexander-sosna, @bprescott_
  4. Scheduled Date and Time (UTC in format YYYY-MM-DD HH:MM) - 2025-09-22 04:30
  5. Time tracking - 3 Days
  6. Downtime Component - none

Maintenance Mode in GitLab

If your change involves scheduled maintenance, add a step to set and unset maintenance mode per our runbooks. This will make sure SLA calculations adjust for the maintenance period.
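
This change expects no downtime, but for reference, a minimal sketch of toggling maintenance mode through the application settings API (host and token are placeholders):

    # Enable maintenance mode at the start of the window (placeholders: host, token)
    curl --request PUT --header "PRIVATE-TOKEN: <admin-token>" \
      "https://gitlab.example.com/api/v4/application/settings?maintenance_mode=true"
    # Unset it once the window closes
    curl --request PUT --header "PRIVATE-TOKEN: <admin-token>" \
      "https://gitlab.example.com/api/v4/application/settings?maintenance_mode=false"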

Detailed steps for the change

Pre-execution steps

  • Make sure all tasks in Change Technician checklist are done
  • For C1 and C2 change issues, the SRE on-call has been informed prior to change being rolled out. (Search the PagerDuty schedule for "SRE 8-hour" to find who will be on-call at the scheduled day and time. SREs on-call must be informed of plannable C1 changes at least 2 weeks in advance.)
    • The SRE on-call provided approval with the eoc_approved label on the issue.
  • For C1, C2, or blocks deployments change issues, Release managers have been informed prior to change being rolled out. (In #production channel, mention @release-managers and this issue and await their acknowledgment.)
  • There are currently no active incidents that are severity::1 or severity::2
  • If the change involves doing maintenance on a database host, an appropriate silence targeting the host(s) should be added for the duration of the change; see the sketch after this list.
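
A minimal silencing sketch using amtool (the Alertmanager URL and the fqdn matcher label are assumptions; use whatever matchers the runbooks prescribe):

    # Silence alerts for the three pgbouncer-sidekiq hosts during the change window
    # (Alertmanager URL and label name are assumptions)
    amtool silence add --alertmanager.url=http://alertmanager.example.internal:9093 \
      --author="$(whoami)" --duration=2h \
      --comment="CR: reduce non-urgent sidekiq pool_size" \
      'fqdn=~"pgbouncer-sidekiq-0[1-3]-db-gprd.*"'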

Change steps - steps to take to execute the change

Estimated Time to Complete (mins) - 2 hours (Each Day)

  • Set the change::in-progress label: /label ~change::in-progress
  • 1st DAY - 2025-09-22
    • Monitor the metrics using the Grafana Dashboard for 24 hours
  • 2nd DAY - 2025-09-23
    • Update the Rollback step in the CR to revert the next MR
    • Merge the MR https://gitlab.com/gitlab-com/gl-infra/chef-repo/-/merge_requests/6432
    • Run Chef on the affected instances (sudo chef-client); a verification sketch follows this list:

      knife ssh "roles:gprd-base-db-pgbouncer-sidekiq" "sudo chef-client"

    • Reschedule the next step to the next working day (2025-09-24)
    • Open a new MR to decrease the pool by 5
    • Monitor the metrics using the Grafana Dashboard for 24 hours
  • 3rd DAY - 2025-09-24
    • Monitor the metrics using the Grafana Dashboard for 4 hours
  • Set the change::complete label: /label ~change::complete
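
After each chef-client run, a hedged verification sketch on the pgbouncer admin console (admin port, user, and auth are assumptions; run on each affected host):

    # Confirm pgbouncer picked up the new pool_size after the Chef run
    # (admin port/user/auth are assumptions)
    psql -h 127.0.0.1 -p 6432 -U pgbouncer -d pgbouncer -c 'SHOW DATABASES;'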

Rollback

Rollback steps - steps to be taken in the event of a need to rollback this change

Estimated Time to Complete (mins) - 10 minutes
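
A minimal sketch of the likely rollback path, assuming the revert MR called out in the Day-2 steps has already been prepared: merge the revert in chef-repo, then re-converge the affected hosts.

    # Re-converge the pgbouncer hosts after merging the prepared revert MR
    knife ssh "roles:gprd-base-db-pgbouncer-sidekiq" "sudo chef-client"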

Monitoring

Key metrics to observe
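
The dashboard link is not filled in above; independent of Grafana, pool saturation can be watched directly on the pgbouncer admin console. A hedged sketch (port, user, and auth are assumptions); rising cl_waiting or maxwait values mean clients are queueing on the pool:

    # Watch for pool saturation on each affected host
    # (admin port/user/auth are assumptions)
    watch -n 5 "psql -h 127.0.0.1 -p 6432 -U pgbouncer -d pgbouncer -c 'SHOW POOLS;'"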

Change Reviewer checklist

C4 C3 C2 C1:

  • Check if the following applies:
    • The scheduled day and time of execution of the change is appropriate.
    • The change plan is technically accurate.
    • The change plan includes estimated timing values based on previous testing.
    • The change plan includes a viable rollback plan.
    • The specified metrics/monitoring dashboards provide sufficient visibility for the change.

C2 C1:

  • Check if the following applies:
    • The complexity of the plan is appropriate for the corresponding risk of the change. (i.e. the plan contains clear details).
    • The change plan includes success measures for all steps/milestones during the execution.
    • The change adequately minimizes risk within the environment/service.
    • The performance implications of executing the change are well-understood and documented.
    • The specified metrics/monitoring dashboards provide sufficient visibility for the change.
      • If not, is it possible (or necessary) to make changes to observability platforms for added visibility?
    • The change has a primary and secondary SRE with knowledge of the details available during the change window.
    • The change window has been agreed with Release Managers in advance of the change. If the change is planned for APAC hours, this issue has an agreed pre-change approval.
    • The labels blocks deployments and/or blocks feature-flags are applied as necessary.

Change Technician checklist

  • The Change Criticality has been set appropriately and requirements have been reviewed.
  • The change plan is technically accurate.
  • The rollback plan is technically accurate and detailed enough to be executed by anyone with access.
  • This Change Issue is linked to the appropriate Issue and/or Epic
  • Change has been tested in staging and results noted in a comment on this issue.
  • A dry-run has been conducted and results noted in a comment on this issue.
  • The change execution window respects the Production Change Lock periods.
  • For C1 and C2 change issues, the change event is added to the GitLab Production calendar.
  • For C1 change issues, a Senior Infrastructure Manager has provided approval with the manager_approved label on the issue.
  • For C2 change issues, an Infrastructure Manager provided approval with the manager_approved label on the issue.
  • Mention @gitlab-org/saas-platforms/inframanagers in this issue to request approval and provide visibility to all infrastructure managers.
  • For C1, C2, or blocks deployments change issues, confirm with Release managers that the change does not overlap or hinder any release process (In #production channel, mention @release-managers and this issue and await their acknowledgment.)