[GPRD] - Further increase the number of concurrently archived WAL files to mitigate pileup (15 => 20)

Production Change

Change Summary

As a follow-up to #6599 (closed), we are further increasing the throughput for uploading archived WAL files to object storage.
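For context, a rough back-of-envelope view of what the concurrency bump buys. The 16 MiB segment size is the Postgres default; the per-upload latency below is an illustrative assumption, not a measured GPRD value:

```python
# Back-of-envelope estimate of archiving throughput at two concurrency levels.
# WAL segment size of 16 MiB is the Postgres default; the per-upload latency
# is purely an illustrative assumption.
WAL_SEGMENT_MIB = 16
UPLOAD_SECONDS = 1.5   # assumed average time to push one segment to object storage

def archive_throughput_mib_per_s(concurrency: int) -> float:
    """Theoretical steady-state upload throughput with N concurrent uploads."""
    return concurrency * WAL_SEGMENT_MIB / UPLOAD_SECONDS

before = archive_throughput_mib_per_s(15)
after = archive_throughput_mib_per_s(20)
print(f"15 workers: ~{before:.0f} MiB/s, 20 workers: ~{after:.0f} MiB/s "
      f"(+{(after / before - 1) * 100:.0f}% headroom)")
```

The point is only that theoretical archiving headroom scales roughly linearly with the concurrency setting, provided object storage itself is not the bottleneck.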

Motivation

Recently, during weekday workload peaks, Postgres has been generating transaction logs slightly faster than they can be archived to object storage. This demand-versus-capacity gap causes a backlog of transaction log files (WAL files) to accumulate temporarily for several hours.

Accumulating such a backlog carries some hidden risks, the most severe of which is that it silently impedes the rapid failover mechanism for promoting a different node to become the primary database. If a failover becomes necessary while the archiver has a backlog, this could lead to a prolonged outage.

For a background summary, see: https://gitlab.com/gitlab-com/gl-infra/reliability/-/issues/15362#note_877494437

There are several options for correcting this saturation problem. This tuning adjustment is the simplest and quickest of those options. Additional proposals are being actively discussed in https://gitlab.com/gitlab-com/gl-infra/reliability/-/issues/15362. For quick reference, here is yesterday's list of mitigation ideas: https://gitlab.com/gitlab-com/gl-infra/reliability/-/issues/15362#note_876344687

Change Details

  1. Services Impacted - ServicePostgres
  2. Change Technician - @msmiley
  3. Change Reviewer - @cmcfarland
  4. Time tracking - 30 minutes
  5. Downtime Component - none

Detailed steps for the change

Pre-Change Steps - steps to be completed before execution of the change

Estimated Time to Complete (mins) - 0 minutes

Change Steps - steps to take to execute the change

Estimated Time to Complete (mins) - 15 minutes

Post-Change Steps - steps to take to verify the change

Estimated Time to Complete (mins) - 15 minutes

  • Observe the metrics for 15 minutes

Rollback

Rollback steps - steps to be taken in the event of a need to roll back this change

Estimated Time to Complete (mins) - 10 minutes

Monitoring

Key metrics to observe
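
No specific dashboards are listed here yet. As a hedged sketch of the kind of check that could back this up, the snippet below reads `pg_stat_archiver` and counts pending `.ready` status files via `pg_ls_archive_statusdir()` (PostgreSQL 12+); the connection string is a placeholder, not the actual GPRD DSN:

```python
# Sketch of a WAL archive backlog check against the primary; DSN is a placeholder.
import psycopg2

conn = psycopg2.connect("dbname=gitlabhq_production host=PRIMARY_HOST")  # placeholder
with conn, conn.cursor() as cur:
    # Count WAL segments still waiting to be archived (.ready status files).
    cur.execute(
        "SELECT count(*) FROM pg_ls_archive_statusdir() WHERE name LIKE '%.ready'"
    )
    backlog = cur.fetchone()[0]

    # Cumulative archiver counters: successes, failures, most recently archived segment.
    cur.execute(
        "SELECT archived_count, failed_count, last_archived_wal, last_archived_time "
        "FROM pg_stat_archiver"
    )
    archived_count, failed_count, last_wal, last_time = cur.fetchone()

print(f"pending .ready files: {backlog}")
print(f"archived: {archived_count}, failed: {failed_count}, "
      f"last archived: {last_wal} at {last_time}")
```

If the pending count stays near zero after the change, the archiver is keeping up; a steadily growing count during the weekday peak would indicate the backlog is returning.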

Summary of infrastructure changes

  • Does this change introduce new compute instances?
  • Does this change re-size any existing compute instances?
  • Does this change introduce any additional usage of tooling like Elasticsearch, CDNs, Cloudflare, etc?

Summary of the above

Change Reviewer checklist

C4 C3 C2 C1:

  • The scheduled day and time of execution of the change is appropriate.
  • The change plan is technically accurate.
  • The change plan includes estimated timing values based on previous testing.
  • The change plan includes a viable rollback plan.
  • The specified metrics/monitoring dashboards provide sufficient visibility for the change.

C2 C1:

  • The complexity of the plan is appropriate for the corresponding risk of the change. (i.e. the plan contains clear details).
  • The change plan includes success measures for all steps/milestones during the execution.
  • The change adequately minimizes risk within the environment/service.
  • The performance implications of executing the change are well-understood and documented.
  • The specified metrics/monitoring dashboards provide sufficient visibility for the change. - If not, is it possible (or necessary) to make changes to observability platforms for added visibility?
  • The change has a primary and secondary SRE with knowledge of the details available during the change window.

Change Technician checklist

  • This issue has a criticality label (e.g. C1, C2, C3, C4) and a change-type label (e.g. change::unscheduled, change::scheduled) based on the Change Management Criticalities.
  • This issue has the change technician as the assignee.
  • Pre-Change, Change, Post-Change, and Rollback steps have been filled out and reviewed.
  • This Change Issue is linked to the appropriate Issue and/or Epic.
  • Necessary approvals have been completed based on the Change Management Workflow.
  • Change has been tested in staging and results noted in a comment on this issue.
  • A dry-run has been conducted and results noted in a comment on this issue.
  • SRE on-call has been informed prior to change being rolled out. (In #production channel, mention @sre-oncall and this issue and await their acknowledgement.)
  • Release managers have been informed (If needed! Cases include DB change) prior to change being rolled out. (In #production channel, mention @release-managers and this issue and await their acknowledgment.)
  • There are currently no active incidents.