[GPRD] - Further increase the number of concurrently archived WAL files to mitigate pileup (15 => 20)
Production Change
Change Summary
As a follow-up to #6599 (closed), we are further increasing the throughput for uploading archived WAL files to object storage.
Motivation
Recently, during weekday workload peaks, Postgres has been generating transaction logs slightly faster than they can be archived to object storage. This gap between demand and capacity causes a backlog of transaction log files (WAL files) to accumulate temporarily for several hours.
Accumulating such a backlog carries hidden risks, the most severe of which is that it silently impedes the rapid failover mechanism for promoting a different node to become the primary database. If such a failover becomes necessary while the archiver has a backlog, the result could be a prolonged outage.
For a background summary, see: https://gitlab.com/gitlab-com/gl-infra/reliability/-/issues/15362#note_877494437
There are several options for correcting this saturation problem. This tuning adjustment is the simplest and quickest of those options. Additional proposals are being actively discussed in https://gitlab.com/gitlab-com/gl-infra/reliability/-/issues/15362. For quick reference, here is yesterday's list of mitigation ideas: https://gitlab.com/gitlab-com/gl-infra/reliability/-/issues/15362#note_876344687
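For context, the backlog is visible directly on the primary as `.ready` marker files in the `archive_status` directory. Here is a minimal sketch for spot-checking it, assuming PostgreSQL 12 with omnibus-style paths and a `gitlab-psql` OS user (both are assumptions; adjust to the actual host layout):

```shell
# Count WAL segments still waiting to be archived on the primary.
# This is the same backlog reported by pg_archiver_pending_wal_count.
# The data directory path and the gitlab-psql user are assumptions.
sudo -u gitlab-psql ls /var/opt/gitlab/postgresql/data/pg_wal/archive_status \
  | grep -c '\.ready$'
```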
Change Details
- Services Impacted - Service::Postgres
- Change Technician - @msmiley
- Change Reviewer - @cmcfarland
- Time tracking - 30 minutes
- Downtime Component - none
Detailed steps for the change
Pre-Change Steps - steps to be completed before execution of the change
Estimated Time to Complete (mins) - 0 minutes
- Set label change::in-progress on this issue
Change Steps - steps to take to execute the change
Estimated Time to Complete (mins) - 15 minutes
- Merge and apply Chef MR: https://gitlab.com/gitlab-com/gl-infra/chef-repo/-/merge_requests/1540
- Apply changes to the primary db (currently patroni-v12-05): `sudo chef-client`. The config change takes effect immediately.
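As a post-convergence sanity check, it is worth confirming that the new concurrency value actually landed in the rendered archiver configuration. A sketch, assuming the knob is a WAL-G-style upload-concurrency setting rendered into a config directory on the host (the exact attribute name and path live in the MR above, so treat both as assumptions):

```shell
# Verify the rendered config now carries the new value (20), not the old (15).
# /etc/wal-g.d is an assumed location; see the Chef MR for the real path.
sudo grep -ri 'concurrency' /etc/wal-g.d/ 2>/dev/null
```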
Post-Change Steps - steps to take to verify the change
Estimated Time to Complete (mins) - 15 minutes
- Observe the metrics for 15 minutes
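During the observation window, the pending-WAL gauge can also be polled from a terminal rather than watched on a graph; a sketch using the standard Prometheus HTTP query API (assumes network access to prometheus-db.gprd.gitlab.net and `jq` installed):

```shell
# Poll the archiver backlog once a minute for the 15-minute window.
# The value should trend down (or hover near zero), not keep climbing.
watch -n 60 \
  "curl -s 'https://prometheus-db.gprd.gitlab.net/api/v1/query?query=pg_archiver_pending_wal_count' \
   | jq -r '.data.result[] | [.metric.instance, .value[1]] | @tsv'"
```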
Rollback
Rollback steps - steps to be taken in the event of a need to rollback this change
Estimated Time to Complete (mins) - 10 minutes
- Revert the Chef MR: https://gitlab.com/gitlab-com/gl-infra/chef-repo/-/merge_requests/1540
- Apply Chef changes on the primary db host via `sudo chef-client`. The config change takes effect immediately.
Monitoring
Key metrics to observe
- Metric: `pg_archiver_pending_wal_count`
  - Location: https://prometheus-db.gprd.gitlab.net/graph?g0.expr=pg_archiver_pending_wal_count%20%3E%202000&g0.tab=0&g0.stacked=0&g0.show_exemplars=0&g0.range_input=6h
  - What changes to this metric should prompt a rollback: If this metric increases significantly for more than 10 minutes after the change, check whether archiving is still working (a direct check is sketched below this list). If archiving has stopped or slowed down, roll back.
- Metric: Network Traffic Received (archive bucket)
  - Location: https://console.cloud.google.com/monitoring/dashboards/resourceDetail/gcs_bucket,project_id:gitlab-production,location:us,bucket_name:gitlab-gprd-postgres-backup,storage_class:MULTI_REGIONAL?project=gitlab-production&timeDomain=1d
  - What changes to this metric should prompt a rollback: If the traffic stops or drops sharply while `pg_archiver_pending_wal_count` is still high and increasing.
- Metric: Node CPU
  - Location: https://dashboards.gitlab.net/d/patroni-main/patroni-overview?orgId=1&viewPanel=60
  - What changes to this metric should prompt a rollback: A significant increase in CPU utilization.
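If any of the metrics above look suspicious, archiver progress can also be confirmed directly from Postgres before deciding on a rollback. A sketch, assuming the `gitlab-psql` wrapper exists on the Patroni hosts (plain `psql` as a superuser works equally well):

```shell
# pg_stat_archiver is cumulative: last_archived_wal should keep advancing
# and failed_count should stay flat while the new setting is in effect.
sudo gitlab-psql -c \
  "SELECT archived_count, last_archived_wal, last_archived_time,
          failed_count, last_failed_wal, last_failed_time
     FROM pg_stat_archiver;"
```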
Summary of infrastructure changes
- Does this change introduce new compute instances?
- Does this change re-size any existing compute instances?
- Does this change introduce any additional usage of tooling like Elastic Search, CDNs, Cloudflare, etc?
- Summary of the above
Change Reviewer checklist
- The scheduled day and time of execution of the change is appropriate.
- The change plan is technically accurate.
- The change plan includes estimated timing values based on previous testing.
- The change plan includes a viable rollback plan.
- The complexity of the plan is appropriate for the corresponding risk of the change (i.e. the plan contains clear details).
- The change plan includes success measures for all steps/milestones during the execution.
- The change adequately minimizes risk within the environment/service.
- The performance implications of executing the change are well-understood and documented.
- The specified metrics/monitoring dashboards provide sufficient visibility for the change. If not, is it possible (or necessary) to make changes to observability platforms for added visibility?
- The change has a primary and secondary SRE with knowledge of the details available during the change window.
Change Technician checklist
- This issue has a criticality label (e.g. C1, C2, C3, C4) and a change-type label (e.g. change::unscheduled, change::scheduled) based on the Change Management Criticalities.
- This issue has the change technician as the assignee.
- Pre-Change, Change, Post-Change, and Rollback steps have been filled out and reviewed.
- This Change Issue is linked to the appropriate Issue and/or Epic.
- Necessary approvals have been completed based on the Change Management Workflow.
- Change has been tested in staging and results noted in a comment on this issue.
- A dry-run has been conducted and results noted in a comment on this issue.
- SRE on-call has been informed prior to change being rolled out. (In the #production channel, mention @sre-oncall and this issue and await their acknowledgement.)
- Release managers have been informed (if needed; cases include DB changes) prior to change being rolled out. (In the #production channel, mention @release-managers and this issue and await their acknowledgment.)
- There are currently no active incidents.