# [08/23/2022 - 20:00 UTC] - GPRD - Set max_worker_processes=30 for all the GPRD DR (archive and delayed) replicas

## Production Change

### Change Summary
During the first dry run of the gstg OS upgrade rollout (from 16.04 to 20.04) and rollback (from 20.04 to 16.04), all the archive and delayed DR replicas stopped applying WALs and fell out of sync. We successfully implemented a solution in gstg (MR) by setting `max_worker_processes=30`.

During the OS upgrade our Ansible playbook will raise `max_worker_processes` to 30 to improve reindex performance. With this change, all our gprd DR (archive and delayed) replicas will continue to work during and after the OS upgrade.
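Since these replicas run Omnibus GitLab (the steps below use `gitlab-ctl reconfigure`), the change amounts to a single PostgreSQL parameter. A minimal sketch of what the chef-repo MR effectively sets, assuming the standard Omnibus attribute name:

```ruby
# /etc/gitlab/gitlab.rb (sketch only - the real change lives in chef-repo roles)
# Raised from the PostgreSQL default of 8 so the standbys keep applying WALs
# while reindexing runs with extra parallel workers during the OS upgrade.
postgresql['max_worker_processes'] = 30
```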
### Change Details

- **Services Impacted** - ~"Service::Postgres" ~"Service::Patroni" ~"Service::PatroniCI" Database
- **Change Technician** - @bshah11
- **Change Reviewer** - @rhenchen.gitlab
- **Time tracking** -
- **Downtime Component** - yes; the gprd DR (archive and delayed) replicas will be restarted
### Detailed steps for the change

#### Change Steps - steps to take to execute the change

Estimated Time to Complete: 4 hours
1. Set label ~"change::in-progress": `/label ~change::in-progress`
2. Get a green light from @sre-oncall and @release-managers.
3. The change will be performed on the following hosts:
   - postgres-ci-dr-delayed-2004-01-db-gprd.c.gitlab-production.internal
   - postgres-ci-dr-archive-2004-01-db-gprd.c.gitlab-production.internal
   - postgres-ci-dr-archive-01-db-gprd.c.gitlab-production.internal
   - postgres-ci-dr-delayed-01-db-gprd.c.gitlab-production.internal
   - postgres-dr-main-delayed-2004-01-db-gprd.c.gitlab-production.internal
   - postgres-dr-delayed-01-db-gprd.c.gitlab-production.internal
   - postgres-dr-archive-01-db-gprd.c.gitlab-production.internal
   - postgres-registry-dr-delayed-01-db-gprd.c.gitlab-production.internal
   - postgres-registry-dr-archive-01-db-gprd.c.gitlab-production.internal
   - postgres-dr-main-archive-2004-01-db-gprd.c.gitlab-production.internal
4. Merge MR https://gitlab.com/gitlab-com/gl-infra/chef-repo/-/merge_requests/2213 and wait for the MR pipeline to finish.
5. Check the `max_worker_processes` value. Expected value: 8. `knife ssh "roles:gprd-base-db-postgres" "sudo gitlab-psql -c \"show max_worker_processes\""`
6. Enable and execute chef-client: `knife ssh "roles:gprd-base-db-postgres" "sudo chef-client-enable; sudo /opt/chef/bin/chef-client"`
7. Reconfigure GitLab: `knife ssh "roles:gprd-base-db-postgres" "sudo gitlab-ctl reconfigure"`
8. Stop and start Postgres: `knife ssh "roles:gprd-base-db-postgres" "sudo gitlab-ctl stop postgresql; sudo gitlab-ctl start postgresql"`
9. Check the `max_worker_processes` value. Expected value: 30. `knife ssh "roles:gprd-base-db-postgres" "sudo gitlab-psql -c \"show max_worker_processes\""`
10. Log in to the hosts and check the Postgres log to confirm it is restoring/applying the WALs.
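The final log check can be scripted. A hedged sketch, with two assumptions: the log location (Omnibus installs typically log under `/var/log/gitlab/postgresql/`) and the `restored log file` / `could not restore` messages, which are standard PostgreSQL archive-recovery log lines:

```shell
#!/bin/sh
# Sketch: decide from the Postgres log whether a standby is still
# restoring WALs. $1 is the path to the log file (location is an assumption).
check_wal_apply() {
  log="$1"
  # A failing restore_command surfaces as "could not restore ..." entries.
  tail -n 200 "$log" 2>/dev/null | grep -q 'could not restore' && { echo "stalled"; return 1; }
  # Successful archive recovery logs 'restored log file "..." from archive'.
  tail -n 200 "$log" 2>/dev/null | grep -q 'restored log file' && { echo "healthy"; return 0; }
  echo "no recent WAL restore entries"
  return 1
}
```

Example usage on a replica (path assumed): `check_wal_apply /var/log/gitlab/postgresql/current`.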
### Wrap up

1. Set label ~"change::complete": `/label ~change::complete`
### Rollback

#### Rollback steps - steps to be taken in the event of a need to rollback this change

Estimated Time to Complete: 4 hours

1. Revert MR https://gitlab.com/gitlab-com/gl-infra/chef-repo/-/merge_requests/2213 and wait for the MR pipeline to finish.
2. Check the `max_worker_processes` value. Expected value: 30. `knife ssh "roles:gprd-base-db-postgres" "sudo gitlab-psql -c \"show max_worker_processes\""`
3. Enable and execute chef-client: `knife ssh "roles:gprd-base-db-postgres" "sudo chef-client-enable; sudo /opt/chef/bin/chef-client"`
4. Reconfigure GitLab: `knife ssh "roles:gprd-base-db-postgres" "sudo gitlab-ctl reconfigure"`
5. Stop and start Postgres: `knife ssh "roles:gprd-base-db-postgres" "sudo gitlab-ctl stop postgresql; sudo gitlab-ctl start postgresql"`
6. Check the `max_worker_processes` value. Expected value: 8. `knife ssh "roles:gprd-base-db-postgres" "sudo gitlab-psql -c \"show max_worker_processes\""`
7. Log in to the hosts and check the Postgres log to confirm it is restoring/applying the WALs.
8. Set label ~"change::aborted": `/label ~change::aborted`
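Whether rolling forward or back, WAL replay can also be confirmed from `gitlab-psql` itself. A sketch using standard PostgreSQL recovery functions (available on PostgreSQL 10+):

```sql
-- On a DR replica this should return 't' (the node is in recovery).
SELECT pg_is_in_recovery();

-- Run twice a few seconds apart; the replay LSN should advance.
SELECT pg_last_wal_replay_lsn();

-- Approximate replay delay (intentionally large on the delayed replicas).
SELECT now() - pg_last_xact_replay_timestamp() AS replay_delay;
```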
### Monitoring

#### Key metrics to observe

- Metric: replication lag (~10 to 20) for DR Archive
  - Location: https://thanos.gitlab.net/graph?g0.expr=pg_replication_lag%7Benv%3D%22gprd%22%2C%20instance%3D~%22postgres-(ci-)%3Fdr-archive-.*%22%7D&g0.tab=0&g0.stacked=0&g0.range_input=6h&g0.max_source_resolution=0s&g0.deduplicate=1&g0.partial_response=0&g0.store_matches=%5B%5D
  - What changes to this metric should prompt a rollback: the replication lag keeps increasing and the Postgres logs show errors restoring WAL files.
- Metric: replication lag (~28k) for DR Delayed
  - Location: https://thanos.gitlab.net/graph?g0.expr=pg_replication_lag%7Benv%3D%22gprd%22%2C%20instance%3D~%22postgres-(ci-)%3Fdr-delayed-.*%22%7D&g0.tab=0&g0.stacked=0&g0.range_input=6h&g0.max_source_resolution=0s&g0.deduplicate=1&g0.partial_response=0&g0.store_matches=%5B%5D
  - What changes to this metric should prompt a rollback: the replication lag keeps increasing and the Postgres logs show errors restoring WAL files.
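For reference, the Thanos links above decode to these PromQL queries:

```
pg_replication_lag{env="gprd", instance=~"postgres-(ci-)?dr-archive-.*"}
pg_replication_lag{env="gprd", instance=~"postgres-(ci-)?dr-delayed-.*"}
```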
### Change Reviewer checklist

Check if the following applies:

- [ ] The scheduled day and time of execution of the change is appropriate.
- [ ] The change plan is technically accurate.
- [ ] The change plan includes estimated timing values based on previous testing.
- [ ] The change plan includes a viable rollback plan.
- [ ] The specified metrics/monitoring dashboards provide sufficient visibility for the change.

Check if the following applies:

- [ ] The complexity of the plan is appropriate for the corresponding risk of the change (i.e. the plan contains clear details).
- [ ] The change plan includes success measures for all steps/milestones during the execution.
- [ ] The change adequately minimizes risk within the environment/service.
- [ ] The performance implications of executing the change are well-understood and documented.
- [ ] The specified metrics/monitoring dashboards provide sufficient visibility for the change.
  - [ ] If not, is it possible (or necessary) to make changes to observability platforms for added visibility?
- [ ] The change has a primary and secondary SRE with knowledge of the details available during the change window.
- [ ] The labels ~"blocks deployments" and/or ~"blocks feature-flags" are applied as necessary.
### Change Technician checklist

Check if all items below are complete:

- [ ] The change plan is technically accurate.
- [ ] This Change Issue is linked to the appropriate Issue and/or Epic.
- [ ] The change has been tested in staging and results noted in a comment on this issue.
- [ ] A dry run has been conducted and results noted in a comment on this issue.
- [ ] For C1 and C2 change issues, the change event is added to the GitLab Production calendar.
- [ ] For C1 and C2 change issues, the SRE on-call has been informed prior to the change being rolled out. (In the #production channel, mention @sre-oncall and this issue and await their acknowledgement.)
- [ ] For C1 and C2 change issues, the SRE on-call provided approval with the ~eoc_approved label on the issue.
- [ ] For C1 and C2 change issues, the Infrastructure Manager provided approval with the ~manager_approved label on the issue.
- [ ] Release managers have been informed (if needed; cases include DB changes) prior to the change being rolled out. (In the #production channel, mention @release-managers and this issue and await their acknowledgment.)
- [ ] There are currently no active incidents that are ~severity::1 or ~severity::2.
- [ ] If the change involves doing maintenance on a database host, an appropriate silence targeting the host(s) should be added for the duration of the change.