2022-10-18: [GPRD] Use turbo mode on restore command DR nodes
Production Change
## Change Summary
During the incident, a replica fell behind the leader, and when we needed it to catch up, the WAL-G `--turbo` flag was required to speed up the restore of WAL (transaction log) files from the WAL-G GCS archive location, as seen here. This change request seeks to add the `--turbo` flag to scripts that perform a `wal-g wal-fetch` or `wal-g backup-fetch`, as this is not always common knowledge for the EOC. WAL-G's `--turbo` flag is available to "Ignore all kinds of throttling defined in config".
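For illustration, the intended effect on a node's WAL restore configuration might look like the following sketch (the binary path and the exact `restore_command` used by the MR's scripts are assumptions here and may differ):

```
# postgresql.conf restore_command sketch (hypothetical path, not the exact MR diff).
# --turbo makes WAL-G ignore all throttling defined in its config.
restore_command = '/usr/bin/wal-g wal-fetch --turbo "%f" "%p"'
```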
## Pre-Change Steps - steps to be completed before execution of the change

- Run `knife search 'roles:gprd-base-db-postgres-replication' -i` to list the replication nodes
## Change Details

- Services Impacted - ~"Service::Postgres"
- Change Technician - @anganga
- Change Reviewer - @ahmadsherif
- Time tracking - 60 minutes
- Downtime Component - none
## Detailed steps for the change

### Change Steps - steps to take to execute the change

- Estimated Time to Complete (mins) - 60
- Set label change::in-progress: `/label ~change::in-progress`
- Merge the MR
- Run the commands below on each node:
```shell
ssh postgres-ci-dr-archive-2004-01-db-gprd.c.gitlab-production.internal
sudo gitlab-ctl hup postgresql

ssh postgres-ci-dr-delayed-2004-01-db-gprd.c.gitlab-production.internal
sudo gitlab-ctl hup postgresql

ssh postgres-dr-main-archive-2004-01-db-gprd.c.gitlab-production.internal
sudo gitlab-ctl hup postgresql

ssh postgres-dr-main-delayed-2004-01-db-gprd.c.gitlab-production.internal
sudo gitlab-ctl hup postgresql

ssh postgres-registry-dr-archive-01-db-gprd.c.gitlab
sudo gitlab-ctl hup postgresql

ssh postgres-registry-dr-delayed-01-db-gprd.c.gitlab
sudo gitlab-ctl hup postgresql

ssh postgres-ci-dr-archive-01-db-gprd.c.gitlab-production.internal
sudo gitlab-ctl hup postgresql

ssh postgres-ci-dr-delayed-01-db-gprd.c.gitlab-production.internal
sudo gitlab-ctl hup postgresql

ssh postgres-dr-delayed-01-db-gprd.c.gitlab-production.internal
sudo gitlab-ctl hup postgresql
```
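The per-node steps above could be scripted as a small loop. This is a sketch, not part of the plan: hostnames are taken from the change steps (the two `postgres-registry-dr-*` names appear truncated in the plan and are omitted here), and with `DRY_RUN=1` (the default below) it only prints the commands instead of running them.

```shell
#!/usr/bin/env bash
# Sketch: reload PostgreSQL on each DR node listed in the change steps.
# DRY_RUN=1 (the default) prints the ssh commands instead of executing them.

hosts=(
  postgres-ci-dr-archive-2004-01-db-gprd.c.gitlab-production.internal
  postgres-ci-dr-delayed-2004-01-db-gprd.c.gitlab-production.internal
  postgres-dr-main-archive-2004-01-db-gprd.c.gitlab-production.internal
  postgres-dr-main-delayed-2004-01-db-gprd.c.gitlab-production.internal
  postgres-ci-dr-archive-01-db-gprd.c.gitlab-production.internal
  postgres-ci-dr-delayed-01-db-gprd.c.gitlab-production.internal
  postgres-dr-delayed-01-db-gprd.c.gitlab-production.internal
)

DRY_RUN="${DRY_RUN:-1}"
cmds=()
for h in "${hosts[@]}"; do
  cmd="ssh $h sudo gitlab-ctl hup postgresql"
  cmds+=("$cmd")
  if [ "$DRY_RUN" = 1 ]; then
    echo "would run: $cmd"
  else
    # Run the reload remotely on the node.
    ssh "$h" sudo gitlab-ctl hup postgresql
  fi
done
```

`gitlab-ctl hup postgresql` sends SIGHUP, so PostgreSQL reloads its configuration (including `restore_command`) without a restart, which is why no downtime component is listed.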
- Set label change::complete: `/label ~change::complete`
## Rollback

### Rollback steps - steps to be taken in the event of a need to roll back this change

- Estimated Time to Complete (mins) - Estimated Time to Complete in Minutes
- Revert the MR
- Set label change::aborted: `/label ~change::aborted`
## Monitoring

### Key metrics to observe

- Metric: PostgreSQL Replication Overview
  - Location: https://dashboards.gitlab.net/goto/w4wTox74k?orgId=1
  - What changes to this metric should prompt a rollback: a drop in the number of replicas
- Metric: Postgres and WAL-G push logs (check for errors)
  - Location: `/var/log/gitlab/postgresql/postgresql.csv` and `/var/log/wal-g/wal-g.log`
  - What changes should prompt a rollback: errors indicating wal-g failures should be evaluated to check whether they are related to the new option
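A simple grep is enough to spot wal-g errors in the log above. The snippet below is a sketch that runs against a fabricated sample excerpt for illustration; in production, point `log_file` at `/var/log/wal-g/wal-g.log` (the log line format shown is an assumption):

```shell
# Sketch: count ERROR/FATAL lines in a wal-g log.
# In production: log_file=/var/log/wal-g/wal-g.log
log_file="$(mktemp)"
cat > "$log_file" <<'EOF'
INFO: 2022/10/18 10:00:00.000000 FILE PATH: 000000010000000000000001
ERROR: 2022/10/18 10:00:02.000000 failed to fetch wal segment (sample line)
EOF
errors="$(grep -cE '^(ERROR|FATAL)' "$log_file")"
echo "wal-g error lines: $errors"
rm -f "$log_file"
```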
## Change Reviewer checklist

Check if the following applies:

- [ ] The scheduled day and time of execution of the change is appropriate.
- [ ] The change plan is technically accurate.
- [ ] The change plan includes estimated timing values based on previous testing.
- [ ] The change plan includes a viable rollback plan.
- [ ] The specified metrics/monitoring dashboards provide sufficient visibility for the change.

Check if the following applies:

- [ ] The complexity of the plan is appropriate for the corresponding risk of the change (i.e. the plan contains clear details).
- [ ] The change plan includes success measures for all steps/milestones during the execution.
- [ ] The change adequately minimizes risk within the environment/service.
- [ ] The performance implications of executing the change are well-understood and documented.
- [ ] The specified metrics/monitoring dashboards provide sufficient visibility for the change.
  - If not, is it possible (or necessary) to make changes to observability platforms for added visibility?
- [ ] The change has a primary and secondary SRE with knowledge of the details available during the change window.
- [ ] The labels ~"blocks deployments" and/or ~"blocks feature-flags" are applied as necessary.
## Change Technician checklist

Check if all items below are complete:

- [ ] The change plan is technically accurate.
- [ ] This Change Issue is linked to the appropriate Issue and/or Epic.
- [ ] Change has been tested in staging and results noted in a comment on this issue.
- [ ] A dry-run has been conducted and results noted in a comment on this issue.
- [ ] The change execution window respects the Production Change Lock periods.
- [ ] For C1 and C2 change issues, the change event is added to the GitLab Production calendar.
- [ ] For C1 and C2 change issues, the SRE on-call has been informed prior to the change being rolled out. (In the #production channel, mention `@sre-oncall` and this issue and await their acknowledgement.)
- [ ] For C1 and C2 change issues, the SRE on-call provided approval with the ~eoc_approved label on the issue.
- [ ] For C1 and C2 change issues, the Infrastructure Manager provided approval with the ~manager_approved label on the issue.
- [ ] Release managers have been informed (if needed; cases include DB changes) prior to the change being rolled out. (In the #production channel, mention `@release-managers` and this issue and await their acknowledgment.)
- [ ] There are currently no active incidents that are ~severity::1 or ~severity::2.
- [ ] If the change involves doing maintenance on a database host, an appropriate silence targeting the host(s) should be added for the duration of the change.
Edited by Maina Ng'ang'a