2022-09-21: [GPRD] Use turbo mode on restore command

Production Change

Change Summary

During the incident, a replica fell behind the leader, and to catch it up we had to restore WAL (transaction log) files from the WAL-G GCS archive location with WAL-G's --turbo flag to speed up the fetch. This change request adds the --turbo flag to the scripts that perform a wal-g wal-fetch or wal-g backup-fetch, since the flag is not always common knowledge for the EOC. Per the WAL-G documentation, --turbo will "Ignore all kinds of throttling defined in config".
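
For illustration, the kind of edit this implies in a restore_command wrapper is sketched below (the envdir path and wrapper layout are assumptions, not the actual cookbook contents):

  # Before: WAL fetches are subject to whatever throttling is set in the WAL-G config
  /usr/bin/envdir /etc/wal-g.d/env wal-g wal-fetch "%f" "%p"

  # After: --turbo ignores all throttling defined in the config
  /usr/bin/envdir /etc/wal-g.d/env wal-g wal-fetch "%f" "%p" --turbo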

Change Details

  1. Services Impacted - Service::Patroni, Service::Postgres, Service::Pgbouncer
  2. Change Technician - @anganga
  3. Change Reviewer - @ahmadsherif
  4. Time tracking - 120 minutes
  5. Downtime Component - none

Detailed steps for the change

Pre-Change Steps - steps to be completed before execution of the change

Estimated Time to Complete (mins) - 20 minutes

Change Steps - steps to take to execute the change

Estimated Time to Complete (mins) - 120 minutes

  • Disable chef-client on the affected Patroni nodes:
    • knife ssh 'roles:gprd-base-db-patroni-*' -- "chef-client-disable \"CR #7776\""
  • Merge the MR above
  • Roll out the change manually, repeating the sequence below once per cluster (a consolidated sketch follows this list)
    • Take the replica out of Rails load-balancing (patroni-ci cluster):
      • ssh patroni-ci-{1-10}-db-gprd.c.gitlab-production.internal (replace {1-10} with the replica number)
      • Run sudo gitlab-patronictl list | grep $(hostname -I) | grep Leader && exit 1 to ensure we are not running the procedure on the primary.
      • a=("" "-1" "-2"); for i in "${a[@]}"; do consul maint -enable -service=db-replica$i -reason="CR #7776"; done
    • Wait until all clients have been disconnected from the replica:
      • while true; do for c in /usr/local/bin/pgb-console*; do sudo $c -c 'SHOW CLIENTS;'; done | grep gitlabhq_production | cut -d '|' -f 2 | awk '{$1=$1};1' | grep -v gitlab-monitor | wc -l; sleep 5; done
    • Proceed once the loop reports zero clients
    • knife ssh 'name:patroni-ci-{1-10}-db-gprd.c.gitlab-production.internal' -- 'chef-client-enable' (replace {1-10} with the replica number)
    • knife ssh 'name:patroni-ci-{1-10}-db-gprd.c.gitlab-production.internal' -- 'sudo chef-client' (replace {1-10} with the replica number)
    • Add the replica back to Rails load-balancing:
      • a=("" "-1" "-2"); for i in "${a[@]}"; do consul maint -disable -service=db-replica$i; done

    • Take the replica out of Rails load-balancing (patroni-main-2004 cluster):
      • ssh patroni-main-2004-{1-10}-db-gprd.c.gitlab-production.internal (replace {1-10} with the replica number)
      • Run sudo gitlab-patronictl list | grep $(hostname -I) | grep Leader && exit 1 to ensure we are not running the procedure on the primary.
      • a=("" "-1" "-2"); for i in "${a[@]}"; do consul maint -enable -service=db-replica$i -reason="CR #7776"; done
    • Wait until all clients have been disconnected from the replica:
      • while true; do for c in /usr/local/bin/pgb-console*; do sudo $c -c 'SHOW CLIENTS;'; done | grep gitlabhq_production | cut -d '|' -f 2 | awk '{$1=$1};1' | grep -v gitlab-monitor | wc -l; sleep 5; done
    • Proceed once the loop reports zero clients
    • knife ssh 'name:patroni-main-2004-{1-10}-db-gprd.c.gitlab-production.internal' -- 'chef-client-enable' (replace {1-10} with the replica number)
    • knife ssh 'name:patroni-main-2004-{1-10}-db-gprd.c.gitlab-production.internal' -- 'sudo chef-client' (replace {1-10} with the replica number)
    • Add the replica back to Rails load-balancing:
      • a=("" "-1" "-2"); for i in "${a[@]}"; do consul maint -disable -service=db-replica$i; done

    • Take the replica out of Rails load-balancing (patroni-ci-2004 cluster):
      • ssh patroni-ci-2004-{1-10}-db-gprd.c.gitlab-production.internal (replace {1-10} with the replica number)
      • Run sudo gitlab-patronictl list | grep $(hostname -I) | grep Leader && exit 1 to ensure we are not running the procedure on the primary.
      • a=("" "-1" "-2"); for i in "${a[@]}"; do consul maint -enable -service=db-replica$i -reason="CR #7776"; done
    • Wait until all clients have been disconnected from the replica:
      • while true; do for c in /usr/local/bin/pgb-console*; do sudo $c -c 'SHOW CLIENTS;'; done | grep gitlabhq_production | cut -d '|' -f 2 | awk '{$1=$1};1' | grep -v gitlab-monitor | wc -l; sleep 5; done
    • Proceed once the loop reports zero clients
    • knife ssh 'name:patroni-ci-2004-{1-10}-db-gprd.c.gitlab-production.internal' -- 'chef-client-enable' (replace {1-10} with the replica number)
    • knife ssh 'name:patroni-ci-2004-{1-10}-db-gprd.c.gitlab-production.internal' -- 'sudo chef-client' (replace {1-10} with the replica number)
    • Add the replica back to Rails load-balancing:
      • a=("" "-1" "-2"); for i in "${a[@]}"; do consul maint -disable -service=db-replica$i; done

    • Take the replica out of Rails load-balancing (patroni-v12 cluster):
      • ssh patroni-v12-{1-10}-db-gprd.c.gitlab-production.internal (replace {1-10} with the replica number)
      • Run sudo gitlab-patronictl list | grep $(hostname -I) | grep Leader && exit 1 to ensure we are not running the procedure on the primary.
      • a=("" "-1" "-2"); for i in "${a[@]}"; do consul maint -enable -service=db-replica$i -reason="CR #7776"; done
    • Wait until all clients have been disconnected from the replica:
      • while true; do for c in /usr/local/bin/pgb-console*; do sudo $c -c 'SHOW CLIENTS;'; done | grep gitlabhq_production | cut -d '|' -f 2 | awk '{$1=$1};1' | grep -v gitlab-monitor | wc -l; sleep 5; done
    • Proceed once the loop reports zero clients
    • knife ssh 'name:patroni-v12-{1-10}-db-gprd.c.gitlab-production.internal' -- 'chef-client-enable' (replace {1-10} with the replica number)
    • knife ssh 'name:patroni-v12-{1-10}-db-gprd.c.gitlab-production.internal' -- 'sudo chef-client' (replace {1-10} with the replica number)
    • Add the replica back to Rails load-balancing:
      • a=("" "-1" "-2"); for i in "${a[@]}"; do consul maint -disable -service=db-replica$i; done

    • Take the replica out of Rails load-balancing (patroni-v12-registry cluster):
      • ssh patroni-v12-registry-{1-10}-db-gprd.c.gitlab-production.internal (replace {1-10} with the replica number)
      • Run sudo gitlab-patronictl list | grep $(hostname -I) | grep Leader && exit 1 to ensure we are not running the procedure on the primary.
      • a=("" "-1" "-2"); for i in "${a[@]}"; do consul maint -enable -service=db-replica$i -reason="CR #7776"; done
    • Wait until all clients have been disconnected from the replica:
      • while true; do for c in /usr/local/bin/pgb-console*; do sudo $c -c 'SHOW CLIENTS;'; done | grep gitlabhq_production | cut -d '|' -f 2 | awk '{$1=$1};1' | grep -v gitlab-monitor | wc -l; sleep 5; done
    • Proceed once the loop reports zero clients
    • knife ssh 'name:patroni-v12-registry-{1-10}-db-gprd.c.gitlab-production.internal' -- 'chef-client-enable' (replace {1-10} with the replica number)
    • knife ssh 'name:patroni-v12-registry-{1-10}-db-gprd.c.gitlab-production.internal' -- 'sudo chef-client' (replace {1-10} with the replica number)
    • Add the replica back to Rails load-balancing:
      • a=("" "-1" "-2"); for i in "${a[@]}"; do consul maint -disable -service=db-replica$i; done

  • Set the ~change::complete label: /label ~change::complete
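
For reference, the per-cluster sequence above can be wrapped in one parameterized helper. This is a sketch only: drain_and_converge is a hypothetical name, and the quoting and paths are illustrative.

  # Hypothetical wrapper for the repeated drain/converge/undrain sequence.
  # Run from a workstation with knife and ssh access.
  drain_and_converge() {
    local host="$1"   # e.g. patroni-ci-4-db-gprd.c.gitlab-production.internal

    # Guard: bail out if this node is the cluster leader (same check as above,
    # matching the node's first IP against the patronictl member list).
    if ssh "$host" 'sudo gitlab-patronictl list | grep "$(hostname -I | cut -d" " -f1)"' | grep -q Leader; then
      echo "refusing to drain the leader: $host" >&2
      return 1
    fi

    # Drain: put every db-replica service on the node into Consul maintenance.
    ssh "$host" 'for i in "" -1 -2; do consul maint -enable -service=db-replica$i -reason="CR #7776"; done'

    # Wait here for the pgbouncer client count to reach zero (poll as in the
    # SHOW CLIENTS loop above) before continuing.

    # Converge: re-enable chef on this node and apply the merged change.
    knife ssh "name:$host" -- 'chef-client-enable'
    knife ssh "name:$host" -- 'sudo chef-client'

    # Undrain: return the node to Rails load-balancing.
    ssh "$host" 'for i in "" -1 -2; do consul maint -disable -service=db-replica$i; done'
  }

Called once per replica, e.g. drain_and_converge patroni-main-2004-3-db-gprd.c.gitlab-production.internal.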

Rollback

Rollback steps - steps to be taken in the event of a need to rollback this change

Estimated Time to Complete (mins) - 60 minutes

  • Revert the MR
  • Set the ~change::aborted label: /label ~change::aborted
  • Force Chef to run in all affected nodes:
    • for node in $(knife search 'roles:gprd-base-db-patroni-*' -i); do knife ssh "name:$node" -- 'sudo chef-client'; done
    • for node in $(knife search 'roles:gprd-base-db-postgres-replication' -i); do knife ssh "name:$node" -- 'sudo chef-client'; done
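  • After the rollback converge, spot-check a node to confirm the flag is gone; a sketch (the postgresql.conf path is an assumption about where Patroni renders restore_command on these hosts):

  # Spot-check: restore_command should no longer contain --turbo.
  knife ssh 'name:patroni-ci-1-db-gprd.c.gitlab-production.internal' -- \
    "sudo grep restore_command /var/opt/gitlab/postgresql/data/postgresql.conf"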

Monitoring

Key metrics to observe

  • Metric: PostgreSQL Replication Overview
  • Check the postgres and wal-g logs for errors
    • Location: /var/log/gitlab/postgresql/postgresql.csv and /var/log/wal-g/wal-g.log
    • What changes should prompt a rollback: wal-g errors should be evaluated to determine whether they are related to the new option
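
A quick way to scan a node for such errors (log locations as above; the grep patterns are illustrative):

  # Recent wal-g errors on a replica
  sudo grep -iE 'error|fatal' /var/log/wal-g/wal-g.log | tail -n 20

  # Recent errors in PostgreSQL's CSV log
  sudo grep -iE 'error|fatal' /var/log/gitlab/postgresql/postgresql.csv | tail -n 20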

Change Reviewer checklist

C4 C3 C2 C1:

  • Check if the following applies:
    • The scheduled day and time of execution of the change is appropriate.
    • The change plan is technically accurate.
    • The change plan includes estimated timing values based on previous testing.
    • The change plan includes a viable rollback plan.
    • The specified metrics/monitoring dashboards provide sufficient visibility for the change.

C2 C1:

  • Check if the following applies:
    • The complexity of the plan is appropriate for the corresponding risk of the change. (i.e. the plan contains clear details).
    • The change plan includes success measures for all steps/milestones during the execution.
    • The change adequately minimizes risk within the environment/service.
    • The performance implications of executing the change are well-understood and documented.
    • The specified metrics/monitoring dashboards provide sufficient visibility for the change.
      • If not, is it possible (or necessary) to make changes to observability platforms for added visibility?
    • The change has a primary and secondary SRE with knowledge of the details available during the change window.
    • The labels "blocks deployments" and/or "blocks feature-flags" are applied as necessary

Change Technician checklist

  • Check if all items below are complete:
    • The change plan is technically accurate.
    • This Change Issue is linked to the appropriate Issue and/or Epic
    • Change has been tested in staging and results noted in a comment on this issue.
    • A dry-run has been conducted and results noted in a comment on this issue.
    • The change execution window respects the Production Change Lock periods.
    • For C1 and C2 change issues, the change event is added to the GitLab Production calendar.
    • For C1 and C2 change issues, the SRE on-call has been informed prior to change being rolled out. (In #production channel, mention @sre-oncall and this issue and await their acknowledgement.)
    • For C1 and C2 change issues, the SRE on-call provided approval with the eoc_approved label on the issue.
    • For C1 and C2 change issues, the Infrastructure Manager provided approval with the manager_approved label on the issue.
    • Release managers have been informed (if needed; cases include DB changes) prior to the change being rolled out. (In #production channel, mention @release-managers and this issue and await their acknowledgment.)
    • There are currently no active incidents that are severity::1 or severity::2
    • If the change involves doing maintenance on a database host, an appropriate silence targeting the host(s) should be added for the duration of the change.