2023-09-11: [GPRD] Rebuild delayed and archive replicas for main-v14 cluster

Production Change

Change Summary

After the scheduled "Upgrade PostgreSQL to PG14" change for the main cluster, we will need to rebuild the gprd main DR archive and delayed replicas on PG14.

Change Details

  1. Services Impacted - ~Service::Patroni
  2. Change Technician - @anganga
  3. Change Reviewer - @alexander-sosna
  4. Time tracking - 180 minutes
  5. Downtime Component - none

Detailed steps for the change

Change Steps - steps to take to execute the change

Estimated Time to Complete (mins) - 180 minutes

  • Set label ~change::in-progress: /label ~change::in-progress
  • Create the necessary silences
  • Create the necessary chef-roles by merging the MR below
  • Create the postgres-main-v14-dr-archive and postgres-main-v14-dr-delayed DR cluster by merging the MR below
    • Rebase the MR: https://ops.gitlab.net/gitlab-com/gl-infra/config-mgmt/-/merge_requests/6663
    • Review the MR to ensure no unexpected changes.
    • Add a comment to verify that only the desired changes are being applied.
    • Merge the MR
    • Wait for the nodes to be provisioned
       knife search "role:gprd-base-db-patroni-main-archive-v14 OR role:gprd-base-db-patroni-main-delayed-v14" -i
       2 items found
      
       postgres-dr-main-v14-delayed-01-db-gprd.c.gitlab-production.internal
       postgres-dr-main-v14-archive-01-db-gprd.c.gitlab-production.internal
    • Wait for chef to converge by following the serial console output:
      • For the archive node: gcloud compute --project=gitlab-production instances tail-serial-port-output postgres-dr-main-v14-archive-01-db-gprd --zone=us-east1-c --port=1
      • For the delayed node: gcloud compute --project=gitlab-production instances tail-serial-port-output postgres-dr-main-v14-delayed-01-db-gprd --zone=us-east1-c --port=1
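The "wait for the nodes to be provisioned" step can be scripted as a simple poll loop rather than re-running the search by hand. A minimal sketch: `check_nodes` is a hypothetical stand-in for the knife search above (stubbed here so the sketch is self-contained); in practice you would substitute the real command.

```shell
# Hedged sketch of a provisioning wait loop. check_nodes is a hypothetical
# stand-in for the knife search above; its real body would be something like:
#   knife search "role:gprd-base-db-patroni-main-archive-v14 OR \
#     role:gprd-base-db-patroni-main-delayed-v14" -i | grep -c internal
check_nodes() {
  echo 2  # stubbed: pretend both nodes are already registered
}

expected=2
until [ "$(check_nodes)" -ge "$expected" ]; do
  echo "waiting for nodes to be provisioned..."
  sleep 60
done
echo "all $expected nodes registered"
```

The loop exits as soon as the search returns both hosts; the 60-second sleep matches how long a knife search round-trip is reasonably spaced.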
  • Initialize the patroni DR replicas. Log into each of the nodes listed below and execute the following steps:
    • For postgres-dr-main-v14-archive-01-db-gprd.c.gitlab-production.internal:
    • Confirm the consul key that will be deleted: echo service/$(/usr/local/bin/gitlab-patronictl list -f json | jq -r '.[0].Cluster')
    • consul kv delete -recurse service/gprd-patroni-main-archive
    • sudo rm -rf /var/opt/gitlab/postgresql/data14
    • sudo rm -rf /var/opt/gitlab/postgresql/data12
    • sudo systemctl start patroni
    • sudo chef-client
    • sudo chef-client-disable 'https://gitlab.com/gitlab-com/gl-infra/production/-/issues/16316'
    • sudo systemctl stop td-agent
    • Run sudo gitlab-patronictl list; it should show Replica | creating replica
    • When the base_backup has been restored, sudo gitlab-patronictl list should show Standby Leader | running
    • Use tail /var/log/gitlab/postgresql/postgresql.csv to get the current WAL file, then check its age in gitlab-gprd-postgres-backup/pitr-walg-main-v14/wal_005 to estimate the remaining restore time
    • sudo chef-client-enable
    • sudo systemctl start td-agent
    • For postgres-dr-main-v14-delayed-01-db-gprd.c.gitlab-production.internal:
    • Confirm the consul key that will be deleted: echo service/$(/usr/local/bin/gitlab-patronictl list -f json | jq -r '.[0].Cluster')
    • consul kv delete -recurse service/gprd-patroni-main-delayed
    • sudo rm -rf /var/opt/gitlab/postgresql/data14
    • sudo rm -rf /var/opt/gitlab/postgresql/data12
    • sudo systemctl start patroni
    • sudo chef-client
    • Run sudo gitlab-patronictl list; it should show Replica | creating replica
    • When the base_backup has been restored, sudo gitlab-patronictl list should show Standby Leader | running
    • Use tail /var/log/gitlab/postgresql/postgresql.csv to get the current WAL file, then check its age in gitlab-gprd-postgres-backup/pitr-walg-main-v14/wal_005 to estimate the remaining restore time
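The "estimate remaining restore time" step above can be made concrete by comparing the timestamp of the last restored WAL entry against the current time. A hedged sketch: the log line below is a hard-coded sample in an assumed postgresql.csv shape (in practice it would come from tail /var/log/gitlab/postgresql/postgresql.csv), and the reference time is fixed so the arithmetic is visible.

```shell
# Sketch: how far behind is the replica, given the timestamp of the last
# restored WAL entry? The log line is a hard-coded sample in an assumed
# postgresql.csv format; the first CSV field is the entry timestamp.
log_line='2023-09-11 10:15:00.123 GMT,,,"restored log file from archive"'
wal_ts=$(echo "$log_line" | cut -d',' -f1 | cut -d'.' -f1)

# Reference time fixed for the example; use "$(date -u '+%F %T')" for "now".
ref_ts='2023-09-11 12:15:00'
lag=$(( $(date -u -d "$ref_ts" +%s) - $(date -u -d "$wal_ts" +%s) ))
echo "replica is ${lag}s behind the primary"
```

Dividing that lag by the observed WAL replay rate gives a rough remaining-restore-time estimate; the wal_005 listing in gitlab-gprd-postgres-backup/pitr-walg-main-v14/ shows how much archive is left to replay.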
  • Set label ~change::complete: /label ~change::complete
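The echo service/$(... | jq -r '.[0].Cluster') step in the plan prints the consul key that the subsequent consul kv delete removes, so it is worth sanity-checking that the two match before deleting anything. A hedged sketch of the same extraction against a hard-coded sample of the patronictl JSON (the JSON shape is an assumption, and sed is used here only so the sketch has no jq dependency):

```shell
# Sample of the assumed `gitlab-patronictl list -f json` output shape:
json='[{"Cluster": "gprd-patroni-main-archive", "Member": "node-01", "Role": "Replica"}]'

# Extract the first Cluster value (the real plan uses: jq -r '.[0].Cluster').
cluster=$(echo "$json" | sed -n 's/.*"Cluster": *"\([^"]*\)".*/\1/p')
key="service/${cluster}"
echo "$key"   # the key the plan then deletes with: consul kv delete -recurse "$key"
```

If the printed key does not match the key named in the runbook step, stop and investigate before running the delete.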

Rollback

Rollback steps - steps to be taken in the event of a need to rollback this change

Estimated Time to Complete (mins) - 30 minutes

Monitoring

Key metrics to observe

Change Reviewer checklist

C4 C3 C2 C1:

  • Check if the following applies:
    • The scheduled day and time of execution of the change is appropriate.
    • The change plan is technically accurate.
    • The change plan includes estimated timing values based on previous testing.
    • The change plan includes a viable rollback plan.
    • The specified metrics/monitoring dashboards provide sufficient visibility for the change.

C2 C1:

  • Check if the following applies:
    • The complexity of the plan is appropriate for the corresponding risk of the change. (i.e. the plan contains clear details).
    • The change plan includes success measures for all steps/milestones during the execution.
    • The change adequately minimizes risk within the environment/service.
    • The performance implications of executing the change are well-understood and documented.
    • The specified metrics/monitoring dashboards provide sufficient visibility for the change.
      • If not, is it possible (or necessary) to make changes to observability platforms for added visibility?
    • The change has a primary and secondary SRE with knowledge of the details available during the change window.
    • The change window has been agreed with Release Managers in advance of the change. If the change is planned for APAC hours, this issue has an agreed pre-change approval.
    • The labels ~"blocks deployments" and/or ~"blocks feature-flags" are applied as necessary.

Change Technician checklist

  • Check if all items below are complete:
    • The change plan is technically accurate.
    • This Change Issue is linked to the appropriate Issue and/or Epic
    • Change has been tested in staging and results noted in a comment on this issue.
    • A dry-run has been conducted and results noted in a comment on this issue.
    • The change execution window respects the Production Change Lock periods.
    • For C1 and C2 change issues, the change event is added to the GitLab Production calendar.
    • For C1 and C2 change issues, the SRE on-call has been informed prior to change being rolled out. (In #production channel, mention @sre-oncall and this issue and await their acknowledgement.)
    • For C1 and C2 change issues, the SRE on-call provided approval with the eoc_approved label on the issue.
    • For C1 and C2 change issues, the Infrastructure Manager provided approval with the manager_approved label on the issue.
    • Release managers have been informed prior to any C1, C2, or blocks deployments change being rolled out. (In #production channel, mention @release-managers and this issue and await their acknowledgment.)
    • There are currently no active incidents that are ~severity::1 or ~severity::2
    • If the change involves doing maintenance on a database host, an appropriate silence targeting the host(s) should be added for the duration of the change.
Edited by Maina Ng'ang'a