2022-09-19 00:00 UTC: Reboot Patroni Main Cluster in GPRD

Production Change

Change Summary

We need to reboot the entire cluster to get /var/log properly mounted. The CI cluster seems OK, so it's only main that needs this.

More info: https://gitlab.com/gitlab-com/gl-infra/reliability/-/issues/16292

Change Details

  1. Services Impacted - Service::Patroni Service::Postgres
  2. Change Technician - @gsgl
  3. Change Reviewer - @rhenchen.gitlab
  4. Time tracking - 120 minutes
  5. Downtime Component - no total downtime -- expected ~90s period during the switchover where writes will fail. Read queries will be drained as much as possible, but long-running queries will be terminated after ~40min. The CI cluster is not affected by this change.

Detailed steps for the change

Change Steps - steps to take to execute the change

Prep

  1. SSH to gprd-console

  2. Ensure the db-ops repo is checked out:

    cd ~/src
    rm -rf db-ops
    git clone git@ops.gitlab.net:gitlab-com/gl-infra/db-ops.git
  3. Create virtualenv and install Ansible:

    cd
    export PATH="$HOME/.local/bin:$PATH"  # Add to .bash_profile
    pip install --user pipenv
    cd src/db-ops
    pipenv sync
    pipenv shell
    tmux
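
    As a quick sanity check (not part of the original steps), confirm Ansible is available inside the pipenv shell before continuing:

    ansible --version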
  4. Create inventory file in ~/src/db-ops/ansible/reboot-patroni-cluster/inventory/gprd-patroni-main.yml:

    all:
      children:
        test:
          hosts:
            patroni-main-2004-10-db-gprd.c.gitlab-production.internal:
        rest:
          hosts:
            patroni-main-2004-[01:09]-db-gprd.c.gitlab-production.internal:
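
    Before moving on, you can confirm the inventory parses and the [01:09] host range expands as expected (an optional check, not in the original steps):

    cd ~/src/db-ops/ansible/reboot-patroni-cluster
    ansible-inventory -i inventory/gprd-patroni-main.yml --graph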
  5. Ensure you can reach all hosts:

    cd ~/src/db-ops/ansible/reboot-patroni-cluster
    ansible -i inventory/gprd-patroni-main.yml -m ping all

The remaining steps in this CR assume you've completed the above steps.

Disable full WAL-G Backup (must be completed before 00:00 UTC)

  1. Full WAL-G backups take place at 00:00 UTC every day on a replica. These backups take 10-11 hours to complete. We need to disable this job prior to 00:00 UTC by running a playbook:

    cd ~/src/db-ops/ansible/reboot-patroni-cluster
    ansible-playbook -i inventory/gprd-patroni-main.yml disable_walg_backup.yml
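
    To double-check that the job really is disabled, you can spot-check the hosts afterwards. This assumes the full backup is scheduled via cron for the gitlab-psql user, which is an assumption about how the playbook disables it -- adjust the check to whatever mechanism the playbook actually touches:

    ansible -i inventory/gprd-patroni-main.yml -b -m shell -a "crontab -l -u gitlab-psql" all

    The full-backup entry should be absent or commented out.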

Process

  1. Set label ~change::in-progress: /label ~change::in-progress

  2. Add silences at https://alerts.gitlab.net:

    • Silence 1 - Matchers:

      • alert_type=cause
      • env=gprd
      • environment=gprd
      • job=walg-basebackup
      • pager=pagerduty
      • resource=walg-basebackup
      • severity=s1
      • tier=db
      • type=patroni
      • alertname=~WALGBaseBackupFailed
    • Silence 2 - Matchers:

      • alert_type=cause
      • env=gprd
      • environment=gprd
      • job=postgres
      • monitor=db
      • pager=pagerduty
      • severity=s1
      • type=patroni
      • alertname=~PostgreSQL_ServiceDown
      • fqdn=~patroni-main-2004-.*
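
    If you prefer the CLI, roughly the same silence can be created with amtool, assuming you have amtool access to this Alertmanager (the duration and comment below are placeholders; repeat analogously for Silence 2):

    amtool --alertmanager.url=https://alerts.gitlab.net silence add \
      --duration=3h --comment="CR: reboot patroni main cluster in gprd" \
      alert_type=cause env=gprd environment=gprd job=walg-basebackup \
      pager=pagerduty resource=walg-basebackup severity=s1 tier=db \
      type=patroni alertname=~WALGBaseBackupFailed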
  3. Check that /var/log/syslog does not exist:

    ansible -i inventory/gprd-patroni-main.yml -o -m shell -a "ls -al /var/log/syslog" test

    Output should be RED (the task fails because the file does not exist yet).

  4. Run playbook to reboot ONE replica:

    cd ~/src/db-ops/ansible/reboot-patroni-cluster
    ansible-playbook -i inventory/gprd-patroni-main.yml reboot_cluster.yml -t replicas -l test
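
    After the playbook completes, it's worth confirming the rebooted replica has rejoined the cluster and is streaming again. One way (a sketch -- any node in the cluster will do; the test host is shown here) is:

    ssh patroni-main-2004-10-db-gprd.c.gitlab-production.internal
    sudo gitlab-patronictl list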
  5. Check that /var/log/syslog now exists:

    ansible -i inventory/gprd-patroni-main.yml -o -m shell -a "ls -al /var/log/syslog" test

    Output should be YELLOW (the task succeeds and the file is now listed).

  6. Run playbook to reboot the rest of the replicas:

    cd ~/src/db-ops/ansible/reboot-patroni-cluster
    ansible-playbook -i inventory/gprd-patroni-main.yml reboot_cluster.yml -t replicas -l rest
  7. Run playbook to switchover the leader and reboot it:

    cd ~/src/db-ops/ansible/reboot-patroni-cluster
    ansible-playbook -i inventory/gprd-patroni-main.yml reboot_cluster.yml -t leaders
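
    This is the step with the brief write outage. It can help to watch the cluster from a second terminal on one of the patroni nodes while the switchover runs, using the same command the rollback section relies on:

    gitlab-patronictl list -W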
  8. Initiate a full WAL-G backup:

    1. SSH to patroni-main-2004-10-db-gprd.c.gitlab-production.internal

    2. If /opt/wal-g/bin/backup.sh hasn't been restored by Chef yet, run chef-client:

      sudo -i
      chef-client
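
      Then confirm the script is back in place before continuing:

      ls -l /opt/wal-g/bin/backup.sh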
    3. Open a tmux session:

      tmux new -s walgbackup
    4. Run backup script:

      su - gitlab-psql
      /opt/wal-g/bin/backup.sh >> /var/log/wal-g/wal-g_backup_push.log 2>&1
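
      The full backup takes many hours, so it's safest to detach from the tmux session and check back later. Progress can be followed from the log the script writes to:

      tail -f /var/log/wal-g/wal-g_backup_push.log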
  9. Check that DR archive & delayed 20.04 servers have followed the timeline switch and do not lag behind:

    1. Check that the timeline matches main:

      sudo -u gitlab-psql /opt/gitlab/embedded/bin/pg_controldata -D /var/opt/gitlab/postgresql/data | grep -i Timeline
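
      Run the same command on a patroni-main node and on the DR/delayed hosts and compare. The line of interest looks like this (illustrative output only -- the actual timeline ID will differ):

      # Latest checkpoint's TimeLineID:       39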
    2. Check replication lag in Thanos:

      • ARCHIVE - should have very little lag (less than 100 seconds): Thanos
      • DELAYED - should be around 28800 seconds (8 hours): Thanos
  10. Set label ~change::complete: /label ~change::complete

Rollback

Rollback steps - steps to be taken in the event of a need to roll back this change

Estimated Time to Complete (mins) - Estimated Time to Complete in Minutes

Throughout the process, we need to keep a close eye on replication lag, either in Thanos or by using gitlab-patronictl list -W.

If any node falls significantly out of sync, we'll need to stop running the playbook and investigate/remediate before moving on to the next one. This should not happen, as the playbook has steps to ensure replication lag stays within acceptable levels.
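
If deeper investigation is needed, replication state can also be inspected directly on the current leader. A sketch, assuming the default Omnibus socket directory (the data directory elsewhere in this CR matches the Omnibus default, so this should hold, but verify the path):

    sudo -u gitlab-psql /opt/gitlab/embedded/bin/psql -h /var/opt/gitlab/postgresql -d postgres \
      -c "SELECT application_name, state, sync_state, replay_lag FROM pg_stat_replication;"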

Monitoring

Key metrics to observe

  • Metric: web service apdex & error ratios
    • Location: https://dashboards.gitlab.net/d/web-main/web-overview?orgId=1
    • What changes to this metric should prompt a rollback: we expect some requests to fail throughout this change, as we need to reboot every replica and the leader, and some queries will be forcefully terminated. If apdex or the error ratio crosses the outage line, we may need to stop the change at that point.

Change Reviewer checklist

C4 C3 C2 C1:

  • Check if the following applies:
    • The scheduled day and time of execution of the change is appropriate.
    • The change plan is technically accurate.
    • The change plan includes estimated timing values based on previous testing.
    • The change plan includes a viable rollback plan.
    • The specified metrics/monitoring dashboards provide sufficient visibility for the change.

C2 C1:

  • Check if the following applies:
    • The complexity of the plan is appropriate for the corresponding risk of the change. (i.e. the plan contains clear details).
    • The change plan includes success measures for all steps/milestones during the execution.
    • The change adequately minimizes risk within the environment/service.
    • The performance implications of executing the change are well-understood and documented.
    • The specified metrics/monitoring dashboards provide sufficient visibility for the change.
      • If not, is it possible (or necessary) to make changes to observability platforms for added visibility?
    • The change has a primary and secondary SRE with knowledge of the details available during the change window.
    • The labels ~"blocks deployments" and/or ~"blocks feature-flags" are applied as necessary

Change Technician checklist

  • Check if all items below are complete:
    • The change plan is technically accurate.
    • This Change Issue is linked to the appropriate Issue and/or Epic
    • Change has been tested in staging and results noted in a comment on this issue.
    • A dry-run has been conducted and results noted in a comment on this issue.
    • The change execution window respects the Production Change Lock periods.
    • For C1 and C2 change issues, the change event is added to the GitLab Production calendar.
    • For C1 and C2 change issues, the SRE on-call has been informed prior to change being rolled out. (In #production channel, mention @sre-oncall and this issue and await their acknowledgement.)
    • For C1 and C2 change issues, the SRE on-call provided approval with the eoc_approved label on the issue.
    • For C1 and C2 change issues, the Infrastructure Manager provided approval with the manager_approved label on the issue.
    • Release managers have been informed (If needed! Cases include DB change) prior to change being rolled out. (In #production channel, mention @release-managers and this issue and await their acknowledgment.)
    • There are currently no active incidents that are severity::1 or severity::2
    • If the change involves doing maintenance on a database host, an appropriate silence targeting the host(s) should be added for the duration of the change.