2022-11-22: [GPRD] Update Consul agent on Patroni clusters and PGBouncers (4/5)

Production Change

Change Summary

We're currently running a Consul server cluster on VMs using a very old, unsupported version of Consul (1.7.2). This CR is one of several whose end goal is to deploy a Consul server cluster in k8s using the latest version of Consul (via the official and latest consul-k8s chart), decommission the Consul server cluster running on VMs, and upgrade/migrate all Consul clients running in k8s and on VMs.

We have carried out this process in staging (see CR). It was largely successful, with a few lessons learnt along the way that should make the production deploy even smoother.

This CR aims to non-disruptively restart the Consul agent on the PG Bouncer hosts and the following Patroni clusters:

  • Main
  • CI
  • Registry

Some requests are likely to fail while the Consul agent on each Patroni cluster's leader is being restarted.
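
Before each restart, it is worth confirming which node is the current leader, since that is where the brief request failures are expected. A minimal pre-flight check (a sketch, using the cluster names from the steps below):

    # Show the topology and current leader of each Patroni cluster.
    sudo gitlab-patronictl list gprd-patroni-main-pg12-2004
    sudo gitlab-patronictl list gprd-patroni-ci-pg12-2004
    sudo gitlab-patronictl list gprd-pg12-patroni-registry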

Issue: https://gitlab.com/gitlab-com/gl-infra/reliability/-/issues/16268
Epic: &844 (closed)

Change Details

  1. Services Impacted - Service::Consul, Service::Patroni, Service::PatroniCI
  2. Change Technician - @gsgl
  3. Change Reviewer - @f_santos
  4. Time tracking - 2h
  5. Downtime Component - none

Detailed steps for the change

Change Steps - steps to take to execute the change

Estimated Time to Complete (mins) - 120min

Main

  • Disable Chef on all Patroni nodes: knife ssh 'roles:gprd-base-db-patroni-main-2004' 'sudo chef-client-disable "Upgrading Consul - see: https://gitlab.com/gitlab-com/gl-infra/production/-/issues/8040"'
  • Put the Patroni cluster in maintenance mode to avoid failovers when cycling Consul: sudo gitlab-patronictl pause --wait gprd-patroni-main-pg12-2004
  • One node at a time, cycle Consul: knife ssh -C 1 'roles:gprd-base-db-patroni-main-2004' 'sudo systemctl restart consul; sleep 20'
  • Check consul members to ensure they're all running 1.13.3 (see the version-check sketch after this list)
  • Unpause the cluster: sudo gitlab-patronictl resume --wait gprd-patroni-main-pg12-2004
  • Enable Chef on all Patroni nodes: knife ssh 'roles:gprd-base-db-patroni-main-2004' 'sudo chef-client-enable'
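
A quick way to spot stragglers during the version check above (a sketch; it assumes the default consul members output, where the Build column is the fifth field):

    # Print any Consul member whose reported build is not 1.13.3.
    # Assumes the default consul members column layout (Build = $5).
    consul members | awk 'NR > 1 && $5 != "1.13.3" {print $1, $5}'

Empty output means every member is on 1.13.3.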

CI

  • Disable Chef on all Patroni nodes: knife ssh 'roles:gprd-base-db-patroni-ci-2004' 'sudo chef-client-disable "Upgrading Consul - see: https://gitlab.com/gitlab-com/gl-infra/production/-/issues/8040"'
  • Put the Patroni cluster in maintenance mode to avoid failovers when cycling Consul: sudo gitlab-patronictl pause --wait gprd-patroni-ci-pg12-2004 (see the pause-verification sketch after this list)
  • One node at a time, cycle Consul: knife ssh -C 1 'roles:gprd-base-db-patroni-ci-2004' 'sudo systemctl restart consul; sleep 20'
  • Check consul members to ensure they're all running 1.13.3
  • Unpause the cluster: sudo gitlab-patronictl resume --wait gprd-patroni-ci-pg12-2004
  • Enable Chef on all Patroni nodes: knife ssh 'roles:gprd-base-db-patroni-ci-2004' 'sudo chef-client-enable'
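
To confirm the pause took effect before cycling Consul, patronictl prints a maintenance-mode banner while the cluster is paused. A quick check (a sketch, using the CI cluster name from the steps above):

    # While paused, gitlab-patronictl list prints "Maintenance mode: on"
    # beneath the member table.
    sudo gitlab-patronictl list gprd-patroni-ci-pg12-2004 | grep -i maintenance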

Registry

  • Disable Chef on all Patroni nodes: knife ssh 'roles:gprd-base-db-patroni-registry' 'sudo chef-client-disable "Upgrading Consul - see: https://gitlab.com/gitlab-com/gl-infra/production/-/issues/8040"'
  • Put the Patroni cluster in maintenance mode to avoid failovers when cycling Consul: sudo gitlab-patronictl pause --wait gprd-pg12-patroni-registry
  • One node at a time, cycle Consul: knife ssh -C 1 'roles:gprd-base-db-patroni-registry' 'sudo systemctl restart consul; sleep 20'
  • Check consul members to ensure they're all running 1.13.3
  • Unpause the cluster: sudo gitlab-patronictl resume --wait gprd-pg12-patroni-registry
  • Enable Chef on all Patroni nodes: knife ssh 'roles:gprd-base-db-patroni-registry' 'sudo chef-client-enable'
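
As a final sweep once all three clusters are done, a loop over the knife roles used above (a sketch) confirms every node reports the new agent version:

    # Report the Consul agent version on every Patroni node across
    # the Main, CI and Registry clusters.
    for role in gprd-base-db-patroni-main-2004 \
                gprd-base-db-patroni-ci-2004 \
                gprd-base-db-patroni-registry; do
      knife ssh "roles:${role}" 'consul version | head -1'
    done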

PG Bouncers

  • Restart Consul with 20s gaps between restarts: knife ssh --no-host-key-verify -C 1 'fqdn:pgbouncer-*gprd*' 'pgrep -a consul | grep -q 1.7.2 && ( sudo systemctl restart consul; sleep 20 )' (see the preview sketch after this list)
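
The pgrep/grep guard makes the restart idempotent: only agents still running 1.7.2 are cycled, so the command is safe to re-run. To preview which hosts would be affected without restarting anything, a read-only variant (a sketch) is:

    # List which pgbouncer hosts are still on the old 1.7.2 agent;
    # nothing is restarted.
    knife ssh --no-host-key-verify 'fqdn:pgbouncer-*gprd*' \
      'pgrep -a consul | grep -q 1.7.2 && echo "still on 1.7.2" || echo "already upgraded"'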

Wrap up

Rollback

Rollback steps - steps to be taken in the event of a need to rollback this change

Estimated Time to Complete (mins) - 120min (same procedure as the change steps)

  • Revert MR that bumps the Consul version
  • Perform same procedure described in this CR
  • Set the change::aborted label: /label ~change::aborted

Monitoring

Key metrics to observe

Change Reviewer checklist

C4 C3 C2 C1:

  • Check if the following applies:
    • The scheduled day and time of execution of the change is appropriate.
    • The change plan is technically accurate.
    • The change plan includes estimated timing values based on previous testing.
    • The change plan includes a viable rollback plan.
    • The specified metrics/monitoring dashboards provide sufficient visibility for the change.

C2 C1:

  • Check if the following applies:
    • The complexity of the plan is appropriate for the corresponding risk of the change. (i.e. the plan contains clear details).
    • The change plan includes success measures for all steps/milestones during the execution.
    • The change adequately minimizes risk within the environment/service.
    • The performance implications of executing the change are well-understood and documented.
    • The specified metrics/monitoring dashboards provide sufficient visibility for the change.
      • If not, is it possible (or necessary) to make changes to observability platforms for added visibility?
    • The change has a primary and secondary SRE with knowledge of the details available during the change window.
    • The labels "blocks deployments" and/or "blocks feature-flags" are applied as necessary

Change Technician checklist

  • Check if all items below are complete:
    • The change plan is technically accurate.
    • This Change Issue is linked to the appropriate Issue and/or Epic
    • Change has been tested in staging and results noted in a comment on this issue.
    • A dry-run has been conducted and results noted in a comment on this issue.
    • The change execution window respects the Production Change Lock periods.
    • For C1 and C2 change issues, the change event is added to the GitLab Production calendar.
    • For C1 and C2 change issues, the SRE on-call has been informed prior to change being rolled out. (In #production channel, mention @sre-oncall and this issue and await their acknowledgement.)
    • For C1 and C2 change issues, the SRE on-call provided approval with the eoc_approved label on the issue.
    • For C1 and C2 change issues, the Infrastructure Manager provided approval with the manager_approved label on the issue.
    • Release managers have been informed (If needed! Cases include DB change) prior to change being rolled out. (In #production channel, mention @release-managers and this issue and await their acknowledgment.)
    • There are currently no active incidents that are severity::1 or severity::2
    • If the change involves doing maintenance on a database host, an appropriate silence targeting the host(s) should be added for the duration of the change.