2023-09-21: GPRD - Add a Patroni replica node for the Main cluster

Production Change

Change Summary

Related issue: #16404 (closed)

This is a known problem, previously investigated in scalability#2301 and #15987 (closed).

With this CR, we are adding a replica to the GPRD Main cluster to reduce the likelihood of lock_manager LWLock saturation on the replica nodes.
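
To confirm that lock_manager contention is actually occurring on a replica, the wait events in pg_stat_activity can be inspected. A minimal sketch (an illustrative query, not a step from this CR):

  sudo gitlab-psql -qc \
    "select wait_event_type, wait_event, count(*)
     from pg_stat_activity
     where wait_event is not null
     group by 1, 2
     order by 3 desc"

A sustained high count of LWLock / lock_manager rows is the saturation signature this CR aims to reduce.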

For now, the team has agreed to implement a short-term solution, as outlined in #16404 (comment 1570026117): adding replica(s) to the Patroni cluster to help us avoid the ongoing production issues referenced there.

Prior similar CRs that added a replica: #16029 (closed), #8576 (closed)

Change Details

  1. Services Impacted - Service::Patroni, Service::Postgres
  2. Change Technician - @bshah11 @rhenchen.gitlab @alexander-sosna
  3. Change Reviewer - @ahanselka @NikolayS @rhenchen.gitlab @bshah11 @alexander-sosna
  4. Time tracking - 180 minutes
  5. Downtime Component - no downtime

Detailed steps for the change

Change Steps - steps to take to execute the change

Estimated Time to Complete (mins) - 120 minutes (2 hours)

  • Execute during a non-peak period
  • Set the change::in-progress label: /label ~change::in-progress
  • Merge and apply TF to deploy the new nodes: https://ops.gitlab.net/gitlab-com/gl-infra/config-mgmt/-/merge_requests/6799
  • Wait for the VMs to be created (it can take a while to restore the data disk snapshot)
  • Wait for the bootstrap to finish (check the node's serial console output; see the serial-console sketch after this list)
  • Mark the node as under maintenance (to keep it out of load balancing while it catches up):
    • Execute knife node run_list add patroni-main-v14-109-db-gprd.c.gitlab-production.internal "role[gprd-base-db-patroni-maintenance]"
    • Execute ssh patroni-main-v14-109-db-gprd.c.gitlab-production.internal "sudo chef-client"
  • Start patroni on new node:
    • Execute sudo systemctl enable patroni && sudo systemctl start patroni
  • Wait for the new node to perform WAL recovery (see the replication-lag query after this list):
    • Check the PostgreSQL logs: sudo tail -n 500 -f /var/log/gitlab/postgresql/postgresql.csv
  • Check that the node is part of the Patroni cluster:
    • Execute sudo gitlab-patronictl list
  • Mark the node as out of maintenance (to put it back into load balancing):
    • Execute knife node run_list remove patroni-main-v14-109-db-gprd.c.gitlab-production.internal "role[gprd-base-db-patroni-maintenance]"
    • Execute ssh patroni-main-v14-109-db-gprd.c.gitlab-production.internal "sudo chef-client"
  • Check that the node was added to the load balancer and is receiving traffic:
    • Check for the node name in the list of replicas in Consul:
      dig @127.0.0.1 -p 8600 db-replica.service.consul. SRV
    • Check PgBouncer status:
      for c in /usr/local/bin/pgb-console*; do $c -c 'SHOW CLIENTS;'; done;
    • Check PostgreSQL for connected clients:
      sudo gitlab-psql -qc \
        "select count(*) from pg_stat_activity
        where backend_type = 'client backend'
        and pid <> pg_backend_pid()
        and datname <> 'postgres'"
  • Set the change::complete label: /label ~change::complete
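
The serial-console check referenced in the steps above can be done with gcloud. A minimal sketch (the zone is an assumption; the instance name matches the node used in the steps):

  gcloud compute instances tail-serial-port-output \
    patroni-main-v14-109-db-gprd \
    --project gitlab-production --zone us-east1-c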
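
The replication-lag query referenced in the WAL recovery step: a minimal sketch run on the new replica, using standard PostgreSQL recovery functions (an illustration, not a step copied from this CR):

  sudo gitlab-psql -qc \
    "select pg_is_in_recovery(),
            pg_last_wal_replay_lsn(),
            now() - pg_last_xact_replay_timestamp() as replay_lag"

Once replay_lag stays near zero, the node has caught up and can be taken out of maintenance.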

Rollback

Rollback steps - steps to be taken in the event of a need to roll back this change

Estimated Time to Complete (mins) - 15 minutes
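
The issue does not enumerate the rollback steps. A plausible sketch, assuming rollback is simply the reverse of the change steps (the commands mirror those above):

  # Take the node back out of load balancing
  knife node run_list add patroni-main-v14-109-db-gprd.c.gitlab-production.internal "role[gprd-base-db-patroni-maintenance]"
  ssh patroni-main-v14-109-db-gprd.c.gitlab-production.internal "sudo chef-client"
  # Stop and disable Patroni on the node
  ssh patroni-main-v14-109-db-gprd.c.gitlab-production.internal "sudo systemctl stop patroni && sudo systemctl disable patroni"
  # Finally, revert the config-mgmt Terraform merge request to remove the node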

Monitoring

Key metrics to observe
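
The issue leaves this section empty; based on the change summary, plausible metrics to watch include replication lag on the new replica, lock_manager LWLock wait events across the replica fleet, and the Patroni service dashboards.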

Change Reviewer checklist

C4 C3 C2 C1:

  • Check if the following applies:
    • The scheduled day and time of execution of the change is appropriate.
    • The change plan is technically accurate.
    • The change plan includes estimated timing values based on previous testing.
    • The change plan includes a viable rollback plan.
    • The specified metrics/monitoring dashboards provide sufficient visibility for the change.

C2 C1:

  • Check if the following applies:
    • The complexity of the plan is appropriate for the corresponding risk of the change. (i.e. the plan contains clear details).
    • The change plan includes success measures for all steps/milestones during the execution.
    • The change adequately minimizes risk within the environment/service.
    • The performance implications of executing the change are well-understood and documented.
    • The specified metrics/monitoring dashboards provide sufficient visibility for the change.
      • If not, is it possible (or necessary) to make changes to observability platforms for added visibility?
    • The change has a primary and secondary SRE with knowledge of the details available during the change window.
    • The "blocks deployments" and/or "blocks feature-flags" labels are applied as necessary

Change Technician checklist

  • Check if all items below are complete:
    • The change plan is technically accurate.
    • This Change Issue is linked to the appropriate Issue and/or Epic
    • Change has been tested in staging and results noted in a comment on this issue.
    • A dry-run has been conducted and results noted in a comment on this issue.
    • The change execution window respects the Production Change Lock periods.
    • For C1 and C2 change issues, the change event is added to the GitLab Production calendar.
    • For C1 and C2 change issues, the SRE on-call has been informed prior to change being rolled out. (In #production channel, mention @sre-oncall and this issue and await their acknowledgement.)
    • For C1 and C2 change issues, the SRE on-call provided approval with the eoc_approved label on the issue.
    • For C1 and C2 change issues, the Infrastructure Manager provided approval with the manager_approved label on the issue.
    • Release managers have been informed (If needed! Cases include DB change) prior to change being rolled out. (In #production channel, mention @release-managers and this issue and await their acknowledgment.)
    • There are currently no active incidents that are severity::1 or severity::2
    • If the change involves doing maintenance on a database host, an appropriate silence targeting the host(s) should be added for the duration of the change.