[GPRD] - Add a Patroni replica node for the Main cluster
Production Change
Change Summary
Related issue: #15987 (closed)
This is a known problem, previously investigated in scalability#2301.
With this CR, we are adding a replica to the GPRD Main cluster to reduce the likelihood of LWLock `lock_manager` saturation on the replica nodes.
For now, the team has agreed to implement the short-term solution outlined in #15987 (comment 1467449112): adding replica(s) to the Patroni cluster to help us avoid the ongoing production issues referenced there.
A prior similar CR to add a replica: #8576 (closed)
Change Details
- Services Impacted - ~"Service::Patroni" ~"Service::Postgres"
- Change Technician - @bshah11 @rhenchen.gitlab @alexander-sosna
- Change Reviewer - @NikolayS @rhenchen.gitlab @bshah11 @alexander-sosna
- Time tracking - 180 minutes
- Downtime Component - no downtime
Detailed steps for the change
Change Steps - steps to take to execute the change
Estimated Time to Complete (mins) - 2 hours
- [ ] Execute this change during a non-peak period.
- [ ] Set label ~change::in-progress: `/label ~change::in-progress`
- [ ] Merge and apply TF to deploy the new nodes: https://ops.gitlab.net/gitlab-com/gl-infra/config-mgmt/-/merge_requests/6284
- [ ] Wait for the VMs to be created (it can take a while to restore the data disk snapshot).
- [ ] Wait for the bootstrap to finish (check the node's serial port).
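  One way to follow the bootstrap, assuming access to the `gitlab-production` GCP project, is to tail the serial port output with `gcloud` (the zone below is an assumption; confirm it against the Terraform MR):

  ```shell
  # Stream serial port 1 of the new instance while the bootstrap runs.
  # NOTE: --zone is a guess; adjust to wherever the TF MR places the VM.
  gcloud compute instances get-serial-port-output \
    patroni-main-2004-108-db-gprd \
    --project gitlab-production \
    --zone us-east1-c
  ```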
- [ ] Mark the node as under maintenance (to keep it out of load balancing while it gets in sync). Execute:

  ```shell
  knife node run_list add patroni-main-2004-108-db-gprd.c.gitlab-production.internal "role[gprd-base-db-patroni-maintenance]"
  ssh patroni-main-2004-108-db-gprd.c.gitlab-production.internal "sudo chef-client"
  ```
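  To confirm the maintenance role landed, a quick sanity check (a sketch, assuming `knife` is configured against the production Chef server):

  ```shell
  # The node's run_list should now include the maintenance role.
  knife node show patroni-main-2004-108-db-gprd.c.gitlab-production.internal -a run_list
  ```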
- [ ] Start Patroni on the new node. Execute:

  ```shell
  sudo systemctl enable patroni && sudo systemctl start patroni
  ```
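  A minimal check that the service actually came up (assuming the unit is named `patroni`, as in the step above):

  ```shell
  # Exits non-zero if the unit failed to start.
  sudo systemctl status patroni --no-pager
  ```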
- [ ] Wait for the new node to perform WAL recovery. Check the PostgreSQL logs:

  ```shell
  sudo tail -n 500 -f /var/log/gitlab/postgresql/postgresql.csv
  ```
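  As a rough gauge of catch-up progress, replay lag can also be queried on the replica itself (a sketch using standard PostgreSQL replication functions; the column alias is illustrative):

  ```shell
  # How far the replica's WAL replay lags behind the current time.
  sudo gitlab-psql -qc "SELECT now() - pg_last_xact_replay_timestamp() AS replication_lag;"
  ```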
- [ ] Check that the node is part of the Patroni cluster. Execute:

  ```shell
  sudo gitlab-patronictl list
  ```
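  The new member should be listed with a replica role and a running/streaming state. A scriptable variant of the check (illustrative; it only asserts that the node name appears in the output):

  ```shell
  # Fail loudly if the new node is not yet listed as a cluster member.
  sudo gitlab-patronictl list | grep patroni-main-2004-108-db-gprd || echo "node not yet in cluster"
  ```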
- [ ] Mark the node as out of maintenance (to put it into load balancing). Execute:

  ```shell
  knife node run_list remove patroni-main-2004-108-db-gprd.c.gitlab-production.internal "role[gprd-base-db-patroni-maintenance]"
  ssh patroni-main-2004-108-db-gprd.c.gitlab-production.internal "sudo chef-client"
  ```
- [ ] Check that the node was added to the load balancer and is receiving workload.
  - Check for the node name in the list of replicas in Consul:

    ```shell
    dig @127.0.0.1 -p 8600 db-replica.service.consul. SRV
    ```

  - Check PgBouncer status:

    ```shell
    for c in /usr/local/bin/pgb-console*; do $c -c 'SHOW CLIENTS;'; done
    ```

  - Check PostgreSQL for connected clients:

    ```shell
    sudo gitlab-psql -qc "select count(*) from pg_stat_activity where backend_type = 'client backend' and pid <> pg_backend_pid() and datname <> 'postgres'"
    ```
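  Since traffic ramps up gradually, it can help to poll the Consul answer until the new node shows up (a convenience sketch; `+short` trims the `dig` output to the SRV records):

  ```shell
  # Re-run the SRV lookup every 5 seconds and watch for the new node name.
  watch -n 5 'dig @127.0.0.1 -p 8600 +short db-replica.service.consul. SRV'
  ```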
- [ ] Set label ~change::complete: `/label ~change::complete`
Rollback
Rollback steps - steps to be taken in the event of a need to rollback this change
Estimated Time to Complete (mins) - 15 minutes
- [ ] Drain connections out of the new node (or any node to be destroyed). For each node to be destroyed, perform steps 1, 2, and 3 of https://gitlab.com/gitlab-com/runbooks/-/blob/master/docs/patroni/scale-down-patroni.md#execution
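  Before destroying a node, it is worth confirming the drain actually completed (a sketch reusing the client-count query from the verification step; run on the node being removed, it should return 0):

  ```shell
  # Remaining application connections on this node; expect 0 after draining.
  sudo gitlab-psql -qc "select count(*) from pg_stat_activity where backend_type = 'client backend' and pid <> pg_backend_pid() and datname <> 'postgres'"
  ```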
- [ ] Revert MR https://ops.gitlab.net/gitlab-com/gl-infra/config-mgmt/-/merge_requests/6284 to destroy the node.
- [ ] Set label ~change::aborted: `/label ~change::aborted`
Monitoring
Key metrics to observe
- Metric: patroni Service Error Ratio for the Main Cluster
- Location: https://dashboards.gitlab.net/d/patroni-main/patroni-overview?orgId=1&var-PROMETHEUS_DS=Global&var-environment=gprd&from=now-3h&to=now&viewPanel=379598196
- What changes to this metric should prompt a rollback: sustained error rate increasing for more than 10 minutes
- Metric: patroni Service Error Ratio for the CI Cluster
- Location: https://dashboards.gitlab.net/d/patroni-ci-main/patroni-ci-overview?orgId=1&viewPanel=3846056002&from=now-3h&to=now&var-PROMETHEUS_DS=Global&var-environment=gprd
- What changes to this metric should prompt a rollback: sustained error rate increasing for more than 10 minutes
- Metric: pgbouncer SLI Error Ratio for the Main Cluster
- Location: https://dashboards.gitlab.net/d/patroni-main/patroni-overview?orgId=1&var-PROMETHEUS_DS=Global&var-environment=gprd&from=now-3h&to=now&viewPanel=3935075118
- What changes to this metric should prompt a rollback: sustained error rate increasing for more than 10 minutes
- Metric: pgbouncer SLI Error Ratio for the CI Cluster
- Location: https://dashboards.gitlab.net/d/patroni-ci-main/patroni-ci-overview?orgId=1&from=now-3h&to=now&var-PROMETHEUS_DS=Global&var-environment=gprd&viewPanel=3935075118
- What changes to this metric should prompt a rollback: sustained error rate increasing for more than 10 minutes
Change Reviewer checklist
Check if the following applies:
- [ ] The scheduled day and time of execution of the change is appropriate.
- [ ] The change plan is technically accurate.
- [ ] The change plan includes estimated timing values based on previous testing.
- [ ] The change plan includes a viable rollback plan.
- [ ] The specified metrics/monitoring dashboards provide sufficient visibility for the change.
Check if the following applies:
- [ ] The complexity of the plan is appropriate for the corresponding risk of the change (i.e. the plan contains clear details).
- [ ] The change plan includes success measures for all steps/milestones during the execution.
- [ ] The change adequately minimizes risk within the environment/service.
- [ ] The performance implications of executing the change are well-understood and documented.
- [ ] The specified metrics/monitoring dashboards provide sufficient visibility for the change.
  - If not, is it possible (or necessary) to make changes to observability platforms for added visibility?
- [ ] The change has a primary and secondary SRE with knowledge of the details available during the change window.
- [ ] The ~"blocks deployments" and/or ~"blocks feature-flags" labels are applied as necessary.
Change Technician checklist
Check if all items below are complete:
- [ ] The change plan is technically accurate.
- [ ] This Change Issue is linked to the appropriate Issue and/or Epic.
- [ ] The change has been tested in staging and results noted in a comment on this issue.
- [ ] A dry-run has been conducted and results noted in a comment on this issue.
- [ ] The change execution window respects the Production Change Lock periods.
- [ ] For C1 and C2 change issues, the change event is added to the GitLab Production calendar.
- [ ] For C1 and C2 change issues, the SRE on-call has been informed prior to the change being rolled out. (In the #production channel, mention @sre-oncall and this issue and await their acknowledgement.)
- [ ] For C1 and C2 change issues, the SRE on-call provided approval with the ~eoc_approved label on the issue.
- [ ] For C1 and C2 change issues, the Infrastructure Manager provided approval with the ~manager_approved label on the issue.
- [ ] Release managers have been informed (if needed; cases include DB changes) prior to the change being rolled out. (In the #production channel, mention @release-managers and this issue and await their acknowledgment.)
- [ ] There are currently no active incidents that are ~severity::1 or ~severity::2.
- [ ] If the change involves doing maintenance on a database host, an appropriate silence targeting the host(s) should be added for the duration of the change.