[GPRD] Remove N1 nodes from the Patroni main and CI clusters
Related issue: https://gitlab.com/gitlab-com/gl-infra/reliability/-/issues/17307
CR executed in GSTG: #8634 (closed)
In order to evaluate the hardware with a real production workload, we already deployed N2 and N2d nodes as read replicas in our production environments (GSTG, then GPRD). Now we plan to remove the old N1 nodes to increase the workload per node on the new hardware.
The nodes to be removed are:
- patroni-main-2004-07-gprd
- patroni-main-2004-06-gprd
- patroni-ci-2004-04-gprd
Change Details
- Services Impacted - ~"Service::Patroni" ~"Service::Postgres"
- Change Technician - @rhenchen.gitlab
- Change Reviewer - @bshah11 @alexander-sosna
- Time tracking - 180 minutes
- Downtime Component - no downtime
Detailed steps for the change
Change Steps - steps to take to execute the change
Estimated Time to Complete (mins) - 2 hours
- At non-peak period:
- Set label change::in-progress: `/label ~change::in-progress`
- Drain connections from node patroni-main-2004-07-gprd:
  - Disable chef on the host to be removed: `sudo chef-client-disable "Scale down Patroni cluster - CR https://gitlab.com/gitlab-com/gl-infra/production/-/issues/8648"`
  - Add a `tags` section to `/var/opt/gitlab/patroni/patroni.yml` on the node:

        tags:
          nofailover: true
          noloadbalance: true
          to_be_destroyed: true

  - Reload the Patroni config: `sudo systemctl reload patroni`
  - Verify that the reload took effect by checking for the node name in the list of replicas; if the name is absent, the reload worked: `dig @127.0.0.1 -p 8600 db-replica.service.consul. SRV` and `dig @127.0.0.1 -p 8600 ci-db-replica.service.consul. SRV`
  - Wait until all client connections are drained from the replica (this depends on the interval value set for the clients). Use the commands below to track the number of client connections; it can take a few minutes until all connections are gone. If there are still a few connections on the pgbouncers after 5 minutes, check whether there are actually any active connections in the DB (should be 0 most of the time): `for c in /usr/local/bin/pgb-console*; do $c -c 'SHOW CLIENTS;' | grep gitlabhq_production | grep -v gitlab-monitor; done | wc -l` and `gitlab-psql -qtc "SELECT count(*) FROM pg_stat_activity WHERE pid <> pg_backend_pid() AND datname = 'gitlabhq_production' AND state <> 'idle' AND usename <> 'gitlab-monitor' AND usename <> 'postgres_exporter';"` (a watch-loop sketch of these checks follows this item)
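The last step above can be tedious to poll by hand. The following is a minimal sketch that wraps the two checks from that step in a loop, assuming it is run on the node being drained; the 30-second poll interval and the zero-client exit condition are choices made for this sketch, not part of the documented procedure.

```bash
#!/usr/bin/env bash
# Sketch: watch client connections drain from this replica, using the same
# pgbouncer and pg_stat_activity checks as the step above.
# Run on the node being drained; the 30s poll interval is an arbitrary choice.
set -u

while true; do
  pgb_clients=$(for c in /usr/local/bin/pgb-console*; do
    "$c" -c 'SHOW CLIENTS;' | grep gitlabhq_production | grep -v gitlab-monitor
  done | wc -l)

  active=$(gitlab-psql -qtc "SELECT count(*) FROM pg_stat_activity
             WHERE pid <> pg_backend_pid()
               AND datname = 'gitlabhq_production'
               AND state <> 'idle'
               AND usename NOT IN ('gitlab-monitor', 'postgres_exporter');")

  echo "$(date -u '+%H:%M:%S') pgbouncer clients: ${pgb_clients}, active DB connections: ${active// /}"
  if [ "${pgb_clients}" -eq 0 ]; then
    break
  fi
  sleep 30
done
```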
- Drain connections from node patroni-main-2004-06-gprd:
  - Execute the same steps as used to drain connections from the node above.
- Drain connections from node patroni-ci-2004-04-gprd:
  - Execute the same steps as used to drain connections from the node above.
- Confirm the Patroni-Main Metrics and Patroni-CI Metrics dashboards still look healthy, especially for the replica nodes.
- Set label change::scheduled: `/label ~"change::scheduled"`
- Update this issue's due date to 1 week from today.
- After 1 week, if the environment has remained stable with less than 70% saturation on any resource, we can permanently remove the selected replicas (already out of the load balancer); a saturation-check sketch follows this list.
- Set label change::in-progress: `/label ~change::in-progress`
- Permanently destroy the drained replicas by merging MR: https://ops.gitlab.net/gitlab-com/gl-infra/config-mgmt/-/merge_requests/5376
- Set label change::complete: `/label ~change::complete`
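For the "less than 70% saturation" criterion above, the dashboards linked in the Monitoring section are the authoritative source. As a supplementary spot check, something along these lines could be run against the metrics API; the endpoint placeholder and the `environment`/`type`/`fqdn` label names are assumptions for this sketch and should be verified against the actual dashboards.

```bash
#!/usr/bin/env bash
# Sketch only: spot-check approximate per-node CPU utilization for the gprd
# patroni fleet via the Prometheus HTTP API, to support the <70% saturation check.
# PROM_URL is a placeholder, and the environment/type/fqdn labels are assumptions.
PROM_URL="https://<thanos-or-prometheus-endpoint>"

QUERY='1 - avg by (fqdn) (rate(node_cpu_seconds_total{environment="gprd", type=~"patroni|patroni-ci", mode="idle"}[5m]))'

curl -sG "${PROM_URL}/api/v1/query" --data-urlencode "query=${QUERY}" \
  | jq -r '.data.result[] | "\(.metric.fqdn)\t\(.value[1])"'
```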
Rollback
Rollback steps - steps to be taken in the event of a need to rollback this change
Estimated Time to Complete (mins) - 10 minutes to 4 hours
- If the nodes were NOT destroyed:
  - Restore node patroni-main-2004-07-gprd into the load balancer:
    - Restart chef on the node to restore the patroni flags to their defaults (adding it back into the load balancer):
      - Enable chef: `sudo chef-client-enable`
      - Re-run chef on the node: `sudo chef-client`
    - Check whether the node got back into the replication load-balance list: `dig @127.0.0.1 -p 8600 db-replica.service.consul. SRV` and `dig @127.0.0.1 -p 8600 ci-db-replica.service.consul. SRV`
    - If the node is not back in the load balancer, reload the Patroni config: `sudo systemctl reload patroni`
    - (A consolidated sketch of these restore steps follows this branch.)
  - Restore node patroni-main-2004-06-gprd into the load balancer:
    - Execute the same steps as used to recover the node above.
  - Restore node patroni-ci-2004-04-gprd into the load balancer:
    - Execute the same steps as used to recover the node above.
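A minimal consolidation of the restore steps above into a single script, assuming it is run on the drained replica and that the Consul SRV targets contain the node's short hostname; the 30-second wait before the check is a choice made for this sketch.

```bash
#!/usr/bin/env bash
# Sketch: restore a drained (but not destroyed) replica into the load balancer,
# following the steps above. Assumes it runs on the node itself and that the
# Consul SRV targets contain the node's short hostname.
set -euo pipefail

node=$(hostname -s)

sudo chef-client-enable
sudo chef-client

# Give Consul a moment to catch up, then check both replica service lists.
sleep 30
if dig @127.0.0.1 -p 8600 db-replica.service.consul. SRV +short | grep -q "${node}" \
   || dig @127.0.0.1 -p 8600 ci-db-replica.service.consul. SRV +short | grep -q "${node}"; then
  echo "${node} is back in the replica load-balance list."
else
  echo "${node} not yet in the replica list; reloading the Patroni config."
  sudo systemctl reload patroni
fi
```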
- If the nodes WERE destroyed:
  - Revert MR https://ops.gitlab.net/gitlab-com/gl-infra/config-mgmt/-/merge_requests/5376
  - Recover node patroni-main-2004-07-gprd:
    - Wait for the VMs to be created (it can take a while to restore the data disk snapshot).
    - Add the instances to the instance groups: manually execute the Plan and Apply jobs in the MR pipeline.
    - Wait for the bootstrap to finish (check the node's serial port).
    - Start patroni on the new node: `sudo systemctl enable patroni && sudo systemctl start patroni`
    - Wait for the new node to perform WAL recovery; check the postgresql logs: `sudo tail -n 500 -f /var/log/gitlab/postgresql/postgresql.csv` (a query sketch for confirming recovery status follows the rollback steps)
    - Check that the node is part of its respective patroni cluster: `sudo gitlab-patronictl list`
  - Recover node patroni-main-2004-06-gprd:
    - Execute the same steps as used to recover the node above.
  - Recover node patroni-ci-2004-04-gprd:
    - Execute the same steps as used to recover the node above.
  - Check that the nodes were added to the load balancer and are receiving workload:
    - Check for the node names in the list of replicas in Consul: `dig @127.0.0.1 -p 8600 db-replica.service.consul. SRV` and `dig @127.0.0.1 -p 8600 ci-db-replica.service.consul. SRV`
    - Check pgbouncer status: `for c in /usr/local/bin/pgb-console*; do $c -c 'SHOW CLIENTS;'; done`
    - Check PostgreSQL for connected clients: `sudo gitlab-psql -qc "select count(*) from pg_stat_activity where backend_type = 'client backend' and pid <> pg_backend_pid() and datname <> 'postgres'"`
- Set label change::aborted: `/label ~change::aborted`
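If nodes had to be rebuilt, the following sketch can help confirm that a rebuilt replica has caught up. `pg_is_in_recovery()` and the `pg_last_wal_*` functions are standard PostgreSQL; running them through `gitlab-psql` on the node, like the other commands in this plan, is an assumption.

```bash
#!/usr/bin/env bash
# Sketch: confirm a rebuilt replica has finished WAL recovery and is keeping up.
# Run on the recovered node; uses standard PostgreSQL functions via gitlab-psql.

# Should return 't' on a streaming replica.
sudo gitlab-psql -qtc "SELECT pg_is_in_recovery();"

# Difference between WAL received and WAL replayed (bytes) and the replay time lag;
# both should trend toward zero once recovery has caught up.
sudo gitlab-psql -qtc "
  SELECT pg_wal_lsn_diff(pg_last_wal_receive_lsn(), pg_last_wal_replay_lsn()) AS replay_lag_bytes,
         now() - pg_last_xact_replay_timestamp()                              AS replay_time_lag;"

# Cluster-wide view, as referenced in the steps above.
sudo gitlab-patronictl list
```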
Monitoring
Key metrics to observe
- Metric: patroni Service Error Ratio for the Main Cluster
- Location: https://dashboards.gitlab.net/d/patroni-main/patroni-overview?orgId=1&var-PROMETHEUS_DS=Global&var-environment=gprd&from=now-3h&to=now&viewPanel=379598196
- What changes to this metric should prompt a rollback: sustained error rate increasing for more than 10 minutes
- Metric: patroni Service Error Ratio for the CI Cluster
- Location: https://dashboards.gitlab.net/d/patroni-ci-main/patroni-ci-overview?orgId=1&viewPanel=3846056002&from=now-3h&to=now&var-PROMETHEUS_DS=Global&var-environment=gprd
- What changes to this metric should prompt a rollback: sustained error rate increasing for more than 10 minutes
- Metric: pgbouncer SLI Error Ratio for the Main Cluster
- Location: https://dashboards.gitlab.net/d/patroni-main/patroni-overview?orgId=1&var-PROMETHEUS_DS=Global&var-environment=gprd&from=now-3h&to=now&viewPanel=3935075118
- What changes to this metric should prompt a rollback: sustained error rate increasing for more than 10 minutes
- Metric: pgbouncer SLI Error Ratio for the CI Cluster
- Location: https://dashboards.gitlab.net/d/patroni-ci-main/patroni-ci-overview?orgId=1&from=now-3h&to=now&var-PROMETHEUS_DS=Global&var-environment=gprd&viewPanel=3935075118
- What changes to this metric should prompt a rollback: sustained error rate increasing for more than 10 minutes
Change Reviewer checklist
- Check if the following applies:
  - The scheduled day and time of execution of the change is appropriate.
  - The change plan is technically accurate.
  - The change plan includes estimated timing values based on previous testing.
  - The change plan includes a viable rollback plan.
  - The specified metrics/monitoring dashboards provide sufficient visibility for the change.
- Check if the following applies:
  - The complexity of the plan is appropriate for the corresponding risk of the change (i.e. the plan contains clear details).
  - The change plan includes success measures for all steps/milestones during the execution.
  - The change adequately minimizes risk within the environment/service.
  - The performance implications of executing the change are well-understood and documented.
  - The specified metrics/monitoring dashboards provide sufficient visibility for the change.
    - If not, is it possible (or necessary) to make changes to observability platforms for added visibility?
  - The change has a primary and secondary SRE with knowledge of the details available during the change window.
  - The labels blocks deployments and/or blocks feature-flags are applied as necessary.
Change Technician checklist
- Check if all items below are complete:
  - The change plan is technically accurate.
  - This Change Issue is linked to the appropriate Issue and/or Epic.
  - The change has been tested in staging and the results noted in a comment on this issue.
  - A dry-run has been conducted and the results noted in a comment on this issue.
  - The change execution window respects the Production Change Lock periods.
  - For C1 and C2 change issues, the change event is added to the GitLab Production calendar.
  - For C1 and C2 change issues, the SRE on-call has been informed prior to the change being rolled out. (In the #production channel, mention @sre-oncall and this issue and await their acknowledgement.)
  - For C1 and C2 change issues, the SRE on-call provided approval with the eoc_approved label on the issue.
  - For C1 and C2 change issues, the Infrastructure Manager provided approval with the manager_approved label on the issue.
  - Release managers have been informed (if needed; cases include DB changes) prior to the change being rolled out. (In the #production channel, mention @release-managers and this issue and await their acknowledgment.)
  - There are currently no active incidents that are severity1 or severity2.
  - If the change involves doing maintenance on a database host, an appropriate silence targeting the host(s) should be added for the duration of the change.