[GPRD] - Launch Evaluation Patroni replica nodes for Main and CI clusters with the new hardware
Production Change
Change Summary
Related issue: https://gitlab.com/gitlab-com/gl-infra/reliability/-/issues/17307
GSTG CR: #8569 (closed)
In order to evaluate the hardware with a real production workload, we'll deploy 1 node of each newly proposed hardware type as a Read Replica in our production environments (GSTG then GPRD). This will allow us to compare performance and resource usage metrics between nodes with the old hardware and the new hardware.
The target hardware for our Patroni Main cluster in GPRD is n2-highmem-128, and for the Patroni CI cluster we'll test the n2-highmem-96 (see &851 (closed) (comment 1319975492)). We should also evaluate the n2d-standard-224 in the Main cluster for comparison purposes.
Change Details
- Services Impacted - Service::Patroni Service::Postgres
- Change Technician - @rhenchen.gitlab
- Change Reviewer - @bshah11 @alexander-sosna
- Time tracking - 180 minutes
- Downtime Component - no downtime
Detailed steps for the change
Change Steps - steps to take to execute the change
Estimated Time to Complete (mins) - 2 hours
- At non-peak period - Set label change::in-progress: `/label ~change::in-progress`
- Check that the `gprd-base-db-patroni-nofailover` chef role was created by https://gitlab.com/gitlab-com/gl-infra/chef-repo/-/merge_requests/3079 - Run:

  ```
  knife list roles/ | grep patroni-nofailover
  ```

- Merge and apply TF to deploy the new nodes: https://ops.gitlab.net/gitlab-com/gl-infra/config-mgmt/-/merge_requests/5305
  - patroni-main-2004: 1x n2-highmem-128 and 1x n2d-standard-224
  - Patroni CI cluster: 1x n2-highmem-96
- Wait for the VMs to be created (it can take a while to restore the data disk snapshot)
- Add instances to instance groups: manually execute the Plan and Apply jobs in the MR https://ops.gitlab.net/gitlab-com/gl-infra/config-mgmt/-/merge_requests/5305 pipeline
- Wait for the bootstrap to finish (check the nodes' serial port)
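The serial port can be followed with `gcloud` while the nodes bootstrap; the instance name, zone, and project below are placeholders for this sketch, not values taken from this CR:

```shell
# Placeholder instance/zone/project -- substitute the real values for each new node.
gcloud compute instances get-serial-port-output patroni-main-2004-db-gprd \
  --project=gitlab-production \
  --zone=us-east1-b | tail -n 50
```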
- Start patroni on each new node - Execute:

  ```
  sudo systemctl enable patroni && sudo systemctl start patroni
  ```

- Wait for the new nodes to perform WAL recovery - Check the postgresql logs:

  ```
  sudo tail -n 500 -f /var/log/gitlab/postgresql/postgresql.csv
  ```

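Recovery progress can be confirmed by watching for the standard PostgreSQL recovery milestone in the log stream. The helper and the sample csv line below are illustrative, not part of the runbook:

```shell
# Succeeds if the log stream on stdin contains the milestone PostgreSQL
# emits once WAL replay is far enough along to serve read-only queries.
recovery_reached() {
  grep -q 'consistent recovery state reached'
}

# Hypothetical line of the kind postgresql.csv would contain:
sample='2023-09-01 12:00:00 UTC,,,1234,,LOG,00000,"consistent recovery state reached at 1A/2B3C4D5E"'
echo "$sample" | recovery_reached && echo "consistent recovery state reached"
```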
- Check that the nodes are part of their respective patroni clusters - Execute:

  ```
  sudo gitlab-patronictl list
  ```

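The patronictl table can be checked for the new member mechanically; the helper and sample row below are a sketch (the node name and columns are hypothetical), assuming the usual patronictl table output:

```shell
# Succeeds if the given node appears in a patronictl table row with a
# healthy state ("running" or "streaming"); reads the table on stdin.
member_healthy() {
  awk -v node="$1" '$0 ~ node && /running|streaming/ { ok = 1 } END { exit ok ? 0 : 1 }'
}

# Hypothetical row in the shape gitlab-patronictl list prints:
sample='| patroni-main-2004-db-gprd | 10.0.0.1 | Replica | streaming | | 42 |'
echo "$sample" | member_healthy patroni-main-2004 && echo "node joined the cluster"
```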
- Check that the nodes were added to the load balancer and are receiving workload
  - Check for the node name in the list of replicas in Consul:

    ```
    dig @127.0.0.1 -p 8600 db-replica.service.consul. SRV
    dig @127.0.0.1 -p 8600 ci-db-replica.service.consul. SRV
    ```

  - Check Pgbouncer status:

    ```
    for c in /usr/local/bin/pgb-console*; do $c -c 'SHOW CLIENTS;'; done
    ```

  - Check PostgreSQL for connected clients:

    ```
    sudo gitlab-psql -qc "select count(*) from pg_stat_activity where backend_type = 'client backend' and pid <> pg_backend_pid() and datname <> 'postgres'"
    ```

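Spotting the new node in the Consul SRV answer can be scripted; the helper and sample record below are illustrative (the node name is hypothetical), assuming the usual `dig` SRV answer format:

```shell
# Succeeds if the node name appears in a dig SRV answer read on stdin,
# e.g.: dig @127.0.0.1 -p 8600 db-replica.service.consul. SRV | node_in_replicas <node>
node_in_replicas() {
  grep -q "$1"
}

# Hypothetical SRV record of the shape dig returns for db-replica.service.consul:
sample='db-replica.service.consul. 0 IN SRV 1 1 5432 patroni-main-2004-db-gprd.node.east-us-2.consul.'
echo "$sample" | node_in_replicas patroni-main-2004 && echo "node is serving replica traffic"
```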
- Set label change::complete: `/label ~change::complete`
Rollback
Rollback steps - steps to be taken in the event of a need to rollback this change
Estimated Time to Complete (mins) - 15 minutes
- Drain connections out of the new nodes (or the nodes to be destroyed) - For each of the nodes to be destroyed, perform Steps 1, 2 and 3 of https://gitlab.com/gitlab-com/runbooks/-/blob/master/docs/patroni/scale-down-patroni.md#execution
- Revert MR https://ops.gitlab.net/gitlab-com/gl-infra/config-mgmt/-/merge_requests/5305 to destroy the nodes
- Remove instances from instance groups: manually execute the Plan and Apply jobs in the revert MR pipeline
- Set label change::aborted: `/label ~change::aborted`
Monitoring
Key metrics to observe
- Metric: patroni Service Error Ratio for the Main Cluster
- Location: https://dashboards.gitlab.net/d/patroni-main/patroni-overview?orgId=1&var-PROMETHEUS_DS=Global&var-environment=gprd&from=now-3h&to=now&viewPanel=379598196
- What changes to this metric should prompt a rollback: sustained error rate increasing for more than 10 minutes
- Metric: patroni Service Error Ratio for the CI Cluster
- Location: https://dashboards.gitlab.net/d/patroni-ci-main/patroni-ci-overview?orgId=1&viewPanel=3846056002&from=now-3h&to=now&var-PROMETHEUS_DS=Global&var-environment=gprd
- What changes to this metric should prompt a rollback: sustained error rate increasing for more than 10 minutes
- Metric: pgbouncer SLI Error Ratio for the Main Cluster
- Location: https://dashboards.gitlab.net/d/patroni-main/patroni-overview?orgId=1&var-PROMETHEUS_DS=Global&var-environment=gprd&from=now-3h&to=now&viewPanel=3935075118
- What changes to this metric should prompt a rollback: sustained error rate increasing for more than 10 minutes
- Metric: pgbouncer SLI Error Ratio for the CI Cluster
- Location: https://dashboards.gitlab.net/d/patroni-ci-main/patroni-ci-overview?orgId=1&from=now-3h&to=now&var-PROMETHEUS_DS=Global&var-environment=gprd&viewPanel=3935075118
- What changes to this metric should prompt a rollback: sustained error rate increasing for more than 10 minutes
Change Reviewer checklist
- Check if the following applies:
  - The scheduled day and time of execution of the change is appropriate.
  - The change plan is technically accurate.
  - The change plan includes estimated timing values based on previous testing.
  - The change plan includes a viable rollback plan.
  - The specified metrics/monitoring dashboards provide sufficient visibility for the change.
- Check if the following applies:
  - The complexity of the plan is appropriate for the corresponding risk of the change (i.e. the plan contains clear details).
  - The change plan includes success measures for all steps/milestones during the execution.
  - The change adequately minimizes risk within the environment/service.
  - The performance implications of executing the change are well-understood and documented.
  - The specified metrics/monitoring dashboards provide sufficient visibility for the change.
    - If not, is it possible (or necessary) to make changes to observability platforms for added visibility?
  - The change has a primary and secondary SRE with knowledge of the details available during the change window.
  - The labels `blocks deployments` and/or `blocks feature-flags` are applied as necessary.
Change Technician checklist
- Check if all items below are complete:
  - The change plan is technically accurate.
  - This Change Issue is linked to the appropriate Issue and/or Epic.
  - Change has been tested in staging and results noted in a comment on this issue.
  - A dry-run has been conducted and results noted in a comment on this issue.
  - The change execution window respects the Production Change Lock periods.
  - For C1 and C2 change issues, the change event is added to the GitLab Production calendar.
  - For C1 and C2 change issues, the SRE on-call has been informed prior to the change being rolled out. (In the #production channel, mention `@sre-oncall` and this issue and await their acknowledgement.)
  - For C1 and C2 change issues, the SRE on-call provided approval with the `eoc_approved` label on the issue.
  - For C1 and C2 change issues, the Infrastructure Manager provided approval with the `manager_approved` label on the issue.
  - Release managers have been informed (if needed! Cases include DB changes) prior to the change being rolled out. (In the #production channel, mention `@release-managers` and this issue and await their acknowledgment.)
  - There are currently no active incidents that are severity1 or severity2.
  - If the change involves doing maintenance on a database host, an appropriate silence targeting the host(s) should be added for the duration of the change.