[GSTG] - Launch Evaluation N2/N2D Patroni replica nodes for Main and CI clusters with the new hardware
Production Change
Change Summary
Related issue: https://gitlab.com/gitlab-com/gl-infra/reliability/-/issues/17307
In order to evaluate the new hardware with real production workload, we'll deploy one node of each proposed hardware type as a read replica in our production environments (GSTG, then GPRD). This will allow us to run real production workload on the new machines and compare performance and resource usage metrics between nodes on the old and new hardware.
In GSTG the patroni hardware is currently n1-standard-8 for all clusters, therefore the target hardware for GSTG will be n2-standard-8.
Change Details
- Services Impacted - ~"Service::Patroni" ~"Service::Postgres"
- Change Technician - @rhenchen.gitlab
- Change Reviewer - @bshah11 @alexander-sosna
- Time tracking - 180 minutes
- Downtime Component - no downtime
Detailed steps for the change
Change Steps - steps to take to execute the change
Estimated Time to Complete (mins) - 2 hours
- At non-peak period: set label ~change::in-progress (`/label ~change::in-progress`)
- Create patroni-nofailover chef roles; merge MR: https://gitlab.com/gitlab-com/gl-infra/chef-repo/-/merge_requests/3079
- Merge and apply TF to deploy the new nodes: https://ops.gitlab.net/gitlab-com/gl-infra/config-mgmt/-/merge_requests/5272
  - patroni-main-2004: 1x n2-standard-8 and 1x n2d-standard-8
- Wait for the new nodes to perform WAL recovery:
  - Check postgresql logs: `sudo tail -n 500 -f /var/log/gitlab/postgresql/postgresql.csv`
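A minimal sketch of what "WAL recovery is done" looks like in the log tail above. The CSV lines below are mocked for illustration (timestamps, LSNs and WAL file names are assumptions, not real GSTG output); on a node you would run the same `grep` against `/var/log/gitlab/postgresql/postgresql.csv`:

```shell
# Mocked postgresql.csv excerpt standing in for the real log file.
cat > /tmp/postgresql-sample.csv <<'EOF'
2023-01-01 00:00:01 UTC,,,1234,,LOG,00000,"database system was interrupted; last known up at 2022-12-31 23:50:00 UTC"
2023-01-01 00:00:02 UTC,,,1234,,LOG,00000,"restored log file ""0000000A00001234000000FF"" from archive"
2023-01-01 00:05:00 UTC,,,1234,,LOG,00000,"consistent recovery state reached at 12/34FF0000"
2023-01-01 00:05:01 UTC,,,1234,,LOG,00000,"started streaming WAL from primary at 12/35000000 on timeline 10"
EOF

# Recovery is complete once the node reports a consistent recovery state
# and starts streaming WAL from the primary.
grep -E 'restored log file|consistent recovery state|started streaming WAL' \
  /tmp/postgresql-sample.csv
```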
- Check if nodes are part of their respective patroni clusters:
  - Execute `sudo gitlab-patronictl list`
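A sketch of the success check for `gitlab-patronictl list`: the new node should appear as a running Replica in the cluster table. The table below is mocked (member names, hosts and lag values are assumptions); pipe real `patronictl` output through the same `grep` on an actual node:

```shell
# Mocked `gitlab-patronictl list` output for illustration only.
cat > /tmp/patronictl-sample.txt <<'EOF'
+ Cluster: patroni-main (7000000000000000000) --------+----+-----------+
| Member               | Host     | Role    | State   | TL | Lag in MB |
+----------------------+----------+---------+---------+----+-----------+
| patroni-main-2001-db | 10.0.0.1 | Leader  | running | 10 |           |
| patroni-main-2002-db | 10.0.0.2 | Replica | running | 10 |         0 |
| patroni-main-2004-db | 10.0.0.4 | Replica | running | 10 |         0 |
+----------------------+----------+---------+---------+----+-----------+
EOF

NEW_NODE=patroni-main-2004-db   # assumed name of the evaluation node
if grep -E "$NEW_NODE.*Replica.*running" /tmp/patronictl-sample.txt >/dev/null; then
  echo "$NEW_NODE joined the cluster as a running replica"
else
  echo "$NEW_NODE is not (yet) a running replica" >&2
fi
```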
- Check if nodes were added to the load balancer and are receiving workload:
  - Check for the node name in the list of replicas in Consul:
    - `dig @127.0.0.1 -p 8600 db-replica.service.consul. SRV`
    - `dig @127.0.0.1 -p 8600 ci-db-replica.service.consul. SRV`
  - Check Pgbouncer status: `for c in /usr/local/bin/pgb-console*; do $c -c 'SHOW CLIENTS;'; done;`
  - Check PostgreSQL for connected clients: `sudo gitlab-psql -qc "select count(*) from pg_stat_activity where backend_type = 'client backend' and pid <> pg_backend_pid() and datname <> 'postgres'"`
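The Consul check above succeeds when the new node's name appears among the SRV records. A minimal sketch, using a mocked `dig` answer section (hostnames, datacenter suffix and ports are assumptions); on a node the input would come from the `dig @127.0.0.1 -p 8600 … SRV` commands listed above:

```shell
# Mocked answer section of `dig ... db-replica.service.consul. SRV`.
cat > /tmp/dig-srv-sample.txt <<'EOF'
;; ANSWER SECTION:
db-replica.service.consul. 0 IN SRV 1 1 5432 patroni-main-2002-db.node.east.consul.
db-replica.service.consul. 0 IN SRV 1 1 5432 patroni-main-2004-db.node.east.consul.
EOF

NEW_NODE=patroni-main-2004-db   # assumed name of the evaluation node
if grep -q "$NEW_NODE" /tmp/dig-srv-sample.txt; then
  echo "$NEW_NODE is registered as a db-replica in Consul"
else
  echo "$NEW_NODE missing from Consul SRV records" >&2
fi
```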
- Set label ~change::complete (`/label ~change::complete`)
Rollback
Rollback steps - steps to be taken in the event of a need to rollback this change
Estimated Time to Complete (mins) - 1 hour
- Drain connections out of the new nodes (or the nodes to be destroyed): for each node to be destroyed, perform steps 1, 2 and 3 of https://gitlab.com/gitlab-com/runbooks/-/blob/master/docs/patroni/scale-down-patroni.md#execution
- Revert MR https://ops.gitlab.net/gitlab-com/gl-infra/config-mgmt/-/merge_requests/5272 to destroy the nodes
- Set label ~change::aborted (`/label ~change::aborted`)
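The drain step above is only safe to follow with node destruction once no client backends remain connected. A sketch of that polling loop, with a fabricated sequence of counts; on a real node each count would come from the `pg_stat_activity` query used in the verification steps, and a sleep between polls:

```shell
# Mocked sequence of client-backend counts returned by successive polls.
COUNTS="12 5 0"

for n in $COUNTS; do
  echo "client backends still connected: $n"
  if [ "$n" -eq 0 ]; then
    echo "node drained; safe to remove from the cluster"
    break
  fi
  # sleep 30   # real polling interval would go here
done
```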
Monitoring
Key metrics to observe
- Metric: patroni Service Error Ratio for the Main Cluster
- Location: https://dashboards.gitlab.net/d/patroni-main/patroni-overview?orgId=1&var-PROMETHEUS_DS=Global&var-environment=gstg&from=now-3h&to=now&viewPanel=379598196
- What changes to this metric should prompt a rollback: sustained error rate increasing for more than 10 minutes
- Metric: patroni Service Error Ratio for the CI Cluster
- Location: https://dashboards.gitlab.net/d/patroni-ci-main/patroni-ci-overview?orgId=1&viewPanel=3846056002&from=now-3h&to=now&var-PROMETHEUS_DS=Global&var-environment=gstg
- What changes to this metric should prompt a rollback: sustained error rate increasing for more than 10 minutes
- Metric: pgbouncer SLI Error Ratio for the Main Cluster
- Location: https://dashboards.gitlab.net/d/patroni-main/patroni-overview?orgId=1&var-PROMETHEUS_DS=Global&var-environment=gstg&from=now-3h&to=now&viewPanel=3935075118
- What changes to this metric should prompt a rollback: sustained error rate increasing for more than 10 minutes
- Metric: pgbouncer SLI Error Ratio for the CI Cluster
- Location: https://dashboards.gitlab.net/d/patroni-ci-main/patroni-ci-overview?orgId=1&from=now-3h&to=now&var-PROMETHEUS_DS=Global&var-environment=gstg&viewPanel=3935075118
- What changes to this metric should prompt a rollback: sustained error rate increasing for more than 10 minutes
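The rollback criterion repeated above ("sustained error rate increasing for more than 10 minutes") can be sketched as a check over per-minute samples. The threshold value and the sample series below are fabricated for illustration; in practice the samples would come from the linked Grafana panels:

```shell
THRESHOLD=0.01   # assumed acceptable error ratio, not an SLO from this issue
LIMIT=10         # minutes of sustained elevation that should prompt a rollback
SAMPLES="0.002 0.003 0.015 0.020 0.018 0.017 0.016 0.019 0.021 0.018 0.017 0.016 0.015"

streak=0; rollback=no
for s in $SAMPLES; do
  # awk does the float comparison; plain sh arithmetic is integer-only
  if [ "$(echo "$s $THRESHOLD" | awk '{print ($1 > $2)}')" -eq 1 ]; then
    streak=$((streak + 1))
  else
    streak=0
  fi
  if [ "$streak" -gt "$LIMIT" ]; then rollback=yes; fi
done
echo "rollback: $rollback"
```

With eleven consecutive elevated samples, the loop flags a rollback; a brief spike that recovers within the window resets the streak and does not.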
Change Reviewer checklist
- Check if the following applies:
  - The scheduled day and time of execution of the change is appropriate.
  - The change plan is technically accurate.
  - The change plan includes estimated timing values based on previous testing.
  - The change plan includes a viable rollback plan.
  - The specified metrics/monitoring dashboards provide sufficient visibility for the change.
- Check if the following applies:
  - The complexity of the plan is appropriate for the corresponding risk of the change (i.e. the plan contains clear details).
  - The change plan includes success measures for all steps/milestones during the execution.
  - The change adequately minimizes risk within the environment/service.
  - The performance implications of executing the change are well-understood and documented.
  - The specified metrics/monitoring dashboards provide sufficient visibility for the change.
    - If not, is it possible (or necessary) to make changes to observability platforms for added visibility?
  - The change has a primary and secondary SRE with knowledge of the details available during the change window.
  - The labels ~"blocks deployments" and/or ~"blocks feature-flags" are applied as necessary.
Change Technician checklist
- Check if all items below are complete:
  - The change plan is technically accurate.
  - This Change Issue is linked to the appropriate Issue and/or Epic.
  - Change has been tested in staging and results noted in a comment on this issue.
  - A dry-run has been conducted and results noted in a comment on this issue.
  - The change execution window respects the Production Change Lock periods.
  - For C1 and C2 change issues, the change event is added to the GitLab Production calendar.
  - For C1 and C2 change issues, the SRE on-call has been informed prior to the change being rolled out. (In the #production channel, mention @sre-oncall and this issue and await their acknowledgement.)
  - For C1 and C2 change issues, the SRE on-call provided approval with the eoc_approved label on the issue.
  - For C1 and C2 change issues, the Infrastructure Manager provided approval with the manager_approved label on the issue.
  - Release managers have been informed (if needed; cases include DB changes) prior to the change being rolled out. (In the #production channel, mention @release-managers and this issue and await their acknowledgment.)
  - There are currently no active incidents that are severity1 or severity2.
  - If the change involves doing maintenance on a database host, an appropriate silence targeting the host(s) should be added for the duration of the change.