[GPRD] Increase huge pages in C3-Highmem-127 Patroni nodes to allocate larger shared memory
Production Change
Change Summary
As discussed at production-engineering#24795 (comment 1893102328), we need to reserve a larger number of huge pages to allocate a larger shared_buffers
on the c3-highmem-127 Patroni nodes. Otherwise the huge pages go unused by Postgres, which then has to allocate regular 4 KB pages instead, causing memory-management overhead.
- Current vm.nr_hugepages value: 58368 (114 GB)
- Required vm.nr_hugepages value: 180736 (for shared_buffers=352GB + min_dynamic_shared_memory=0GB, with an additional 1GB)
- Recommended vm.nr_overcommit_hugepages value for 10% of RAM: 70954 (138 GB)
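As a sanity check, the page counts above are consistent with the default 2 MiB huge page size (an assumption; the page size is not stated in this issue):

```shell
# shared_buffers (352 GiB) + min_dynamic_shared_memory (0 GiB) + 1 GiB headroom = 353 GiB
echo $(( 353 * 1024 / 2 ))    # => 180736, the required vm.nr_hugepages

# Converting the other two counts back to GiB:
echo $(( 58368 * 2 / 1024 ))  # => 114, the current reservation
echo $(( 70954 * 2 / 1024 ))  # => 138, the recommended overcommit pool
```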
Change Details
- Services Impacted - ServicePatroni, ServicePostgres
- Change Technician - @rhenchen.gitlab
- Change Reviewer - @bshah11 @alexander-sosna
- Time tracking - 2 hours
- Downtime Component - no downtime
Set Maintenance Mode in GitLab
If your change involves scheduled maintenance, add a step to set and unset maintenance mode per our runbooks. This will make sure SLA calculations adjust for the maintenance period.
Detailed steps for the change
Change Steps - steps to take to execute the change
Estimated Time to Complete (mins) - 60 minutes
- Set label ~change::in-progress: `/label ~change::in-progress`
- Create a silence for env=gprd fqdn=patroni-main-v14-110-db-gprd.c.gitlab-production.internal
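If the silence is created from the command line rather than the Alertmanager UI, it could look roughly like this (a hedged sketch using amtool; the Alertmanager URL, author, and duration are placeholders, and the actual silencing workflow may differ):

```shell
# Hypothetical example: silence alerts matching the node for the change window.
amtool silence add \
  --alertmanager.url=https://alertmanager.example.internal \
  --author="rhenchen.gitlab" \
  --duration=2h \
  --comment="Huge pages change on patroni-main-v14-110" \
  env=gprd \
  fqdn=patroni-main-v14-110-db-gprd.c.gitlab-production.internal
```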
- Set the node as under maintenance to drain workload out of it:
  - Set the role role[gprd-base-db-patroni-maintenance] in the node's runlist and run chef-client on it:

    ```shell
    knife node run_list add patroni-main-v14-110-db-gprd.c.gitlab-production.internal "role[gprd-base-db-patroni-maintenance]"
    ssh patroni-main-v14-110-db-gprd.c.gitlab-production.internal "sudo chef-client"
    ```
- Wait for the workload to be drained - watch Thanos, the db-replica DNS record, active Postgres sessions, and PgBouncer clients:

  ```shell
  dig @127.0.0.1 -p 8600 db-replica.service.consul. SRV +short
  gitlab-psql -qtc "SELECT count(*) FROM pg_stat_activity WHERE pid <> pg_backend_pid() AND datname = 'gitlabhq_production' AND state <> 'idle' AND usename <> 'gitlab-monitor' AND usename <> 'postgres_exporter';"
  for c in /usr/local/bin/pgb-console*; do $c -c 'SHOW CLIENTS;' | grep gitlabhq_production | grep -v gitlab-monitor; done | wc -l
  ```
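The session check can also be polled until the count reaches zero, rather than re-run by hand (a sketch composed from the query above, run on the node itself; the 30-second interval is arbitrary):

```shell
# Poll the active-session count from the drain check until it reaches zero.
while :; do
  active=$(gitlab-psql -qtc "SELECT count(*) FROM pg_stat_activity WHERE pid <> pg_backend_pid() AND datname = 'gitlabhq_production' AND state <> 'idle' AND usename <> 'gitlab-monitor' AND usename <> 'postgres_exporter';" | tr -d ' ')
  echo "active sessions: ${active}"
  [ "${active}" = "0" ] && break
  sleep 30
done
```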
- Merge MR https://gitlab.com/gitlab-com/gl-infra/chef-repo/-/merge_requests/4733 and run chef-client:

  ```shell
  ssh patroni-main-v14-110-db-gprd.c.gitlab-production.internal "sudo chef-client"
  ```
- Check the new huge page settings and whether they are reflected in /proc/meminfo (HugePages_Total):

  ```shell
  ssh patroni-main-v14-110-db-gprd.c.gitlab-production.internal "sudo sysctl -a | grep hugepages"
  ssh patroni-main-v14-110-db-gprd.c.gitlab-production.internal "cat /proc/meminfo | grep HugePages"
  ```
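For orientation, abridged output along these lines would indicate the change has been applied (illustrative only, not captured output; at this point HugePages_Free should still equal HugePages_Total, since Postgres has not been restarted yet):

```
vm.nr_hugepages = 180736
vm.nr_overcommit_hugepages = 70954

HugePages_Total:  180736
HugePages_Free:   180736
HugePages_Rsvd:        0
HugePages_Surp:        0
```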
- Bounce PostgreSQL on the node so it starts using huge pages:

  ```shell
  ssh patroni-main-v14-110-db-gprd.c.gitlab-production.internal "sudo gitlab-patronictl restart gprd-patroni-main-v14 patroni-main-v14-110-db-gprd.c.gitlab-production.internal"
  ```
- Wait for the node to get back in sync and check whether Postgres is allocating huge pages:

  ```shell
  ssh patroni-main-v14-110-db-gprd.c.gitlab-production.internal "sudo gitlab-patronictl list"
  ssh patroni-main-v14-110-db-gprd.c.gitlab-production.internal "gitlab-psql -c 'show huge_pages;'"
  ssh patroni-main-v14-110-db-gprd.c.gitlab-production.internal "cat /proc/meminfo | grep HugePages"
  ```

  - Check that HugePages_Free in meminfo is smaller than HugePages_Total.
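The free-versus-total comparison can be scripted as a pass/fail check (a small sketch run on the node; it simply compares the two meminfo counters named above):

```shell
# Succeeds if Postgres has claimed at least some of the reserved huge pages.
awk '/^HugePages_Total/ {t=$2} /^HugePages_Free/ {f=$2}
     END { if (f < t) print "OK: " t-f " huge pages in use";
           else { print "No huge pages in use"; exit 1 } }' /proc/meminfo
```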
- Remove the node from maintenance to resume workload:
  - Remove the role role[gprd-base-db-patroni-maintenance] from the node's runlist and run chef-client on it:

    ```shell
    knife node run_list remove patroni-main-v14-110-db-gprd.c.gitlab-production.internal "role[gprd-base-db-patroni-maintenance]"
    ssh patroni-main-v14-110-db-gprd.c.gitlab-production.internal "sudo chef-client"
    ```
- Wait for the workload to be resumed on the node - watch Thanos.
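One way to confirm the node is serving replica traffic again is to re-run the DNS check from the drain step and verify the node is back in the db-replica service:

```shell
# The node's hostname should reappear in the SRV answers once it is back in rotation.
dig @127.0.0.1 -p 8600 db-replica.service.consul. SRV +short | grep patroni-main-v14-110
```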
- Set label ~change::complete: `/label ~change::complete`
Rollback
Rollback steps - steps to be taken in the event of a need to rollback this change
Estimated Time to Complete (mins) - 10 minutes
- Set the node as under maintenance to drain workload out of it:
  - Set the role role[gprd-base-db-patroni-maintenance] in the node's runlist and run chef-client on it:

    ```shell
    knife node run_list add patroni-main-v14-110-db-gprd.c.gitlab-production.internal "role[gprd-base-db-patroni-maintenance]"
    ssh patroni-main-v14-110-db-gprd.c.gitlab-production.internal "sudo chef-client"
    ```
- Wait for the workload to be drained - watch Thanos, the db-replica DNS record, active Postgres sessions, and PgBouncer clients:

  ```shell
  dig @127.0.0.1 -p 8600 db-replica.service.consul. SRV +short
  gitlab-psql -qtc "SELECT count(*) FROM pg_stat_activity WHERE pid <> pg_backend_pid() AND datname = 'gitlabhq_production' AND state <> 'idle' AND usename <> 'gitlab-monitor' AND usename <> 'postgres_exporter';"
  for c in /usr/local/bin/pgb-console*; do $c -c 'SHOW CLIENTS;' | grep gitlabhq_production | grep -v gitlab-monitor; done | wc -l
  ```
- Set label ~change::aborted: `/label ~change::aborted`
Monitoring
Key metrics to observe
- Metric: patroni Service Error Ratio for the Main Cluster
  - Location: https://dashboards.gitlab.net/d/patroni-main/patroni-overview?orgId=1&var-PROMETHEUS_DS=Global&var-environment=gprd&from=now-3h&to=now&viewPanel=379598196
  - What changes to this metric should prompt a rollback: a sustained increase in the error rate lasting more than 10 minutes
- Metric: pgbouncer SLI Error Ratio for the Main Cluster
  - Location: https://dashboards.gitlab.net/d/patroni-main/patroni-overview?orgId=1&var-PROMETHEUS_DS=Global&var-environment=gprd&from=now-3h&to=now&viewPanel=3935075118
  - What changes to this metric should prompt a rollback: a sustained increase in the error rate lasting more than 10 minutes
Change Reviewer checklist
- Check if the following applies:
  - The scheduled day and time of execution of the change is appropriate.
  - The change plan is technically accurate.
  - The change plan includes estimated timing values based on previous testing.
  - The change plan includes a viable rollback plan.
  - The specified metrics/monitoring dashboards provide sufficient visibility for the change.
- Check if the following applies:
  - The complexity of the plan is appropriate for the corresponding risk of the change (i.e. the plan contains clear details).
  - The change plan includes success measures for all steps/milestones during the execution.
  - The change adequately minimizes risk within the environment/service.
  - The performance implications of executing the change are well-understood and documented.
  - The specified metrics/monitoring dashboards provide sufficient visibility for the change.
    - If not, is it possible (or necessary) to make changes to observability platforms for added visibility?
  - The change has a primary and secondary SRE with knowledge of the details available during the change window.
  - The change window has been agreed with Release Managers in advance of the change. If the change is planned for APAC hours, this issue has an agreed pre-change approval.
  - The labels ~"blocks deployments" and/or ~"blocks feature-flags" are applied as necessary.
Change Technician checklist
- Check if all items below are complete:
  - The change plan is technically accurate.
  - This Change Issue is linked to the appropriate Issue and/or Epic.
  - Change has been tested in staging and results noted in a comment on this issue.
  - A dry-run has been conducted and results noted in a comment on this issue.
  - The change execution window respects the Production Change Lock periods.
  - For C1 and C2 change issues, the change event is added to the GitLab Production calendar.
  - For C1 and C2 change issues, the SRE on-call has been informed prior to change being rolled out. (In the #production channel, mention @sre-oncall and this issue and await their acknowledgement.)
  - For C1 and C2 change issues, the SRE on-call provided approval with the ~eoc_approved label on the issue.
  - For C1 and C2 change issues, the Infrastructure Manager provided approval with the ~manager_approved label on the issue.
  - Release managers have been informed prior to any C1, C2, or ~"blocks deployments" change being rolled out. (In the #production channel, mention @release-managers and this issue and await their acknowledgment.)
  - There are currently no active incidents that are ~severity::1 or ~severity::2.
  - If the change involves doing maintenance on a database host, an appropriate silence targeting the host(s) should be added for the duration of the change.