# CR [GPRD] Reduce max_wal_size from 128GB to 64GB

## Production Change

### Change Summary

This iterates on the following issues by reducing `max_wal_size` from 128GB to 64GB:
- CR [GPRD] Increase max_wal_size from 64GB to 128GB
- CR [GPRD] Increase max_wal_size from 32GB to 64GB
- CR [GPRD] Increase max_wal_size from 16GB to 32GB
We are reverting because the last increase showed diminishing returns.
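Before starting, it can be useful to record a baseline of the current setting and checkpoint counters, so the effect of the change is visible afterwards. A minimal sketch, reusing the knife/gitlab-psql access pattern from the steps below; the `pg_stat_bgwriter` query is illustrative and not part of the original plan:

```shell
# Baseline: the current value (expected to be 128GB before the change)
knife ssh 'gprd-base-db-patroni-ci-v14' "gitlab-psql --csv -t -c 'SHOW max_wal_size;'"

# Baseline: checkpoint counters. If checkpoints_req climbs sharply after the
# change, the smaller max_wal_size is forcing extra (requested) checkpoints.
knife ssh 'gprd-base-db-patroni-ci-v14' \
  "gitlab-psql --csv -t -c 'SELECT checkpoints_timed, checkpoints_req FROM pg_stat_bgwriter;'"
```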
### Change Details

- **Services Impacted** - ServicePatroni, ServicePatroniCI
- **Change Technician** - @alexander-sosna
- **Change Reviewer** -
- **Time tracking** - < 60 minutes
- **Downtime Component** - No downtime

## Detailed steps for the change

### Change Steps - steps to take to execute the change

**Estimated Time to Complete (mins)** - < 60
- [ ] Set label ~change::in-progress: `/label ~change::in-progress`
- [ ] Merge MR [db-benchmarking] Reduce max_wal_size from 128GB to 64GB to change the value of the `max_wal_size` parameter to 64GB.

Note: No further step is strictly necessary, since Chef will automatically apply all changes. All steps beyond this point exist to monitor for smooth operation and to reduce reaction time should action become required.
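For reference, the merged change boils down to a single parameter. An illustrative fragment of the resulting PostgreSQL configuration (the real change lives in the Chef cookbook / Patroni configuration, where the attribute name may differ):

```
# postgresql.conf, as rendered by Chef/Patroni (illustrative fragment)
max_wal_size = '64GB'   # previously '128GB'
```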
#### Reduce max_wal_size on CI

- [ ] Check which node is Leader and which have `noloadbalance` set:
  `knife ssh 'patroni-ci-v14-02-db-gprd.c.gitlab-production.internal' "sudo gitlab-patronictl list"`
- [ ] Run Chef on a `noloadbalance` node, e.g.:
  `knife ssh 'patroni-ci-v14-02-db-gprd.c.gitlab-production.internal' "sudo chef-client"`
- [ ] Validate that the new value is set:
  `knife ssh 'gprd-base-db-patroni-ci-v14' "gitlab-psql --csv -t -c 'SHOW max_wal_size;'"`
- [ ] Observe metrics
- [ ] Run Chef on all nodes, two at a time in parallel, while observing the metrics:
  `knife ssh -C2 'gprd-base-db-patroni-ci-v14' "sudo chef-client"`
- [ ] Validate that the new value is set:
  `knife ssh 'gprd-base-db-patroni-ci-v14' "gitlab-psql --csv -t -c 'SHOW max_wal_size;'"`
- [ ] Observe metrics
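One database-level complement to the "observe metrics" steps above: once the new ceiling is applied, the WAL directory should converge toward or below roughly 64GB between checkpoints (`max_wal_size` is a soft limit, so brief excursions above it are expected). A sketch reusing the knife pattern from this plan; the `pg_ls_waldir()` query is illustrative and assumes sufficient privileges:

```shell
# Total size of pg_wal; should trend toward/below the new 64GB ceiling
knife ssh 'gprd-base-db-patroni-ci-v14' \
  "gitlab-psql --csv -t -c 'SELECT pg_size_pretty(sum(size)) FROM pg_ls_waldir();'"
```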
#### Reduce max_wal_size on main

- [ ] Check which node is Leader and which have `noloadbalance` set:
  `knife ssh 'patroni-main-v14-02-db-gprd.c.gitlab-production.internal' "sudo gitlab-patronictl list"`
- [ ] Run Chef on a `noloadbalance` node, e.g.:
  `knife ssh 'patroni-main-v14-02-db-gprd.c.gitlab-production.internal' "sudo chef-client"`
- [ ] Validate that the new value is set:
  `knife ssh 'gprd-base-db-patroni-main-v14' "gitlab-psql --csv -t -c 'SHOW max_wal_size;'"`
- [ ] Observe metrics
- [ ] Run Chef on all nodes, two at a time in parallel, while observing the metrics:
  `knife ssh -C2 'gprd-base-db-patroni-main-v14' "sudo chef-client"`
- [ ] Validate that the new value is set:
  `knife ssh 'gprd-base-db-patroni-main-v14' "gitlab-psql --csv -t -c 'SHOW max_wal_size;'"`
- [ ] Observe metrics
- [ ] Set label ~change::complete: `/label ~change::complete`
## Rollback

### Rollback steps - steps to be taken in the event of a need to rollback this change

**Estimated Time to Complete (mins)** -

- [ ] Revert MR Reduce max_wal_size from 128GB to 64GB.
- [ ] Run the following steps in parallel:
  - [ ] Run Chef on all CI nodes, two at a time in parallel, while observing the metrics:
    `knife ssh -C2 'gprd-base-db-patroni-ci-v14' "sudo chef-client"`
  - [ ] Run Chef on all main nodes, two at a time in parallel, while observing the metrics:
    `knife ssh -C2 'gprd-base-db-patroni-main-v14' "sudo chef-client"`
- [ ] Validate that the old value is set on CI:
  `knife ssh 'gprd-base-db-patroni-ci-v14' "gitlab-psql --csv -t -c 'SHOW max_wal_size;'"`
- [ ] Validate that the old value is set on main:
  `knife ssh 'gprd-base-db-patroni-main-v14' "gitlab-psql --csv -t -c 'SHOW max_wal_size;'"`
- [ ] Set label ~change::aborted: `/label ~change::aborted`
## Monitoring

### Key metrics to observe

- Metric: CI transactions_replica SLI Error Ratio
  - Location: https://dashboards.gitlab.net/d/patroni-ci-main/patroni-ci3a-overview?orgId=1&var-PROMETHEUS_DS=PA258B30F88C30650&var-environment=gprd&viewPanel=438786388&from=now-30m&to=now
  - What changes to this metric should prompt a rollback: an increase of the PostgreSQL error ratio on a node after the new setting is applied should prompt removing and rebuilding the affected Replica
- Metric: CI transactions_primary SLI Error Ratio
  - Location: https://dashboards.gitlab.net/d/patroni-ci-main/patroni-ci3a-overview?orgId=1&var-PROMETHEUS_DS=PA258B30F88C30650&var-environment=gprd&viewPanel=2278903563&from=now-30m&to=now
  - What changes to this metric should prompt a rollback: an increase of the PostgreSQL error ratio on a node after the new setting is applied should prompt a switchover of the Primary to a healthy Replica, and then removing and rebuilding the affected node
- Metric: MAIN transactions_replica SLI Error Ratio
  - Location: https://dashboards.gitlab.net/d/patroni-main/patroni-ci3a-overview?orgId=1&var-PROMETHEUS_DS=PA258B30F88C30650&var-environment=gprd&viewPanel=438786388&from=now-30m&to=now
  - What changes to this metric should prompt a rollback: an increase of the PostgreSQL error ratio on a node after the new setting is applied should prompt removing and rebuilding the affected Replica
- Metric: MAIN transactions_primary SLI Error Ratio
  - Location: https://dashboards.gitlab.net/d/patroni-main/patroni-ci3a-overview?orgId=1&var-PROMETHEUS_DS=PA258B30F88C30650&var-environment=gprd&viewPanel=2278903563&from=now-30m&to=now
  - What changes to this metric should prompt a rollback: an increase of the PostgreSQL error ratio on a node after the new setting is applied should prompt a switchover of the Primary to a healthy Replica, and then removing and rebuilding the affected node
## Change Reviewer checklist

- [ ] Check if the following applies:
  - The scheduled day and time of execution of the change is appropriate.
  - The change plan is technically accurate.
  - The change plan includes estimated timing values based on previous testing.
  - The change plan includes a viable rollback plan.
  - The specified metrics/monitoring dashboards provide sufficient visibility for the change.
- [ ] Check if the following applies:
  - The complexity of the plan is appropriate for the corresponding risk of the change (i.e. the plan contains clear details).
  - The change plan includes success measures for all steps/milestones during the execution.
  - The change adequately minimizes risk within the environment/service.
  - The performance implications of executing the change are well-understood and documented.
  - The specified metrics/monitoring dashboards provide sufficient visibility for the change.
    - If not, is it possible (or necessary) to make changes to observability platforms for added visibility?
  - The change has a primary and secondary SRE with knowledge of the details available during the change window.
  - The change window has been agreed with Release Managers in advance of the change. If the change is planned for APAC hours, this issue has an agreed pre-change approval.
  - The labels ~"blocks deployments" and/or ~"blocks feature-flags" are applied as necessary.
## Change Technician checklist

- [ ] Check if all items below are complete:
  - The change plan is technically accurate.
  - This Change Issue is linked to the appropriate Issue and/or Epic.
  - Change has been tested in staging and results noted in a comment on this issue.
  - A dry-run has been conducted and results noted in a comment on this issue.
  - The change execution window respects the Production Change Lock periods.
  - For C1 and C2 change issues, the change event is added to the GitLab Production calendar.
  - For C1 and C2 change issues, the SRE on-call has been informed prior to the change being rolled out. (In the #production channel, mention @sre-oncall and this issue and await their acknowledgement.)
  - For C1 and C2 change issues, the SRE on-call provided approval with the ~eoc_approved label on the issue.
  - For C1 and C2 change issues, the Infrastructure Manager provided approval with the ~manager_approved label on the issue.
  - Release managers have been informed prior to any C1, C2, or ~"blocks deployments" change being rolled out. (In the #production channel, mention @release-managers and this issue and await their acknowledgment.)
  - There are currently no active incidents that are ~severity1 or ~severity2.
  - If the change involves doing maintenance on a database host, an appropriate silence targeting the host(s) should be added for the duration of the change.