[08/16/2022 - 00:00 UTC] - GSTG - Disable post failover maintenance script in the Patroni 2004 clusters
Production Change
Change Summary
We need to disable the post failover/switchover maintenance in the Patroni 2004 clusters, because the `vacuumdb --analyze-only` processes that start 30 minutes after the promotion of a node will conflict with the GIN `REINDEX CONCURRENTLY` processes that are required for the OS upgrade, as pointed out at db-migration!299 (comment 1059259353).
The GIN reindexing can take several hours to execute.
The post-failover-maintenance.sh script is currently only necessary after a DB MVU (Major Version Upgrade), to analyze and collect table statistics that are zeroed during an MVU.
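For context, the core of such a post-MVU maintenance pass is essentially an analyze-only statistics rebuild; a minimal sketch of what the script boils down to (the actual script contents and the parallelism are assumptions, not copied from the repository):

```shell
# Hedged sketch, not the actual post-failover-maintenance.sh contents:
# rebuild planner statistics across all databases after a major upgrade.
vacuumdb --analyze-only --all --jobs=4   # --jobs value is illustrative
```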
This MR needs to be applied before the CR [08/17/2022 - 00:00 UTC] - Patroni Clusters OS Upgrade - GSTG - Final - #7577 (closed).
Change Details
- Services Impacted - ~"Service::Patroni" ~"Service::PatroniCI"
- Change Technician - @rhenchen.gitlab
- Change Reviewer - @bshah11
- Time tracking - 1 hour
- Downtime Component - yes
Detailed steps for the change
Change Steps - steps to take to execute the change
Estimated Time to Complete (mins) - 1 hour
- [ ] Set label ~change::in-progress: `/label ~change::in-progress`
- [ ] Get a green light from @sre-oncall and @release-managers
- [ ] Take note of the current Patroni leader nodes:

  ```shell
  ssh patroni-ci-2004-01-db-gstg.c.gitlab-staging-1.internal "sudo gitlab-patronictl list"
  ssh patroni-main-2004-01-db-gstg.c.gitlab-staging-1.internal "sudo gitlab-patronictl list"
  ssh patroni-ci-01-db-gstg.c.gitlab-staging-1.internal "sudo gitlab-patronictl list"
  ssh patroni-01-db-gstg.c.gitlab-staging-1.internal "sudo gitlab-patronictl list"
  ```
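  The output is a table per cluster; a hedged illustration (member names, hosts, and timeline values are hypothetical) of what to record before proceeding:

  ```shell
  # + Cluster: gstg-patroni-ci-2004 ------+----------+---------+---------+----+-----------+
  # | Member                              | Host     | Role    | State   | TL | Lag in MB |
  # | patroni-ci-2004-01-db-gstg...       | 10.x.x.x | Leader  | running | 12 |           |
  # | patroni-ci-2004-02-db-gstg...       | 10.x.x.x | Replica | running | 12 |         0 |
  # Note which member shows Role "Leader" in each of the four clusters.
  ```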
- [ ] Disable Chef Client on the gstg Patroni clusters:

  ```shell
  knife ssh "roles:gstg-base-db-patroni-2004" "sudo chef-client-disable"
  knife ssh "roles:gstg-base-db-patroni-ci-2004" "sudo chef-client-disable"
  knife ssh "roles:gstg-base-db-patroni" "sudo chef-client-disable"
  knife ssh "roles:gstg-base-db-patroni-ci" "sudo chef-client-disable"
  ```
- [ ] Merge MR - https://gitlab.com/gitlab-com/gl-infra/chef-repo/-/merge_requests/2188
- [ ] Re-enable and execute Chef Client on just one reader node, to evaluate whether the Patroni service will restart with the modification:

  ```shell
  ssh <reader_fqdn> "sudo chef-client-enable; sudo chef-client;"
  ```
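  A hedged way to confirm the run converged before touching more nodes, relying only on chef-client's exit status:

  ```shell
  ssh <reader_fqdn> "sudo chef-client-enable; sudo chef-client" \
    && echo "chef run converged" \
    || echo "chef run failed - stop and investigate"
  ```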
- [ ] Check the logs to evaluate whether Chef restarted the Patroni service with the modification:

  ```shell
  ssh <reader_fqdn> "tail -n 10000 /var/log/gitlab/patroni/patroni.log"
  ssh <reader_fqdn> "tail -n 10000 /var/log/gitlab/postgresql/postgresql.log"
  ```
- [ ] Check if the on_role_change setting has changed on the evaluation node:

  ```shell
  ssh <reader_fqdn> "sudo cat /var/opt/gitlab/patroni/patroni.yml | grep on_role_change"
  ```
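  A hedged note on what to expect (the post-MR value depends on what MR 2188 actually changes): before the change, the callback should reference the maintenance script, e.g. `on_role_change: /var/opt/gitlab/patroni/scripts/post-failover-maintenance.sh`; afterwards that reference should be gone, so an empty grep is the success signal:

  ```shell
  # Prints the fallback message when the setting is absent.
  ssh <reader_fqdn> "sudo grep on_role_change /var/opt/gitlab/patroni/patroni.yml || echo 'on_role_change not set'"
  ```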
- [ ] Re-enable and execute Chef Client on all nodes of the gstg Patroni clusters:

  ```shell
  knife ssh "roles:gstg-base-db-patroni-2004" "sudo chef-client-enable; sudo chef-client;"
  knife ssh "roles:gstg-base-db-patroni-ci-2004" "sudo chef-client-enable; sudo chef-client;"
  knife ssh "roles:gstg-base-db-patroni" "sudo chef-client-enable; sudo chef-client;"
  knife ssh "roles:gstg-base-db-patroni-ci" "sudo chef-client-enable; sudo chef-client;"
  ```
- [ ] Check if the on_role_change setting has changed for each node in the Patroni clusters in gstg:

  ```shell
  ssh_cluster_regex.sh "patroni.*gstg" "sudo cat /var/opt/gitlab/patroni/patroni.yml | grep on_role_change"
  ```
- [ ] Check the Patroni and PostgreSQL logs on the existing leaders to see if there was any restart/promotion.
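  A hedged way to scan for those events (exact log phrasing varies between Patroni versions, so treat the pattern as illustrative):

  ```shell
  # Search the recent Patroni log for restart/promotion markers.
  ssh <leader_fqdn> "sudo tail -n 10000 /var/log/gitlab/patroni/patroni.log | grep -iE 'promot|demot|restart'"
  ```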
- [ ] Force a reload of the Patroni clusters' config files (no downtime):

  ```shell
  ssh patroni-main-2004-01-db-gstg.c.gitlab-staging-1.internal "sudo gitlab-patronictl reload gstg-patroni-main-pg12-2004"
  ssh patroni-ci-2004-01-db-gstg.c.gitlab-staging-1.internal "sudo gitlab-patronictl reload gstg-patroni-ci-2004"
  ssh patroni-01-db-gstg.c.gitlab-staging-1.internal "sudo gitlab-patronictl reload pg12-ha-cluster-stg"
  ssh patroni-ci-01-db-gstg.c.gitlab-staging-1.internal "sudo gitlab-patronictl reload gstg-patroni-ci"
  ```
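  As a hedged sanity check, `gitlab-patronictl list` marks members whose running configuration still diverges from the desired one, so a listing with no pending-restart marker suggests the reload was enough:

  ```shell
  # Re-list one cluster; a "Pending restart" flag would mean the reload
  # did not fully apply the change.
  ssh patroni-01-db-gstg.c.gitlab-staging-1.internal "sudo gitlab-patronictl list"
  ```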
- [ ] Force a switchover of the Patroni clusters to validate that the change took effect (downtime: up to 5 minutes):

  ```shell
  ssh patroni-01-db-gstg.c.gitlab-staging-1.internal "sudo gitlab-patronictl switchover pg12-ha-cluster-stg"
  ssh patroni-ci-01-db-gstg.c.gitlab-staging-1.internal "sudo gitlab-patronictl switchover gstg-patroni-ci"
  ```
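  By default `switchover` prompts interactively for a candidate and a confirmation; a hedged non-interactive form (the candidate hostname is a placeholder) would be:

  ```shell
  ssh patroni-01-db-gstg.c.gitlab-staging-1.internal \
    "sudo gitlab-patronictl switchover pg12-ha-cluster-stg --candidate <replica_fqdn> --force"
  ```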
- [ ] Check the Patroni logs on the new cluster leaders.
- [ ] Set label ~change::complete: `/label ~change::complete`
Rollback
Rollback steps - steps to be taken in the event of a need to rollback this change
Estimated Time to Complete (mins) - 15 minutes
- [ ] Revert MR - https://gitlab.com/gitlab-com/gl-infra/chef-repo/-/merge_requests/2188
- [ ] Execute Chef Client on both gstg patroni-2004 clusters:

  ```shell
  knife ssh "roles:gstg-base-db-patroni-2004" "sudo chef-client"
  knife ssh "roles:gstg-base-db-patroni-ci-2004" "sudo chef-client"
  ```
- [ ] Check if the on_role_change setting has changed back for each node in the 2004 Patroni clusters in gstg:

  ```shell
  ssh_cluster_regex.sh "patroni.*2004.*gstg" "sudo cat /var/opt/gitlab/patroni/patroni.yml | grep on_role_change"
  ```
- [ ] Force a reload of the Patroni clusters' config files (no downtime):

  ```shell
  ssh patroni-main-2004-01-db-gstg.c.gitlab-staging-1.internal "sudo gitlab-patronictl reload gstg-patroni-main-pg12-2004"
  ssh patroni-ci-2004-01-db-gstg.c.gitlab-staging-1.internal "sudo gitlab-patronictl reload gstg-patroni-ci-2004"
  ```
- [ ] Set label ~change::aborted: `/label ~change::aborted`
Monitoring
- Check if there was a node restart after the chef-client execution
  - Location:

    ```shell
    tail -n 10000 /var/log/gitlab/patroni/patroni.log
    tail -n 10000 /var/log/gitlab/postgresql/postgresql.log
    ```

  - If there was any node restart after the chef-client execution, the change in production needs to be made node by node.
- Check if post-failover-maintenance.sh is being executed on the new writer node after the switchover
  - Location:

    ```shell
    # The bracket trick excludes the grep process itself from the match.
    ps -ef | grep "[p]ost-failover-maintenance.sh"
    ```

  - If the script is being executed after a promotion, the change didn't work; roll back the change.
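  The summary notes that the `vacuumdb` processes only start about 30 minutes after a promotion, so a hedged follow-up check past that mark is worth doing (the pattern is illustrative and covers both the script and its child processes):

  ```shell
  # Re-run ~30+ minutes after the switchover; no output from pgrep means
  # nothing was started by the on_role_change callback.
  pgrep -af 'post-failover-maintenance.sh|vacuumdb' || echo "no post-failover maintenance processes running"
  ```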
Key metrics to observe
There are no metrics that can indicate the success or failure of this change; the evaluation needs to be done through log analysis.
Change Reviewer checklist
Check if the following applies:

- The scheduled day and time of execution of the change is appropriate.
- The change plan is technically accurate.
- The change plan includes estimated timing values based on previous testing.
- The change plan includes a viable rollback plan.
- The specified metrics/monitoring dashboards provide sufficient visibility for the change.
Check if the following applies:

- The complexity of the plan is appropriate for the corresponding risk of the change (i.e. the plan contains clear details).
- The change plan includes success measures for all steps/milestones during the execution.
- The change adequately minimizes risk within the environment/service.
- The performance implications of executing the change are well-understood and documented.
- The specified metrics/monitoring dashboards provide sufficient visibility for the change.
- If not, is it possible (or necessary) to make changes to observability platforms for added visibility?
- The change has a primary and secondary SRE with knowledge of the details available during the change window.
- The labels ~"blocks deployments" and/or ~"blocks feature-flags" are applied as necessary.
Change Technician checklist
Check if all items below are complete:

- The change plan is technically accurate.
- This Change Issue is linked to the appropriate Issue and/or Epic
- Change has been tested in staging and results noted in a comment on this issue.
- A dry-run has been conducted and results noted in a comment on this issue.
- For C1 and C2 change issues, the change event is added to the GitLab Production calendar.
- For C1 and C2 change issues, the SRE on-call has been informed prior to the change being rolled out. (In the #production channel, mention @sre-oncall and this issue and await their acknowledgement.)
- For C1 and C2 change issues, the SRE on-call provided approval with the eoc_approved label on the issue.
- For C1 and C2 change issues, the Infrastructure Manager provided approval with the manager_approved label on the issue.
- Release managers have been informed (if needed; cases include DB changes) prior to the change being rolled out. (In the #production channel, mention @release-managers and this issue and await their acknowledgment.)
- There are currently no active incidents that are severity::1 or severity::2.
- If the change involves doing maintenance on a database host, an appropriate silence targeting the host(s) should be added for the duration of the change.