GSTG - C2 - Convert Patroni-CI to LVM
Production Change
Change Summary
Staging CR:
We'll replace all nodes in the `patroni-ci` database cluster with nodes using a 4-disk LVM data volume, in order to work around GCP's 64 TB size limit for a single persistent disk.
As it is not possible to convert "single disk" snapshots into "4 disk volume" snapshots, we will need to create an empty first node with LVM and perform a slower `wal-g` backup restore. Once restored, we can use this node to take a "4 disk volume" snapshot and create the remaining LVM nodes from it.
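For context, a minimal sketch of how a 4-disk LVM data volume could be assembled on a node is shown below. This is illustrative only: the device names, volume group/logical volume names and mount point are assumptions, and the real layout is provisioned by our config-mgmt code rather than by hand.

```shell
# Illustrative sketch only - device and volume names are assumptions, not the provisioned layout.
# Turn the four GCP persistent disks into LVM physical volumes.
sudo pvcreate /dev/sdb /dev/sdc /dev/sdd /dev/sde

# Group them into a single volume group for the PostgreSQL data volume.
sudo vgcreate vg_data /dev/sdb /dev/sdc /dev/sdd /dev/sde

# One striped logical volume spanning all four disks, so the data volume
# can grow beyond GCP's 64 TB limit for a single persistent disk.
sudo lvcreate --stripes 4 --extents 100%FREE --name lv_data vg_data

# Filesystem and mount point for the PostgreSQL data directory.
sudo mkfs.ext4 /dev/vg_data/lv_data
sudo mount /dev/vg_data/lv_data /var/opt/gitlab/postgresql
```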
Rollout Strategy for Staging
- Day 1
  - Deploy the 1st `LVM node` as a `backup` node (no workload on it, and enabled to take snapshots);
  - Use `wal-g` to restore the database onto the 1st `LVM node` - it can take several hours/days to fully restore;
  - Deploy the 2nd `LVM node` as an Active Replica through snapshot restore; this node will receive workload for 24 hours to test whether there is any performance impact under real GitLab workload (see the cluster-state check sketch after this list);
- Day 2
  - Create the remaining `LVM nodes` as Active Replicas;
  - Perform the Writer switchover onto an `LVM node`;
  - Drain connections from all `single disk node` Active Replicas;
    - Note: allow a 7-day grace period during which we can revert back to non-LVM nodes;
- Day 3
  - Destroy all `single disk nodes`.
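At each stage, cluster membership and replica sync state can be spot-checked from any Patroni node. A minimal sketch, assuming the Omnibus `gitlab-patronictl` and `gitlab-psql` wrappers are available on the hosts (otherwise `patronictl` with the cluster's config file behaves the same way):

```shell
# List cluster members with their role (Leader / Replica / Sync Standby), state and lag.
sudo gitlab-patronictl list

# Cross-check streaming replication directly on the current Leader.
sudo gitlab-psql -c "SELECT application_name, state, sync_state, replay_lag FROM pg_stat_replication;"
```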
Change Details
- Services Impacted - ~"Service::PatroniCI"
- Change Technician - @rhenchen.gitlab
- Change Reviewer - @bshah11 @bprescott_ @alexander-sosna @vporalla
- Scheduled Date and Time (UTC in format YYYY-MM-DD HH:MM) - 2025-09-29 00:00
- Time tracking - 4560
- Downtime Component - no downtime
Important
If your change involves scheduled maintenance, add a step to set and unset maintenance mode per our runbooks. This will make sure SLA calculations adjust for the maintenance period.
Preparation
Note
The following checklists must be done in advance, before setting the label ~"change::scheduled"
Change Reviewer checklist
- [ ] Check if the following applies:
  - The scheduled day and time of execution of the change is appropriate.
  - The change plan is technically accurate.
  - The change plan includes estimated timing values based on previous testing.
  - The change plan includes a viable rollback plan.
  - The specified metrics/monitoring dashboards provide sufficient visibility for the change.
- [ ] Check if the following applies:
  - The complexity of the plan is appropriate for the corresponding risk of the change (i.e. the plan contains clear details).
  - The change plan includes success measures for all steps/milestones during the execution.
  - The change adequately minimizes risk within the environment/service.
  - The performance implications of executing the change are well-understood and documented.
  - The specified metrics/monitoring dashboards provide sufficient visibility for the change.
    - If not, is it possible (or necessary) to make changes to observability platforms for added visibility?
  - The change has a primary and secondary SRE with knowledge of the details available during the change window.
  - The change window has been agreed with Release Managers in advance of the change. If the change is planned for APAC hours, this issue has an agreed pre-change approval.
  - The labels ~"blocks deployments" and/or ~"blocks feature-flags" are applied as necessary.
Change Technician checklist
- [ ] The Change Criticality has been set appropriately and requirements have been reviewed.
- [ ] The change plan is technically accurate.
- [ ] The rollback plan is technically accurate and detailed enough to be executed by anyone with access.
- [ ] This Change Issue is linked to the appropriate Issue and/or Epic.
- [ ] Change has been tested in staging and results noted in a comment on this issue.
- [ ] A dry-run has been conducted and results noted in a comment on this issue.
- [ ] The change execution window respects the Production Change Lock periods.
- [ ] Once all boxes above are checked, mark the change request as scheduled: /label ~"change::scheduled"
- [ ] For C1 and C2 change issues, the change event is added to the GitLab Production calendar by the change-scheduler bot. It is scheduled to run every 2 hours.
- [ ] For C1 change issues, a Senior Infrastructure Manager has provided approval with the manager_approved label on the issue.
- [ ] For C2 change issues, an Infrastructure Manager has provided approval with the manager_approved label on the issue.
  - Mention @gitlab-org/saas-platforms/inframanagers in this issue to request approval and provide visibility to all infrastructure managers.
- [ ] For C1, C2, or ~"blocks deployments" change issues, confirm with Release Managers that the change does not overlap or hinder any release process (in the #production channel, mention @release-managers and this issue and await their acknowledgment).
Detailed steps for the change
Pre-execution steps
Note
The following steps should be done right at the scheduled time of the change request. The preparation steps are listed below.
- [ ] Make sure all tasks in the Change Technician checklist are done.
- [ ] For C1 and C2 change issues, the SRE on-call has been informed prior to the change being rolled out. (Search the PagerDuty schedule for "SRE 8-hour" to find who will be on-call at the scheduled day and time. SREs on-call must be informed of plannable C1 changes at least 2 weeks in advance.)
  - [ ] The SRE on-call provided approval with the eoc_approved label on the issue.
- [ ] For C1, C2, or ~"blocks deployments" change issues, Release Managers have been informed prior to the change being rolled out. (In the #production channel, mention @release-managers and this issue and await their acknowledgment.)
- [ ] There are currently no active incidents that are ~severity::1 or ~severity::2.
- [ ] If the change involves doing maintenance on a database host, an appropriate silence targeting the host(s) should be added for the duration of the change (a possible silence sketch follows this list).
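If a silence is needed, it can be created through the Alertmanager UI as usual; a command-line sketch with `amtool` is shown below. The Alertmanager URL and the `fqdn` matcher are assumptions - adjust them to the actual endpoint and alert labels used for these hosts.

```shell
# Sketch: silence alerts for the patroni-ci staging hosts for the change window.
# The URL and label matcher below are placeholders, not the real values.
amtool silence add \
  --alertmanager.url="https://alertmanager.example.internal" \
  --author="rhenchen" \
  --duration="8h" \
  --comment="GSTG patroni-ci LVM conversion (this CR)" \
  fqdn=~"patroni-ci-v17-.*-db-gstg.c.gitlab-staging-1.internal"
```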
Change steps - steps to take to execute the change
Estimated Time to Complete (mins) - Estimated Time to Complete in Minutes
Day 1 - 2025-09-29
- [ ] Notify the EOC.
- [ ] Set label ~"change::in-progress": /label ~change::in-progress
- [ ] Merge MR https://ops.gitlab.net/gitlab-com/gl-infra/config-mgmt/-/merge_requests/12175 - Deploy the 1st `LVM node` with a 4-disk LVM volume within the CI cluster.
- [ ] Perform a `wal-g` restore on the 1st `LVM node` (see the backup verification sketch after this list):

  ```shell
  sudo su - gitlab-psql
  rm -rf /var/opt/gitlab/postgresql/data17/*
  /usr/bin/envdir /etc/wal-g.d/env /opt/wal-g/bin/wal-g backup-fetch /var/opt/gitlab/postgresql/data17 LATEST
  ```

- [ ] Start Patroni on the 1st `LVM node` using `sudo service patroni start` and wait for it to get in sync;
- [ ] Run the snapshot script on the new `LVM node`: `sudo su - gitlab-psql -c "/usr/local/bin/gcs-snapshot.sh"`
- [ ] Merge MR https://ops.gitlab.net/gitlab-com/gl-infra/config-mgmt/-/merge_requests/12176 - Deploy a 2nd `LVM node` as an Active Replica within the CI cluster and wait for it to get in sync;
- [ ] Set label ~"change::scheduled": /label ~change::scheduled
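To sanity-check the restore source and its progress, the same `wal-g` environment used in the restore step can list the available backups; `LATEST` in `backup-fetch` resolves to the newest entry. A minimal sketch, reusing the paths from the steps above:

```shell
# List the backups wal-g can see in the configured storage.
sudo su - gitlab-psql -c '/usr/bin/envdir /etc/wal-g.d/env /opt/wal-g/bin/wal-g backup-list'

# Rough progress indicator while backup-fetch is running: watch the data directory grow.
sudo du -sh /var/opt/gitlab/postgresql/data17
```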
Day 2 - 2025-09-30
- [ ] Notify the EOC.
- [ ] Set label ~"change::in-progress": /label ~change::in-progress
- [ ] Compare the performance of LVM vs single disk nodes (check average query execution time, I/O latency and CPU usage - look specifically at CPU:System and CPU:Wait).
- [ ] Merge MR https://ops.gitlab.net/gitlab-com/gl-infra/config-mgmt/-/merge_requests/12177 - Deploy the remaining LVM Active Replica nodes in the CI cluster.
- [ ] Wait for all `LVM nodes` to get in sync;
- [ ] Drain connections from the `single disk nodes` by granting the `maintenance` Chef role:

  ```shell
  NODES=("patroni-ci-v17-01-db-gstg.c.gitlab-staging-1.internal")
  NODES+=("patroni-ci-v17-02-db-gstg.c.gitlab-staging-1.internal")
  NODES+=("patroni-ci-v17-03-db-gstg.c.gitlab-staging-1.internal")
  NODES+=("patroni-ci-v17-04-db-gstg.c.gitlab-staging-1.internal")
  for node in "${NODES[@]}"; do
    knife node run_list add "$node" "role[gstg-base-db-patroni-maintenance]"
  done
  knife ssh "roles:gstg-base-db-patroni-ci-v17" "sudo chef-client"
  ```

- [ ] Perform the Writer switchover to an `LVM node` using the `switchover_patroni_leader` Ansible Playbook;
  - Validate that the playbook inventory for the gstg CI cluster (`inventory/gstg-ci.yml`) is up to date with the most recent node list, including the recently deployed `LVM nodes`:

  ```shell
  export PYTHONUNBUFFERED=1
  cd ~/src/db-migration/dbre-toolkit
  ansible -i inventory/gstg-ci.yml all -m ping
  ansible-playbook -i inventory/gstg-ci.yml switchover_patroni_leader.yml -e "non_interactive=false" 2>&1 \
    | ts | tee -a ansible_switchover_patroni_leader_gstg-ci_$(date +%Y%m%d).log
  ```

- [ ] Drain connections from the former Writer `single disk node` by granting the `maintenance` Chef role (a connection-drain spot check sketch follows this list):

  ```shell
  knife node run_list add patroni-ci-v17-01-db-gstg.c.gitlab-staging-1.internal "role[gstg-base-db-patroni-maintenance]"
  ssh patroni-ci-v17-01-db-gstg.c.gitlab-staging-1.internal "sudo chef-client"
  ```

- [ ] Set label ~"change::scheduled": /label ~change::scheduled
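Before moving on from a drained node, a quick spot check helps confirm it is no longer serving application traffic. A minimal sketch, assuming the Omnibus `gitlab-psql` wrapper and PostgreSQL 10+ (`backend_type`); a handful of internal connections (e.g. Patroni's own) are expected to remain:

```shell
# Count client backends still connected to the drained node, grouped by user.
sudo gitlab-psql -c "SELECT usename, count(*) FROM pg_stat_activity WHERE backend_type = 'client backend' GROUP BY usename ORDER BY count(*) DESC;"
```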
Day 3 - 2025-10-07
- [ ] Notify the EOC.
- [ ] Set label ~"change::in-progress": /label ~change::in-progress
- [ ] Merge MR https://ops.gitlab.net/gitlab-com/gl-infra/config-mgmt/-/merge_requests/12189 - Destroy all `single disk nodes`.
- [ ] Set label ~"change::complete": /label ~change::complete
Rollback
Rollback steps - steps to be taken in the event of a need to rollback this change
Estimated Time to Complete (mins) - 30 minutes
After Day 1
- [ ] Revert the 2 MRs that deployed the 1st and 2nd `LVM nodes`.
- [ ] Set label ~"change::aborted": /label ~change::aborted
After Day 2
- [ ] Roll back connections onto the `single disk nodes` by removing the `maintenance` Chef role:

  ```shell
  NODES=("patroni-ci-v17-01-db-gstg.c.gitlab-staging-1.internal")
  NODES+=("patroni-ci-v17-02-db-gstg.c.gitlab-staging-1.internal")
  NODES+=("patroni-ci-v17-03-db-gstg.c.gitlab-staging-1.internal")
  NODES+=("patroni-ci-v17-04-db-gstg.c.gitlab-staging-1.internal")
  for node in "${NODES[@]}"; do
    knife node run_list remove "$node" "role[gstg-base-db-patroni-maintenance]"
  done
  knife ssh "roles:gstg-base-db-patroni-ci-v17" "sudo chef-client"
  ```

- [ ] Wait for the `single disk nodes` to sync;
- [ ] Check which node is the Writer and remove it from the connection draining list below;
- [ ] Drain connections from the `LVM nodes` by granting the `maintenance` Chef role:

  ```shell
  NODES=("patroni-ci-v17-101-db-gstg.c.gitlab-staging-1.internal")
  NODES+=("patroni-ci-v17-102-db-gstg.c.gitlab-staging-1.internal")
  NODES+=("patroni-ci-v17-103-db-gstg.c.gitlab-staging-1.internal")
  NODES+=("patroni-ci-v17-104-db-gstg.c.gitlab-staging-1.internal")
  for node in "${NODES[@]}"; do
    knife node run_list add "$node" "role[gstg-base-db-patroni-maintenance]"
  done
  knife ssh "roles:gstg-base-db-patroni-ci-v17" "sudo chef-client"
  ```

- [ ] Perform the Writer switchover to a `single disk node` using the `switchover_patroni_leader` Ansible Playbook;
- [ ] Drain connections from the former Writer `LVM node` by granting the `maintenance` Chef role:

  ```shell
  knife node run_list add <former writer> "role[gstg-base-db-patroni-maintenance]"
  ssh <former writer> "sudo chef-client"
  ```

- [ ] Revert the MRs that deployed all the `LVM nodes`.
- [ ] Set label ~"change::aborted": /label ~change::aborted
After Day 3
There's no rollback after Day 3. It would be necessary to open a new CR to perform a migration from LVM back to single disk nodes.
Monitoring
Key metrics to observe
- Metric: Query Time in Transaction per Server
  - Location: https://dashboards.gitlab.net/d/patroni-ci-main/patroni-ci3a-overview?from=now-3h&orgId=1&timezone=utc&to=now&var-PROMETHEUS_DS=mimir-gitlab-gstg&var-environment=gstg&viewPanel=panel-126
  - What changes to this metric should prompt a rollback: if the `LVM nodes` (names starting with `10x`) are significantly slower than the `single disk nodes` (names starting with `0x`).
- Metric: Disk Write Total Time
  - Location: https://dashboards.gitlab.net/d/patroni-ci-main/patroni-ci3a-overview?from=now-3h&orgId=1&timezone=utc&to=now&var-PROMETHEUS_DS=mimir-gitlab-gprd&var-environment=gprd&viewPanel=panel-85
  - What changes to this metric should prompt a rollback: if the `LVM nodes` (names starting with `10x`) are significantly slower than the `single disk nodes` (names starting with `0x`).
- Metric: Disk Read Total Time
  - Location: https://dashboards.gitlab.net/d/patroni-ci-main/patroni-ci3a-overview?from=now-3h&orgId=1&timezone=utc&to=now&var-PROMETHEUS_DS=mimir-gitlab-gprd&var-environment=gprd&viewPanel=panel-84
  - What changes to this metric should prompt a rollback: if the `LVM nodes` (names starting with `10x`) are significantly slower than the `single disk nodes` (names starting with `0x`).
- Metric: CPU IO Wait
  - Location: URL
  - What changes to this metric should prompt a rollback: if the `LVM nodes` (names starting with `10x`) show significantly higher values than the `single disk nodes` (names starting with `0x`).
- Metric: CPU System/Kernel
  - Location: URL
  - What changes to this metric should prompt a rollback: if the `LVM nodes` (names starting with `10x`) show significantly higher values than the `single disk nodes` (names starting with `0x`) (a node-level spot-check sketch follows this list).
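In addition to the dashboards, an ad-hoc node-level comparison between one `LVM node` and one `single disk node` can be useful during the 24-hour observation window. A minimal sketch using standard sysstat tools, assuming they are installed on the hosts:

```shell
# Per-device I/O latency and utilisation (see the r_await / w_await and %util columns),
# sampled every 5 seconds, 3 samples.
iostat -x 5 3

# CPU breakdown including %iowait and %sys, sampled every 5 seconds, 3 samples.
mpstat 5 3
```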