# Perform Main PG14 upgrade test in db-benchmarking - Review and test the fix to avoid data corruption during the upgrade playbook
## Summary
Before performing any PG14 upgrade test for the Main cluster in the GPRD environment, we would like to validate the entire process in the db-benchmarking environment first. Once we feel comfortable with all the ansible-playbook changes/enhancements and have validated the PG14 upgrade procedure in db-benchmarking, we will test it in the GPRD environment prior to the actual GPRD Main PG14 upgrade.
Reference issues:
- Build a PG12 Source and Target Standby Main cluster in db-benchmarking env
- Incident Review: Data corruption during Upgrade PostgreSQL to PG14 on CI cluster in GPRD
- Execute PG14 Upgrade, validate logical replication lag and execute performance tests in the CI production cluster
- https://gitlab.com/gitlab-com/gl-infra/reliability/-/issues/23986
- https://gitlab.slack.com/archives/C011GC1UWQ5/p1692044538243379?thread_ts=1692044242.251799&cid=C011GC1UWQ5
## Acceptance criteria
- [ ] Execute the PG14 upgrade test in db-benchmarking
- [ ] Review and test the ansible-playbook fix to avoid data corruption during the upgrade
- [ ] Test with multiple logical replication slots (see the sketch after this list)
- [ ] Implement and test reverse replication for the test cluster
- [ ] Test switchover
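For the multiple-slots item above, a minimal sketch of the shape of the setup, assuming the `logical_replication_0N` / `logical_subscription_0N` naming used in the cleanup steps later in this issue; the actual table-to-publication assignment is done by the playbook, and `<SOURCE_HOST>` is a placeholder:

```
# Hypothetical illustration only; the upgrade playbook creates these itself.
# Source (PG12 primary): one publication per future slot.
for i in {1..4}; do
  sudo gitlab-psql -Xc "create publication logical_replication_0$i"
done

# Target (PG14 leader): one subscription per publication, each carrying its
# own replication slot, so initial copy and replay proceed in parallel.
sudo gitlab-psql -Xc "create subscription logical_subscription_01
  connection 'host=<SOURCE_HOST> dbname=gitlabhq_production'
  publication logical_replication_01
  with (slot_name = 'logical_replication_slot_01')"
```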
## Change Summary

As discussed in https://gitlab.com/gitlab-com/gl-infra/reliability/-/issues/17594, we want to test most steps of the upcoming PostgreSQL 14 upgrade in the db-benchmarking environment. The only step we are not testing is the switchover to the new cluster (that step applies only to a test in the GPRD environment).
We will execute the procedure described in the action steps and observe the behavior.
Prior to this, we performed an identical CR, production#15759 (closed), for the CI cluster, and also performed production#10855 (closed), production#8611 (closed), and production#8290 (closed), tests similar in nature on the Main and Registry clusters but without the now-developed upgrade steps.
## Change Details
- Services Impacted - Database
- Change Technician - @NikolayS @bshah11 @vitabaks
- Change Reviewer - @NikolayS @alexander-sosna @vitabaks @rhenchen.gitlab
- Time tracking - 4 hours
- Downtime Component - 0
## Detailed steps for the change

### Change Steps - steps to take to execute the change
Estimated Time to Complete (mins) - 15 min
#### REQUIREMENTS

- [ ] [db-benchmarking] Create Source `db-benchmarking-patroni-ci-v12` and Target `db-benchmarking-patroni-ci-v14` clusters for the next test
- [ ] Validate the test cluster is in sync, not lagging more than a few seconds behind the source cluster:

  ```
  root@patroni-ci-v14-101-db-db-benchmarking.c.gitlab-db-benchmarking.internal:~# gitlab-psql -c 'SELECT now() - pg_last_xact_replay_timestamp();'
  ```
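  To watch the lag converge rather than spot-check it, a small polling loop along these lines (our addition, not part of the playbook) can be left running on the target:

  ```
  # Print replay lag once a minute; stop once it stays under a few seconds.
  for i in {1..60}; do
    gitlab-psql -tAXc "SELECT now() - pg_last_xact_replay_timestamp();" | ts
    sleep 60
  done
  ```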
#### PREPARE

- [ ] Set label ~"change::in-progress": `/label ~change::in-progress`
- [ ] Get the console VM ready for action
  - [ ] SSH to the console VM (`ssh -A console-01-sv-db-benchmarking.c.gitlab-db-benchmarking.internal`)
  - [ ] Configure the `dbupgrade` user:
    - [ ] Disable screen sharing to reduce the risk of exposing the private key
    - [ ] Change to user `dbupgrade`: `sudo su - dbupgrade`
    - [ ] Copy the `dbupgrade` user's private key from 1Password to `~/.ssh/id_dbupgrade`
    - [ ] `chmod 600 ~/.ssh/id_dbupgrade`
    - [ ] Use the key as default: `ln -s /home/dbupgrade/.ssh/id_dbupgrade /home/dbupgrade/.ssh/id_rsa`
    - [ ] Re-enable screen sharing if beneficial
- [ ] Start or resume the tmux session: `tmux a -t pg14 || tmux new -s pg14`
- [ ] Create an access token with at least `read_repository` scope for the next step
- [ ] Clone the repo:

  ```
  rm -rf ~/src && mkdir ~/src
  cd ~/src
  git clone https://gitlab.com/gitlab-com/gl-infra/db-migration.git
  cd db-migration && git checkout switchover-multiple-slots
  ```
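  If the clone prompts for credentials, the access token created above can be supplied inline (`oauth2` is the literal username GitLab accepts for token auth; `<TOKEN>` is a placeholder for your actual token value):

  ```
  git clone https://oauth2:<TOKEN>@gitlab.com/gitlab-com/gl-infra/db-migration.git
  ```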
- [ ] Ensure you have the prerequisites installed: `sudo apt install ansible`
- [ ] Ensure that Ansible can talk to all the hosts listed in the inventory file:

  ```
  cd ~/src/db-migration/pg-upgrade-logical
  ansible -e "ansible_ssh_private_key_file=/home/dbupgrade/.ssh/id_dbupgrade" -i inventory/db-benchmarking-ci.yml all -m ping
  ```

  You shouldn't see any failed hosts!
- [ ] Refresh your tmux command and shortcut knowledge: https://tmuxcheatsheet.com/. To leave tmux without stopping it, use the sequence Ctrl-b, Ctrl-z.
- [ ] On the current primary (`patroni-ci-2004-101-db-db-benchmarking.c.gitlab-db-benchmarking.internal`), proactively re-create the `pg_repack` extension to prevent the "Checking for incompatible polymorphic functions: fatal" pre-check failure:

  ```
  gitlab-psql \
    -Xc "DROP EXTENSION pg_repack;" \
    -Xc "CREATE EXTENSION pg_repack;"
  ```
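  A quick sanity check (our addition) that the extension came back cleanly:

  ```
  # Expect one row showing pg_repack and its installed version.
  gitlab-psql -Xc "select extname, extversion from pg_extension where extname = 'pg_repack';"
  ```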
- [ ] Generate load using pgbench
  - [ ] init:

    ```
    # init
    export PGPASSWORD=XXXXXXX
    pgbench \
      -h 127.0.0.1 -i -U gitlab-superuser gitlabhq_production \
      -s1000 --foreign-keys
    gitlab-psql -c 'alter table pgbench_history replica identity full'
    gitlab-psql -c 'create index concurrently on pgbench_accounts(bid)'
    gitlab-psql -c 'create index concurrently on pgbench_accounts(abalance)'
    gitlab-psql -c 'create index concurrently on pgbench_accounts(filler)'
    ```
  - [ ] run:

    ```
    # run for 10 hours, updates only
    export PGPASSWORD=XXXXXXX
    pgbench \
      -h pgbouncer-main-v14-01-db-db-benchmarking.c.gitlab-db-benchmarking.internal \
      -U gitlab-superuser gitlabhq_production \
      -c20 -j20 \
      -R1000 \
      -P60 -T36000 \
      -rn \
      -b simple-update \
      2>&1 | ts | sudo tee -a /var/opt/gitlab/postgresql/pgbench_pgbouncer_01_$(date +%Y%m%d).log
    ```
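  A quick sanity check (our addition) that the write load is actually flowing:

  ```
  # On the source primary: the count should grow between samples while the
  # simple-update workload runs (it inserts into pgbench_history).
  gitlab-psql -tAXc "select count(*) from pgbench_history;"
  sleep 60
  gitlab-psql -tAXc "select count(*) from pgbench_history;"
  ```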
#### UPGRADE

- [ ] Main action: On the console (`console-01-sv-db-benchmarking.c.gitlab-db-benchmarking.internal`), perform the physical-to-logical conversion and upgrade PostgreSQL on the Target cluster. Run the pre-check, resolve any pre-check errors first, and then run the Ansible playbook to upgrade the Target cluster (TODO: update the ansible-playbook step with multiple replication slots; add steps for monitoring as well):

  ```
  cd ~/src/db-migration/pg-upgrade-logical
  ansible-playbook \
    -e "ansible_ssh_private_key_file=/home/dbupgrade/.ssh/id_dbupgrade" \
    -i inventory/db-benchmarking-ci.yml \
    upgrade.yml -e "pg_old_version=12 pg_new_version=14 pg_publication_count=4" 2>&1 \
    | ts | tee -a ansible_upgrade_db-benchmarking_ci_$(date +%Y%m%d).log
  ```
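  Once the playbook finishes, a quick check (our addition) that all four subscriptions and their slots are in place:

  ```
  # On the target leader: expect logical_subscription_01..04, all enabled
  # (count per pg_publication_count=4).
  gitlab-psql -c "select subname, subenabled from pg_subscription order by subname;"

  # On the source primary: the matching logical slots should be active.
  gitlab-psql -c "select slot_name, slot_type, active from pg_replication_slots;"
  ```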
#### CHECK
- [ ] On the current primary (`patroni-ci-2004-101-db-db-benchmarking.c.gitlab-db-benchmarking.internal`), run in tmux to log pg_wait_sampling profiles for `walsender`:

  ```
  for i in {1..720}; do  # 720 iterations x 60 s sleep: max duration ~12 hours
    for num in {1..4}; do
      gitlab-psql -c "select event_type as wait_type, event as wait_event, sum(count) as of_events from pg_wait_sampling_profile where pid = (select pid from pg_stat_replication where application_name = 'logical_subscription_0$num') group by event_type, event order by of_events desc" \
        2>&1 | ts | sudo tee -a /var/opt/gitlab/db-benchmarking_$(date +%Y%m%d)_pg_wait_sampling_walsender_profile_0$num.log
    done
    sleep 60
  done
  ```
- [ ] On the target (`patroni-ci-v14-101-db-db-benchmarking.c.gitlab-db-benchmarking.internal`), run in tmux to log pg_wait_sampling profiles for the logical replication workers:

  ```
  for i in {1..720}; do  # 720 iterations x 60 s sleep: max duration ~12 hours
    for num in {1..4}; do
      gitlab-psql -c "select event_type as wait_type, event as wait_event, sum(count) as of_events from pg_wait_sampling_profile where pid = (select pid from pg_stat_activity where application_name = 'logical_subscription_0$num') group by event_type, event order by of_events desc" \
        2>&1 | ts | sudo tee -a /var/opt/gitlab/db-benchmarking_$(date +%Y%m%d)_pg_wait_sampling_logical_replication_worker_profile_0$num.log
    done
    sleep 60
  done
  ```
- [ ] On the current primary (`patroni-ci-2004-101-db-db-benchmarking.c.gitlab-db-benchmarking.internal`), run in tmux to log the logical replication details:

  ```
  for i in {1..10800}; do  # 10800 iterations x 1 s sleep: max duration ~3 hours
    gitlab-psql -tAXc "
      select
        now(),
        application_name,
        pg_wal_lsn_diff(pg_current_wal_lsn(), sent_lsn) as lag_pending_bytes,
        pg_wal_lsn_diff(sent_lsn, write_lsn) as lag_write_bytes,
        pg_wal_lsn_diff(write_lsn, flush_lsn) as lag_flush_bytes,
        pg_wal_lsn_diff(flush_lsn, replay_lsn) as lag_replay_bytes,
        pg_wal_lsn_diff(pg_current_wal_lsn(), replay_lsn) as lag_total_bytes,
        pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), replay_lsn)) as lag_total_bytes_hr
      from pg_stat_replication
      where application_name like 'logical_subscription%'
    " | sudo tee -a /var/opt/gitlab/db-benchmarking_$(date +%Y%m%d)_pg_stat_replication.log
    sleep 1
  done
  ```
- [ ] On the current primary (`patroni-ci-2004-101-db-db-benchmarking.c.gitlab-db-benchmarking.internal`), run in tmux to log CPU usage of the walsender processes that perform logical replication on the source, using top:

  ```
  for i in {1..10800}; do  # 10800 iterations x 1 s sleep: max duration ~3 hours
    for num in {1..4}; do
      top -b -n 1 -p $(gitlab-psql -tAXc "select pid from pg_stat_replication where application_name = 'logical_subscription_0$num'") \
        | tail -n 1 | ts | sudo tee -a /var/opt/gitlab/db-benchmarking_$(date +%Y%m%d)_walsender_cpu_usage_0$num.log
    done
    sleep 1
  done
  ```
- [ ] On the current primary (`patroni-ci-2004-101-db-db-benchmarking.c.gitlab-db-benchmarking.internal`), run in tmux to log disk I/O usage of the walsender processes that perform logical replication on the source, using iotop:

  ```
  for i in {1..10800}; do  # 10800 iterations x 1 s sleep: max duration ~3 hours
    for num in {1..4}; do
      sudo sh -c "iotop -p $(gitlab-psql -tAXc "select pid from pg_stat_replication where application_name = 'logical_subscription_0$num'") -k -b -t -n 1 >> /var/opt/gitlab/db-benchmarking_$(date +%Y%m%d)_walsender_iotop_0$num.log"
    done
    sleep 1
  done
  ```
- [ ] On the target (`patroni-ci-v14-101-db-db-benchmarking.c.gitlab-db-benchmarking.internal`), run in tmux to log CPU usage of the logical replication workers on the target, using top:

  ```
  for i in {1..10800}; do  # 10800 iterations x 1 s sleep: max duration ~3 hours
    ps -aux | grep postgres | grep "logical replication worker" | awk '{print $2}' \
      | xargs -n1 top -b -n 1 -p | ts | sudo tee -a /var/opt/gitlab/db-benchmarking_$(date +%Y%m%d)_logical_replication_worker_cpu_usage.log
    sleep 1
  done
  ```
- [ ] On the target (`patroni-ci-v14-101-db-db-benchmarking.c.gitlab-db-benchmarking.internal`), run in tmux to log disk I/O usage of the logical replication workers and other processes that write anything (checkpointer, etc.), using iotop:

  ```
  sudo iotop -okbt -n 10800 -d 1 2>&1 \
    | sudo tee /var/opt/gitlab/db-benchmarking_$(date +%Y%m%d)_iotop.log
  ```
- [ ] On the target (`patroni-ci-v14-101-db-db-benchmarking.c.gitlab-db-benchmarking.internal`), create the pg_wait_sampling extension immediately after the playbook completes:

  ```
  gitlab-psql -c "create extension if not exists pg_wait_sampling"
  ```
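  A quick check (our addition) that sampling is actually collecting:

  ```
  # Expect a non-zero, growing count once the extension is active.
  gitlab-psql -c "select count(*) from pg_wait_sampling_profile;"
  ```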
- [ ] Wait and observe the replication lag, the PostgreSQL log, and the checks above for 4 hours.
- [ ] On the target (`patroni-ci-v14-101-db-db-benchmarking.c.gitlab-db-benchmarking.internal`), collect the wait events after the test is completed:

  ```
  # Wait events for logical replication worker (history)
  #for num in {1..4}; do
  gitlab-psql -c "select * from pg_wait_sampling_history where pid in (select pid from pg_stat_activity where backend_type ~ 'logical .* worker');" \
    | sudo tee -a /var/opt/gitlab/db-benchmarking_$(date +%Y%m%d)_pg_wait_sampling_logical_replication_worker_history_all.log
  #done

  # Wait events - summary for logical replication worker
  #for num in {1..4}; do
  gitlab-psql -c "select event_type as wait_type, event as wait_event, sum(count) as of_events from pg_wait_sampling_profile where pid in (select pid from pg_stat_activity where backend_type ~ 'logical .* worker') group by event_type, event order by of_events desc" \
    | sudo tee -a /var/opt/gitlab/db-benchmarking_$(date +%Y%m%d)_pg_wait_sampling_logical_replication_worker_summary_all.log
  #done

  # Wait events - summary for Postgres
  gitlab-psql -c "select event_type as wait_type, event as wait_event, sum(count) as of_events from pg_wait_sampling_profile group by event_type, event order by of_events desc" \
    | sudo tee -a /var/opt/gitlab/db-benchmarking_$(date +%Y%m%d)_pg_wait_sampling_profile_postgres_summary.log
  ```
- [ ] On the target nodes (`patroni-ci-v14-[101..103]-db-db-benchmarking.c.gitlab-db-benchmarking.internal`), run pg_amcheck (in tmux, as a nohup command):
  - [ ] `sudo su - gitlab-psql` and start or resume the tmux session: `tmux a -t pg14 || tmux new -s pg14`
  - [ ] Run:

    ```
    cd /tmp; nohup time sudo -u gitlab-psql -- /usr/lib/postgresql/14/bin/pg_amcheck -p 5432 -h localhost -U gitlab-superuser -d gitlabhq_production -j 96 --verbose -P --heapallindexed 2>&1 | tee -a /var/tmp/pg_amcheck.$(date "+%F-%H-%M").log &
    ```
- [ ] On the target nodes (`patroni-ci-v14-[101..103]-db-db-benchmarking.c.gitlab-db-benchmarking.internal`), review the pg_amcheck log files created in the previous step to identify any data corruption errors:

  ```
  cat /var/tmp/pg_amcheck.*.log | egrep 'ERROR:|DETAIL:|LOCATION:'
  ```
#### SWITCHOVER TEST

- [ ] Run switchover in non-interactive mode:

  ```
  ansible-playbook \
    -e "ansible_ssh_private_key_file=/home/dbupgrade/.ssh/id_dbupgrade" \
    -i inventory/db-benchmarking-ci.yml \
    switchover.yml -e "non_interactive=true" 2>&1 \
    | ts | tee -a ansible_switchover_db-benchmarking_ci_$(date +%Y%m%d).log
  ```
- [ ] Check that traffic switched over (see the sketch below)
- [ ] Check that reverse logical replication is working (see the sketch below)
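  A minimal sketch for both checks above (our addition), assuming roles have flipped so the v14 leader is the new primary:

  ```
  # On the v14 leader (expected new primary): 'f' means it accepts writes.
  gitlab-psql -tAXc "select pg_is_in_recovery();"

  # On the v12 leader: reverse logical replication means it now runs a
  # subscription pulling from v14; received_lsn should advance over time.
  gitlab-psql -c "select subname, received_lsn, latest_end_lsn from pg_stat_subscription;"
  ```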
#### SWITCHOVER ROLLBACK TEST

- [ ] Run switchover rollback in non-interactive mode:

  ```
  ansible-playbook \
    -e "ansible_ssh_private_key_file=/home/dbupgrade/.ssh/id_dbupgrade" \
    -i inventory/db-benchmarking-ci.yml \
    switchover_rollback.yml -e "non_interactive=true" 2>&1 \
    | ts | tee -a ansible_switchover_db-benchmarking_ci_$(date +%Y%m%d).log
  ```
- [ ] Check that traffic switched back
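  A minimal spot check (our addition) that roles flipped back:

  ```
  # On the v12 leader (expected primary again): 'f' means it accepts writes.
  gitlab-psql -tAXc "select pg_is_in_recovery();"
  ```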
#### CLEANUP & FINISH

Once all checks are done, stop replication and drop the slots:

- [ ] On the target (`patroni-ci-v14-101-db-db-benchmarking.c.gitlab-db-benchmarking.internal`), remove the subscription(s) (TODO: add steps related to cleaning up multiple replication slots):

  ```
  for i in {1..4}; do
    sudo gitlab-psql \
      -Xc "alter subscription logical_subscription_0$i disable" \
      -Xc "alter subscription logical_subscription_0$i set (slot_name = none)" \
      -Xc "drop subscription logical_subscription_0$i"
  done
  ```
- [ ] On the source (current primary: `patroni-ci-2004-101-db-db-benchmarking.c.gitlab-db-benchmarking.internal`), remove the publication(s) and replication slot(s), if they exist (TODO: add steps related to cleaning up multiple replication slots):

  ```
  for i in {1..4}; do
    sudo gitlab-psql \
      -Xc "drop publication logical_replication_0$i" \
      -Xc "select pg_drop_replication_slot(slot_name) from pg_replication_slots where slot_name = 'logical_replication_slot_0$i'" \
      -Xc "drop table if exists test_publication_0$i"
  done
  ```
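  A final check (our addition) that nothing was left behind on either side:

  ```
  # On the source: expect no logical_replication_* publications or slots.
  gitlab-psql -c "select pubname from pg_publication;"
  gitlab-psql -c "select slot_name, slot_type, active from pg_replication_slots;"

  # On the target: expect no logical_subscription_* subscriptions.
  gitlab-psql -c "select subname from pg_subscription;"
  ```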
- [ ] Set label ~"change::complete": `/label ~change::complete`
## Rollback

### Rollback steps - steps to be taken in the event of a need to rollback this change
Estimated Time to Complete (mins) - 3 min
- [ ] Remove the subscription(s) on the target (Standby Cluster leader) (TODO: add steps related to cleaning up multiple replication slots):

  ```
  for i in {1..4}; do
    sudo gitlab-psql \
      -Xc "alter subscription logical_subscription_0$i disable" \
      -Xc "alter subscription logical_subscription_0$i set (slot_name = none)" \
      -Xc "drop subscription logical_subscription_0$i"
  done
  ```
- [ ] Remove the publication(s) on the source (CI Cluster leader) and drop the slot(s) (TODO: add steps related to cleaning up multiple replication slots):

  ```
  for i in {1..4}; do
    sudo gitlab-psql \
      -Xc "drop publication logical_replication_0$i" \
      -Xc "select pg_drop_replication_slot(slot_name) from pg_replication_slots where slot_name = 'logical_replication_slot_0$i'" \
      -Xc "drop table if exists test_publication_0$i"
  done
  ```
- [ ] Stop any observation activities running in tmux, if any (such as walsender CPU usage logging)
- [ ] Set label ~"change::aborted": `/label ~change::aborted`
## Monitoring

### Key metrics to observe
- pg14-upgrade dashboard: https://dashboards.gitlab.net/d/pg14-upgrade-main-pg14-upgrade/pg14-upgrade-postgres-upgrade-using-logical?orgId=1 (since it is not configured for db-benchmarking right now, we will use scripts to monitor; see the stand-in sketch after this list)
- Metric: patroni Service Apdex - https://dashboards.gitlab.net/d/patroni-ci/patroni-overview?orgId=1
- Metric: CPU utilization - https://dashboards.gitlab.net/d/000000144/postgresql-overview?orgId=1&from=now-6h&to=now&viewPanel=13
- Metric: database load - https://dashboards.gitlab.net/d/000000144/postgresql-overview?orgId=1&from=now-3h&to=now&viewPanel=9&var-prometheus=Global&var-environment=gprd&var-type=patroni
- Metric: Disk utilization
  - Location: https://dashboards.gitlab.net/d/000000144/postgresql-overview?orgId=1&viewPanel=10&var-prometheus=Global&var-environment=gprd&var-type=patroni
  - What changes to this metric should prompt a rollback: if disk allocation starts growing linearly, we need to roll back.
- ~~Metric for WALs archiving rates: https://thanos.gitlab.net/graph?g0.expr=rate(walg_backup_completed_count%7Bfqdn%3D~%22patroni-ci-.*-gprd.c.gitlab-production.internal%22%7D%5B1m%5D)&g0.tab=0&g0.stacked=0&g0.range_input=6h&g0.max_source_resolution=0s&g0.deduplicate=1&g0.partial_response=0&g0.store_matches=%5B%5D~~
- ~~New pg-upgrade dashboard https://dashboards.gitlab.net/d/pg14-upgrade-main-pg14-upgrade/pg14-upgrade-postgres-upgrade-using-logical?orgId=1&from=now-3h&to=now&var-PROMETHEUS_DS=Global&var-environment=gprd&var-cluster=ci~~
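Since the pg14-upgrade dashboard is not wired up for db-benchmarking, a simple script-based stand-in for the disk-utilization signal (our addition; the data-directory path is assumed):

```
# Log data-directory disk usage once a minute; a steady linear climb is the
# rollback trigger described above.
for i in {1..240}; do
  df -h /var/opt/gitlab | tail -n 1 | ts | sudo tee -a /var/opt/gitlab/db-benchmarking_$(date +%Y%m%d)_disk_usage.log
  sleep 60
done
```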
## Change Reviewer checklist

- [ ] Check if the following applies:
  - The scheduled day and time of execution of the change is appropriate.
  - The change plan is technically accurate.
  - The change plan includes estimated timing values based on previous testing.
  - The change plan includes a viable rollback plan.
  - The specified metrics/monitoring dashboards provide sufficient visibility for the change.
- [ ] Check if the following applies:
  - The complexity of the plan is appropriate for the corresponding risk of the change (i.e. the plan contains clear details).
  - The change plan includes success measures for all steps/milestones during the execution.
  - The change adequately minimizes risk within the environment/service.
  - The performance implications of executing the change are well-understood and documented.
  - The specified metrics/monitoring dashboards provide sufficient visibility for the change.
    - If not, is it possible (or necessary) to make changes to observability platforms for added visibility?
  - The change has a primary and secondary SRE with knowledge of the details available during the change window.
  - The labels ~"blocks deployments" and/or ~"blocks feature-flags" are applied as necessary.
## Change Technician checklist

- [ ] Check if all items below are complete:
  - The change plan is technically accurate.
  - This Change Issue is linked to the appropriate Issue and/or Epic.
  - A dry-run has been conducted and results noted in a comment on this issue.
  - The change execution window respects the Production Change Lock periods.
  - The change has been tested in staging and results noted in a comment on this issue.
  - For C1 and C2 change issues, the change event is added to the GitLab Production calendar.
  - For C1 and C2 change issues, the SRE on-call has been informed prior to the change being rolled out. (In the #production channel, mention `@sre-oncall` and this issue and await their acknowledgement.)
  - For C1 and C2 change issues, the SRE on-call provided approval with the ~eoc_approved label on the issue.
  - For C1 and C2 change issues, the Infrastructure Manager provided approval with the ~manager_approved label on the issue.
  - Release managers have been informed (if needed; cases include DB changes) prior to the change being rolled out. (In the #production channel, mention `@release-managers` and this issue and await their acknowledgment.)
  - There are currently no active incidents that are severity1 or severity2.
  - If the change involves doing maintenance on a database host, an appropriate silence targeting the host(s) should be added for the duration of the change.