[GPRD][2023-08-19 15:00 UTC] Execute PG14 Upgrade, validate logical replication lag and execute performance tests in the Main production cluster
Production Change
Change Summary
As discussed in https://gitlab.com/gitlab-com/gl-infra/reliability/-/issues/17594#note_1329882796, we want to test most steps of the upcoming PostgreSQL 14 upgrade. We can test the full procedure of creating a new cluster and upgrading it while receiving production traffic. The only step we are not testing is the switchover to the new cluster.
We're planning to do it on 2023-08-19, during the low-activity time.
The goal of this change is to evaluate the performance impacts and how robust the process is. We will execute the procedure described in the action steps and observe the behavior.
In addition, we will also be doing performance tests to evaluate PG14's performance against PG12.
Prior to this, we performed the identical CR #15759 (closed) for the CI cluster, and also performed #10855 (closed), #8611 (closed), and #8290 (closed), tests similar in nature on the Main and Registry clusters but without the now-developed upgrade steps.
Reference:
https://gitlab.com/gitlab-com/gl-infra/reliability/-/issues/23986
Change Details
- Services Impacted - Database
- Change Technician - @bshah11 @NikolayS
- Change Reviewer - @rhenchen.gitlab @NikolayS @alexander-sosna @vitabaks
- Time tracking - 10 hours
- Downtime Component - 0
Detailed steps for the change
Change Steps - steps to take to execute the change
Estimated Time to Complete (mins) - 15 min
REQUIREMENTS

- [GPRD] The rebuild of the `gprd-patroni-main-v14` cluster for this test was successfully executed: https://gitlab.com/gitlab-com/gl-infra/reliability/-/issues/24283
- Validate that the test cluster is in sync, lagging no more than a few seconds behind the source cluster:

  ```shell
  # On patroni-main-v14-101-db-gprd.c.gitlab-production.internal:
  gitlab-psql -c 'SELECT now() - pg_last_xact_replay_timestamp();'
  ```
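The query above returns an interval such as `00:00:02.5`. As a quick sanity check before proceeding, a small helper (hypothetical, not part of the runbook; the 10-second threshold is an assumption) can convert that interval to seconds and compare it against a threshold:

```shell
#!/bin/sh
# Convert an HH:MM:SS(.fraction) interval, as returned by
# `now() - pg_last_xact_replay_timestamp()`, into whole seconds.
interval_to_seconds() {
  echo "$1" | awk -F: '{ printf "%d\n", $1 * 3600 + $2 * 60 + $3 }'
}

# Hypothetical threshold: treat anything over 10 s of lag as "not in sync".
lag_ok() {
  [ "$(interval_to_seconds "$1")" -le 10 ]
}

interval_to_seconds "00:00:02.5"        # -> 2
lag_ok "00:00:02.5" && echo "in sync"   # prints "in sync"
lag_ok "00:01:30" || echo "lagging"     # prints "lagging"
```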
PREPARE

- Set label change::in-progress: `/label ~change::in-progress`
- Disable the database `async_index_creation` and `async_index_operations` jobs that run hourly on weekends. Disable the feature flags by typing the following in #production:

  ```
  /chatops run feature set database_async_index_operations false
  /chatops run feature set database_async_index_creation false
  ```
- Disable background migrations. Disable the feature flags by typing the following in #production:

  ```
  /chatops run feature set execute_batched_migrations_on_schedule false
  /chatops run feature set execute_background_migrations false
  ```
- Get the console VM ready for action:
  - SSH to the console VM: `ssh -A console-01-sv-gprd.c.gitlab-production.internal`
  - Configure the dbupgrade user:
    - Disable screen sharing to reduce the risk of exposing the private key.
    - Change to the dbupgrade user: `sudo su - dbupgrade`
    - Copy the dbupgrade user's private key from 1Password to `~/.ssh/id_dbupgrade`
    - Restrict its permissions: `chmod 600 ~/.ssh/id_dbupgrade`
    - Use the key as the default: `ln -s /home/dbupgrade/.ssh/id_dbupgrade /home/dbupgrade/.ssh/id_rsa`
    - Re-enable screen sharing if beneficial.
- Start or resume the tmux session: `tmux a -t pg14 || tmux new -s pg14`
- Create an access token with at least the `read_repository` scope for the next step.
- Clone the repo:

  ```shell
  rm -rf ~/src && mkdir ~/src
  cd ~/src
  git clone https://gitlab.com/gitlab-com/gl-infra/db-migration.git
  ```
- Ensure you have the prerequisites installed: `sudo apt install ansible`
- Ensure that Ansible can talk to all the hosts listed in the inventory file:

  ```shell
  cd ~/src/db-migration/pg-upgrade-logical
  ansible -e "ansible_ssh_private_key_file=/home/dbupgrade/.ssh/id_dbupgrade" -i inventory/gprd-main.yml all -m ping
  ```

  You shouldn't see any failed hosts!
- Refresh your tmux command and shortcut knowledge: https://tmuxcheatsheet.com/. To leave tmux without stopping it, use the sequence Ctrl-b, Ctrl-z.
UPGRADE

- Main action: on the console (`console-01-sv-gprd.c.gitlab-production.internal`), perform the physical-to-logical conversion and upgrade PostgreSQL on the target cluster. Run the pre-checks and resolve any pre-check errors first, then run the Ansible playbook to upgrade the target cluster:

  ```shell
  cd ~/src/db-migration/pg-upgrade-logical
  ansible-playbook \
    -e "ansible_ssh_private_key_file=/home/dbupgrade/.ssh/id_dbupgrade" \
    --tags "pre-checks, packages, upgrade-check" \
    -i inventory/gprd-main.yml \
    upgrade.yml -e "pg_old_version=12 pg_new_version=14" 2>&1 \
    | ts | tee -a ansible_upgrade_gprd_main_$(date +%Y%m%d).log

  ansible-playbook \
    -e "ansible_ssh_private_key_file=/home/dbupgrade/.ssh/id_dbupgrade" \
    -i inventory/gprd-main.yml \
    upgrade.yml -e "pg_old_version=12 pg_new_version=14" 2>&1 \
    | ts | tee -a ansible_upgrade_gprd_main_$(date +%Y%m%d).log
  ```
CHECK

- On the current primary (`patroni-main-2004-101-db-gprd.c.gitlab-production.internal`), run in tmux to log pg_wait_sampling profiles for walsender:

  ```shell
  for i in {1..720}; do  # max duration 4 hours
    for num in {1..4}; do
      gitlab-psql -c "select event_type as wait_type, event as wait_event, sum(count) as of_events from pg_wait_sampling_profile where pid = (select pid from pg_stat_replication where application_name = 'logical_subscription_0$num') group by event_type, event order by of_events desc" \
        2>&1 | ts | sudo tee -a /var/opt/gitlab/production_upgrade_test_$(date +%Y%m%d)_pg_wait_sampling_walsender_profile_0$num.log
    done
    sleep 60
  done
  ```
- On the target (`patroni-main-v14-101-db-gprd.c.gitlab-production.internal`), run in tmux to log pg_wait_sampling profiles for the logical replication worker:

  ```shell
  for i in {1..720}; do  # max duration 4 hours
    for num in {1..4}; do
      gitlab-psql -c "select event_type as wait_type, event as wait_event, sum(count) as of_events from pg_wait_sampling_profile where pid = (select pid from pg_stat_activity where application_name = 'logical_subscription_0$num') group by event_type, event order by of_events desc" \
        2>&1 | ts | sudo tee -a /var/opt/gitlab/production_upgrade_test_$(date +%Y%m%d)_pg_wait_sampling_logical_replication_worker_profile_0$num.log
    done
    sleep 60
  done
  ```
- On the current primary (`patroni-main-2004-101-db-gprd.c.gitlab-production.internal`), run in tmux to log the logical replication details:

  ```shell
  for i in {1..10800}; do  # max duration 4 hours
    gitlab-psql -tAXc "
      select
        now(),
        application_name,
        pg_wal_lsn_diff(pg_current_wal_lsn(), sent_lsn) as lag_pending_bytes,
        pg_wal_lsn_diff(sent_lsn, write_lsn) as lag_write_bytes,
        pg_wal_lsn_diff(write_lsn, flush_lsn) as lag_flush_bytes,
        pg_wal_lsn_diff(flush_lsn, replay_lsn) as lag_replay_bytes,
        pg_wal_lsn_diff(pg_current_wal_lsn(), replay_lsn) as lag_total_bytes
      from pg_stat_replication
      where application_name like 'logical_subscription%'
    " | sudo tee -a /var/opt/gitlab/production_upgrade_test_$(date +%Y%m%d)_pg_stat_replication.log
    sleep 1
  done
  ```
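When interpreting the logged values, it helps to know that `pg_wal_lsn_diff()` returns the distance between two WAL positions in bytes, and that an LSN like `16/B374D848` encodes a 64-bit byte position: the part before the slash is the high 32 bits, in hex. A rough client-side equivalent (a sketch for illustration, not used by the runbook) looks like:

```shell
#!/bin/sh
# Convert an LSN of the form HI/LO (both hex) into an absolute byte position:
# position = HI * 2^32 + LO
lsn_to_bytes() {
  hi=$(echo "$1" | cut -d/ -f1)
  lo=$(echo "$1" | cut -d/ -f2)
  echo $(( 0x$hi * 4294967296 + 0x$lo ))
}

# Byte lag between two LSNs, like pg_wal_lsn_diff(a, b) = a - b.
lsn_diff() {
  echo $(( $(lsn_to_bytes "$1") - $(lsn_to_bytes "$2") ))
}

lsn_diff "16/B374D848" "16/B374D000"   # -> 2120 bytes
```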
- On the current primary (`patroni-main-2004-101-db-gprd.c.gitlab-production.internal`), run in tmux to log CPU usage of the "walsender" processes that perform logical replication on the source, using top:

  ```shell
  for i in {1..10800}; do  # max duration 4 hours
    for num in {1..4}; do
      top -b -n 1 -p $(
        gitlab-psql -tAXc "select pid from pg_stat_replication where application_name = 'logical_subscription_0$num'"
      ) | tail -n 1 | ts | sudo tee -a /var/opt/gitlab/production_upgrade_test_$(date +%Y%m%d)_walsender_cpu_usage_0$num.log
    done
    sleep 1
  done
  ```
- On the current primary (`patroni-main-2004-101-db-gprd.c.gitlab-production.internal`), run in tmux to log disk I/O usage of the "walsender" processes that perform logical replication on the source, using iotop:

  ```shell
  for i in {1..10800}; do  # max duration 4 hours
    for num in {1..4}; do
      sudo sh -c "iotop -p $(
        gitlab-psql -tAXc "select pid from pg_stat_replication where application_name = 'logical_subscription_0$num'"
      ) -k -b -t -n 1 >> /var/opt/gitlab/production_upgrade_test_$(date +%Y%m%d)_walsender_iotop_0$num.log"
    done
    sleep 1
  done
  ```
- On the target (`patroni-main-v14-101-db-gprd.c.gitlab-production.internal`), run in tmux to log CPU usage of the "logical replication worker" processes that perform logical replication on the target, using top:

  ```shell
  for i in {1..10800}; do  # max duration 4 hours
    ps -aux | grep postgres | grep "logical replication worker" | awk '{print $2}' \
      | xargs -n1 top -b -n 1 -p | ts | sudo tee -a /var/opt/gitlab/production_upgrade_test_$(date +%Y%m%d)_logical_replication_worker_cpu_usage.log
    sleep 1
  done
  ```
- On the target (`patroni-main-v14-101-db-gprd.c.gitlab-production.internal`), run in tmux to log disk I/O usage of the "logical replication worker" and other processes that write anything (checkpointer, etc.), using iotop:

  ```shell
  sudo iotop -okbt -n 10800 -d 1 2>&1 \
    | sudo tee /var/opt/gitlab/production_upgrade_test_$(date +%Y%m%d)_iotop.log
  ```
- On the target (`patroni-main-v14-101-db-gprd.c.gitlab-production.internal`), create the pg_wait_sampling extension immediately after completing the playbook:

  ```shell
  gitlab-psql -c "create extension if not exists pg_wait_sampling"
  ```
- Wait and observe the replication lag, the PostgreSQL log, and the checks above for 4 hours.
- On the target (`patroni-main-v14-101-db-gprd.c.gitlab-production.internal`), collect the wait events after the test is completed:

  ```shell
  # Wait events for logical replication worker (history)
  #for num in {1..4}; do
  gitlab-psql -c "select * from pg_wait_sampling_history where pid in (select pid from pg_stat_activity where backend_type ~ 'logical .* worker');" \
    | sudo tee -a /var/opt/gitlab/production_upgrade_test_$(date +%Y%m%d)_pg_wait_sampling_logical_replication_worker_history_all.log
  #done

  # Wait events - summary for logical replication worker
  #for num in {1..4}; do
  gitlab-psql -c "select event_type as wait_type, event as wait_event, sum(count) as of_events from pg_wait_sampling_profile where pid in (select pid from pg_stat_activity where backend_type ~ 'logical .* worker') group by event_type, event order by of_events desc" \
    | sudo tee -a /var/opt/gitlab/production_upgrade_test_$(date +%Y%m%d)_pg_wait_sampling_logical_replication_worker_summary_all.log
  #done

  # Wait events - summary for Postgres
  gitlab-psql -c "select event_type as wait_type, event as wait_event, sum(count) as of_events from pg_wait_sampling_profile group by event_type, event order by of_events desc" \
    | sudo tee -a /var/opt/gitlab/production_upgrade_test_$(date +%Y%m%d)_pg_wait_sampling_profile_postgres_summary.log
  ```
- On the target nodes (`patroni-main-v14-[102..108]-db-gprd.c.gitlab-production.internal`), run pg_amcheck (in tmux, as a nohup command):

  ```shell
  sudo su - gitlab-psql
  # Start or resume the tmux session:
  tmux a -t pg14 || tmux new -s pg14
  cd /tmp
  nohup time /usr/lib/postgresql/14/bin/pg_amcheck -p 5432 -h localhost -U gitlab-superuser -d gitlabhq_production -j 96 --verbose -P --heapallindexed 2>&1 | tee -a /var/tmp/pg_amcheck.$(date "+%F-%H-%M").log &
  ```
- On the target nodes (`patroni-main-v14-[102..108]-db-gprd.c.gitlab-production.internal`), review the pg_amcheck log files created in the previous step to find any data corruption errors:

  ```shell
  egrep 'ERROR:|DETAIL:|LOCATION:' /var/tmp/pg_amcheck.*.log
  ```
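A clean run should produce no ERROR:/DETAIL:/LOCATION: lines, so the review can be scripted to flag any occurrence. The sketch below demonstrates the idea against a fabricated sample log (the log contents and path are invented for illustration, not real pg_amcheck output):

```shell
#!/bin/sh
# Fabricated sample log, for illustration only.
cat > /tmp/pg_amcheck.sample.log <<'EOF'
pg_amcheck: checking heap table "gitlabhq_production.public.projects"
ERROR:  sample corruption marker for demonstration
pg_amcheck: checking index "gitlabhq_production.public.projects_pkey"
EOF

# Count lines carrying corruption markers and report accordingly.
hits=$(grep -Ec 'ERROR:|DETAIL:|LOCATION:' /tmp/pg_amcheck.sample.log)
if [ "$hits" -gt 0 ]; then
  echo "corruption markers found: $hits"   # prints "corruption markers found: 1"
else
  echo "log is clean"
fi
```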
CLEANUP & FINISH

Once all checks are done, stop replication and drop the slot:

- Re-enable the database `async_index_creation` and `async_index_operations` jobs that run hourly on weekends. Enable the feature flags by typing the following in #production:

  ```
  /chatops run feature set database_async_index_operations true
  /chatops run feature set database_async_index_creation true
  ```
- Enable background migrations. Enable the feature flags by typing the following in #production:

  ```
  /chatops run feature set execute_batched_migrations_on_schedule true
  /chatops run feature set execute_background_migrations true
  ```
- On the target (`patroni-main-v14-101-db-gprd.c.gitlab-production.internal`), remove the subscription(s):

  ```shell
  sudo gitlab-psql \
    -Xc "alter subscription logical_subscription disable" \
    -Xc "alter subscription logical_subscription set (slot_name = none)" \
    -Xc "drop subscription logical_subscription"
  ```
- On the source (current primary: `patroni-main-2004-101-db-gprd.c.gitlab-production.internal`), remove the publication(s) and replication slot(s), if they exist:

  ```shell
  sudo gitlab-psql \
    -Xc "drop publication logical_replication" \
    -Xc "select pg_drop_replication_slot('logical_replication_slot') from pg_replication_slots where slot_name = 'logical_replication_slot'" \
    -Xc "drop table if exists test_replication"
  ```
- Set label change::complete: `/label ~change::complete`
Rollback
Rollback steps - steps to be taken in the event of a need to rollback this change
Estimated Time to Complete (mins) - 3 min
- Remove the subscription(s) on the target (standby cluster leader):

  ```shell
  sudo gitlab-psql \
    -Xc "alter subscription logical_subscription disable" \
    -Xc "alter subscription logical_subscription set (slot_name = none)" \
    -Xc "drop subscription logical_subscription"
  ```
- Remove the publication(s) on the source (Main cluster leader) and drop the slot(s):

  ```shell
  sudo gitlab-psql \
    -Xc "drop publication logical_replication" \
    -Xc "select pg_drop_replication_slot('logical_replication_slot') from pg_replication_slots where slot_name = 'logical_replication_slot'" \
    -Xc "drop table if exists test_replication"
  ```
- Stop any observation activities still running in tmux (such as walsender CPU usage logging).
- Set label change::aborted: `/label ~change::aborted`
Monitoring
Key metrics to observe
- Metric patroni Service Apdex: https://dashboards.gitlab.net/d/patroni-main/patroni-overview?orgId=1
- Metric CPU utilization: https://dashboards.gitlab.net/d/000000144/postgresql-overview?orgId=1&from=now-6h&to=now&viewPanel=13
- Metric for database load: https://dashboards.gitlab.net/d/000000144/postgresql-overview?orgId=1&from=now-3h&to=now&viewPanel=9&var-prometheus=Global&var-environment=gprd&var-type=patroni
- Metric: Disk utilization
- Location: https://dashboards.gitlab.net/d/000000144/postgresql-overview?orgId=1&viewPanel=10&var-prometheus=Global&var-environment=gprd&var-type=patroni
  - What changes to this metric should prompt a rollback: if disk allocation starts growing linearly, we need to roll back.
- Metric for WALs archiving rates: https://thanos.gitlab.net/graph?g0.expr=rate(walg_backup_completed_count%7Bfqdn%3D~%22patroni-main-.*-gprd.c.gitlab-production.internal%22%7D%5B1m%5D)&g0.tab=0&g0.stacked=0&g0.range_input=6h&g0.max_source_resolution=0s&g0.deduplicate=1&g0.partial_response=0&g0.store_matches=%5B%5D
- New pg-upgrade dashboard https://dashboards.gitlab.net/d/pg14-upgrade-main-pg14-upgrade/pg14-upgrade-postgres-upgrade-using-logical?orgId=1&from=now-3h&to=now&var-PROMETHEUS_DS=Global&var-environment=gprd&var-cluster=main
Change Reviewer checklist
Check if the following applies:
- The scheduled day and time of execution of the change is appropriate.
- The change plan is technically accurate.
- The change plan includes estimated timing values based on previous testing.
- The change plan includes a viable rollback plan.
- The specified metrics/monitoring dashboards provide sufficient visibility for the change.
Check if the following applies:
- The complexity of the plan is appropriate for the corresponding risk of the change (i.e. the plan contains clear details).
- The change plan includes success measures for all steps/milestones during the execution.
- The change adequately minimizes risk within the environment/service.
- The performance implications of executing the change are well-understood and documented.
- The specified metrics/monitoring dashboards provide sufficient visibility for the change.
- If not, is it possible (or necessary) to make changes to observability platforms for added visibility?
- The change has a primary and secondary SRE with knowledge of the details available during the change window.
- The labels blocks deployments and/or blocks feature-flags are applied as necessary
Change Technician checklist
Check if all items below are complete:
- The change plan is technically accurate.
- This Change Issue is linked to the appropriate Issue and/or Epic
- A dry-run has been conducted and results noted in a comment on this issue.
- The change execution window respects the Production Change Lock periods.
- Change has been tested in staging and results noted in a comment on this issue.
- For C1 and C2 change issues, the change event is added to the GitLab Production calendar.
- For C1 and C2 change issues, the SRE on-call has been informed prior to the change being rolled out. (In the #production channel, mention @sre-oncall and this issue and await their acknowledgement.)
- For C1 and C2 change issues, the SRE on-call provided approval with the eoc_approved label on the issue.
- For C1 and C2 change issues, the Infrastructure Manager provided approval with the manager_approved label on the issue.
- Release managers have been informed prior to the change being rolled out, if needed (cases include DB changes). (In the #production channel, mention @release-managers and this issue and await their acknowledgment.)
- There are currently no active incidents that are severity1 or severity2.
- If the change involves doing maintenance on a database host, an appropriate silence targeting the host(s) should be added for the duration of the change.