# [GPRD] Execute logical replication and upgrade test in the main production cluster

## Production Change

### Change Summary
As discussed in https://gitlab.com/gitlab-com/gl-infra/reliability/-/issues/17594#note_1329882796, we want to test most steps of the upcoming PostgreSQL 14 upgrade. We can test the full procedure of creating a new cluster and upgrading it while it receives production traffic. The only step we are not testing is the switchover to the new cluster.

We're planning to do it on Saturday, April 1st, during a low-activity period (the same time of the week when we'll run the actual upgrade in the future).

During this test we need a "DDL silence" period – no deployments unless urgent – because certain DDL statements (such as `ALTER TABLE`) would affect the test and force us to stop it. We therefore need a temporary deployment freeze.
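Logical replication does not replicate DDL, so a schema change like `ALTER TABLE` on the source would make the subscriber's schema diverge mid-test. One way to verify the silence window is being respected is to look for in-flight DDL on the primary. This is a hedged sketch; the `ddl_silence_query` helper is hypothetical and not part of the change plan:

```shell
# Hypothetical helper: emit a query listing active sessions whose statement
# looks like DDL (ALTER/CREATE/DROP), to confirm the "DDL silence" holds.
ddl_silence_query() {
  cat <<'SQL'
select pid, usename, query_start, left(query, 120) as query
from pg_stat_activity
where state = 'active'
  and query ~* '^\s*(alter|create|drop)\s';
SQL
}

# Usage on the primary (expected to return zero rows during the window):
#   ddl_silence_query | gitlab-psql
```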
The goal of this change is to evaluate the performance impacts and how robust the process is. We will execute the procedure described in the action steps and observe the behavior.
Prior to this we executed #8290 (closed), a test similar in nature but without the now-developed upgrade steps.
### Change Details
- Services Impacted - Database
- Change Technician - @alexander-sosna @NikolayS
- Change Reviewer - @NikolayS @bshah11 @vitabaks @rhenchen.gitlab
- Time tracking - 3 hours
- Downtime Component - 0
### Detailed steps for the change

#### Change Steps - steps to take to execute the change

Estimated Time to Complete (mins) - 15
- [ ] **PREPARE**
  - [ ] Set label ~change::in-progress: `/label ~change::in-progress`
  - [ ] Coordinate the "DDL silence" period – 4 hours of not running DB migrations in production against the main cluster database.
  - [ ] On the jump host (`ssh -A deploy-01-sv-gprd.c.gitlab-production.internal`), get the Ansible playbook `upgrade.yml` by cloning the repository:

    ```shell
    rm -rf db-migration  # delete the playbooks directory if it's already cloned
    git clone git@gitlab.com:gitlab-com/gl-infra/db-migration.git
    ```

  - [ ] Go to the working directory:

    ```shell
    cd db-migration/pg-upgrade-logical
    ```

  - [ ] Test the connection to the hosts before running the playbook:

    ```shell
    ansible -i inventory/gprd-main.yml all -m ping
    ```
  - [ ] Dump the definitions of the 4 postgres_exporter views, to recreate their state later:

    ```shell
    /usr/lib/postgresql/12/bin/pg_dump \
      gitlabhq_production -h localhost -U gitlab-superuser \
      -t postgres_exporter.pg_stat_activity \
      -t postgres_exporter.pg_stat_replication \
      -t postgres_exporter.pg_stat_statements \
      -t postgres_exporter.pg_stat_wal_receiver \
      > production_issue_8611_postgres_exporter_4_views_existing_def.sql
    ```
  - [ ] Redefine the 4 postgres_exporter views to have non-conflicting definitions:

    ```sql
    BEGIN;

    DROP VIEW postgres_exporter.pg_stat_activity;
    DROP VIEW postgres_exporter.pg_stat_replication;
    DROP VIEW postgres_exporter.pg_stat_statements;
    DROP VIEW postgres_exporter.pg_stat_wal_receiver;

    CREATE VIEW postgres_exporter.pg_stat_activity AS
    SELECT f_select_pg_stat_activity.datid, f_select_pg_stat_activity.datname,
           f_select_pg_stat_activity.pid, f_select_pg_stat_activity.usesysid,
           f_select_pg_stat_activity.usename, f_select_pg_stat_activity.application_name,
           f_select_pg_stat_activity.client_addr, f_select_pg_stat_activity.client_hostname,
           f_select_pg_stat_activity.client_port, f_select_pg_stat_activity.backend_start,
           f_select_pg_stat_activity.xact_start, f_select_pg_stat_activity.query_start,
           f_select_pg_stat_activity.state_change, f_select_pg_stat_activity.wait_event_type,
           f_select_pg_stat_activity.wait_event, f_select_pg_stat_activity.state,
           f_select_pg_stat_activity.backend_xid, f_select_pg_stat_activity.backend_xmin,
           f_select_pg_stat_activity.query
    FROM postgres_exporter.f_select_pg_stat_activity();

    CREATE VIEW postgres_exporter.pg_stat_replication AS
    SELECT f_select_pg_stat_replication.pid, f_select_pg_stat_replication.usesysid,
           f_select_pg_stat_replication.usename, f_select_pg_stat_replication.application_name,
           f_select_pg_stat_replication.client_addr, f_select_pg_stat_replication.client_hostname,
           f_select_pg_stat_replication.client_port, f_select_pg_stat_replication.backend_start,
           f_select_pg_stat_replication.backend_xmin, f_select_pg_stat_replication.state,
           f_select_pg_stat_replication.sent_lsn, f_select_pg_stat_replication.write_lsn,
           f_select_pg_stat_replication.flush_lsn, f_select_pg_stat_replication.replay_lsn,
           f_select_pg_stat_replication.write_lag, f_select_pg_stat_replication.flush_lag,
           f_select_pg_stat_replication.replay_lag, f_select_pg_stat_replication.sync_priority,
           f_select_pg_stat_replication.sync_state
    FROM postgres_exporter.f_select_pg_stat_replication();

    CREATE VIEW postgres_exporter.pg_stat_statements AS
    SELECT f_select_pg_stat_statements.userid, f_select_pg_stat_statements.dbid,
           f_select_pg_stat_statements.queryid, f_select_pg_stat_statements.query,
           f_select_pg_stat_statements.calls, f_select_pg_stat_statements.total_time,
           f_select_pg_stat_statements.min_time, f_select_pg_stat_statements.max_time,
           f_select_pg_stat_statements.mean_time, f_select_pg_stat_statements.stddev_time,
           f_select_pg_stat_statements.rows, f_select_pg_stat_statements.shared_blks_hit,
           f_select_pg_stat_statements.shared_blks_read, f_select_pg_stat_statements.shared_blks_dirtied,
           f_select_pg_stat_statements.shared_blks_written, f_select_pg_stat_statements.local_blks_hit,
           f_select_pg_stat_statements.local_blks_read, f_select_pg_stat_statements.local_blks_dirtied,
           f_select_pg_stat_statements.local_blks_written, f_select_pg_stat_statements.temp_blks_read,
           f_select_pg_stat_statements.temp_blks_written, f_select_pg_stat_statements.blk_read_time,
           f_select_pg_stat_statements.blk_write_time
    FROM postgres_exporter.f_select_pg_stat_statements(false);

    CREATE VIEW postgres_exporter.pg_stat_wal_receiver AS
    SELECT f_pg_stat_wal_receiver.pid, f_pg_stat_wal_receiver.status,
           f_pg_stat_wal_receiver.receive_start_lsn, f_pg_stat_wal_receiver.receive_start_tli,
           f_pg_stat_wal_receiver.received_lsn, f_pg_stat_wal_receiver.received_tli,
           f_pg_stat_wal_receiver.last_msg_send_time, f_pg_stat_wal_receiver.last_msg_receipt_time,
           f_pg_stat_wal_receiver.latest_end_lsn, f_pg_stat_wal_receiver.latest_end_time,
           f_pg_stat_wal_receiver.slot_name, f_pg_stat_wal_receiver.sender_host,
           f_pg_stat_wal_receiver.sender_port, f_pg_stat_wal_receiver.conninfo
    FROM public.f_pg_stat_wal_receiver();

    ALTER TABLE postgres_exporter.pg_stat_activity OWNER TO "gitlab-psql";
    ALTER TABLE postgres_exporter.pg_stat_replication OWNER TO "gitlab-superuser";
    ALTER TABLE postgres_exporter.pg_stat_statements OWNER TO "gitlab-psql";
    ALTER TABLE postgres_exporter.pg_stat_wal_receiver OWNER TO "gitlab-superuser";

    GRANT SELECT ON TABLE postgres_exporter.pg_stat_activity TO postgres_exporter;
    GRANT ALL ON TABLE postgres_exporter.pg_stat_replication TO postgres_exporter;
    GRANT SELECT ON TABLE postgres_exporter.pg_stat_statements TO postgres_exporter;
    GRANT ALL ON TABLE postgres_exporter.pg_stat_wal_receiver TO postgres_exporter;

    COMMIT;
    ```
- [ ] **UPGRADE**: Main action – perform the physical-to-logical conversion and upgrade PostgreSQL on the target cluster. Run the Ansible playbook:

  ```shell
  ansible-playbook -i inventory/gprd-main.yml upgrade.yml \
    -e "pg_old_version=12 pg_new_version=14" \
    | ts | tee -a ansible_upgrade_$(date +%Y%m%d).log
  ```
- [ ] **RESTORE VIEWS' DEFINITIONS**:

  ```shell
  echo '
  BEGIN;
  DROP VIEW postgres_exporter.pg_stat_activity;
  DROP VIEW postgres_exporter.pg_stat_replication;
  DROP VIEW postgres_exporter.pg_stat_statements;
  DROP VIEW postgres_exporter.pg_stat_wal_receiver;
  ' > tmp.sql
  cat tmp.sql production_issue_8611_postgres_exporter_4_views_existing_def.sql \
    > production_issue_8611_postgres_exporter_4_views_existing_def_REVERT.sql
  echo 'COMMIT;' >> production_issue_8611_postgres_exporter_4_views_existing_def_REVERT.sql
  rm -f tmp.sql
  gitlab-psql < production_issue_8611_postgres_exporter_4_views_existing_def_REVERT.sql
  ```
- [ ] **CHECK**:
  - [ ] On the current primary (`patroni-main-2004-04-db-gprd.c.gitlab-production.internal`), run in tmux to log the logical replication details:

    ```shell
    for i in {1..10800}; do  # max duration 4 hours
      gitlab-psql -tAXc "
        select
          now(),
          pg_wal_lsn_diff(pg_current_wal_lsn(), sent_lsn) as lag_pending_bytes,
          pg_wal_lsn_diff(sent_lsn, write_lsn) as lag_write_bytes,
          pg_wal_lsn_diff(write_lsn, flush_lsn) as lag_flush_bytes,
          pg_wal_lsn_diff(flush_lsn, replay_lsn) as lag_replay_bytes,
          pg_wal_lsn_diff(pg_current_wal_lsn(), replay_lsn) as lag_total_bytes
        from pg_stat_replication
        where application_name = 'logical_subscription'
      " | sudo tee -a /var/opt/gitlab/production_issue_8611_$(date +%Y%m%d)_pg_stat_replication.log
      sleep 1
    done
    ```

  - [ ] On the current primary (`patroni-main-2004-04-db-gprd.c.gitlab-production.internal`), run in tmux to log CPU usage of the walsender process that performs logical replication on the source, using top:

    ```shell
    for i in {1..10800}; do  # max duration 4 hours
      top -b -n 1 -p $(
        gitlab-psql -tAXc "select pid from pg_stat_replication where application_name = 'logical_subscription'"
      ) | tail -n 1 | ts |
        sudo tee -a /var/opt/gitlab/production_issue_8611_$(date +%Y%m%d)_walsender_cpu_usage.log
      sleep 1
    done
    ```

  - [ ] On the current primary (`patroni-main-2004-04-db-gprd.c.gitlab-production.internal`), run in tmux to log disk I/O of the walsender process that performs logical replication on the source, using iotop:

    ```shell
    for i in {1..10800}; do  # max duration 4 hours
      sudo sh -c "iotop -p $( gitlab-psql -tAXc "select pid from pg_stat_replication where application_name = 'logical_subscription'" ) -k -b -t -n 1 >> /var/opt/gitlab/production_issue_8611_$(date +%Y%m%d)_walsender_iotop.log"
      sleep 1
    done
    ```

  - [ ] On the target (`patroni-main-logical-pg13-01-db-gprd.c.gitlab-production.internal`), run in tmux to log CPU usage of the "logical replication worker" process that performs logical replication on the target, using top:

    ```shell
    for i in {1..10800}; do  # max duration 4 hours
      top -b -n 1 -p $(
        ps aux | grep postgres | grep "logical replication worker" | awk '{print $2}'
      ) | tail -n 1 | ts |
        sudo tee -a /var/opt/gitlab/production_issue_8611_$(date +%Y%m%d)_logical_replication_worker_cpu_usage.log
      sleep 1
    done
    ```

  - [ ] On the target (`patroni-main-logical-pg13-01-db-gprd.c.gitlab-production.internal`), run in tmux to log disk I/O of the "logical replication worker" and of other processes that write anything (checkpointer, etc.), using iotop:

    ```shell
    sudo iotop -okbt -n 10800 -d 1 2>&1 |
      sudo tee /var/opt/gitlab/production_issue_8611_$(date +%Y%m%d)_iotop.log
    ```

  - [ ] On the target (`patroni-main-logical-pg13-01-db-gprd.c.gitlab-production.internal`), create the pg_wait_sampling extension immediately after the playbook completes:

    ```shell
    gitlab-psql -c "create extension if not exists pg_wait_sampling"
    ```

  - [ ] Wait and observe the replication lag, the PostgreSQL log, and the checks above for 4 hours.
  - [ ] On the target (`patroni-main-logical-pg13-01-db-gprd.c.gitlab-production.internal`), collect the wait events after the test is completed:

    ```shell
    # Wait events for the logical replication worker (history)
    gitlab-psql -c "select * from pg_wait_sampling_history where pid = (select pid from pg_stat_activity where backend_type = 'logical replication worker')" |
      sudo tee -a /var/opt/gitlab/production_issue_8611_$(date +%Y%m%d)_pg_wait_sampling_logical_replication_worker_history.log

    # Wait events – summary for the logical replication worker
    gitlab-psql -c "select event_type as wait_type, event as wait_event, sum(count) as of_events from pg_wait_sampling_profile where pid = (select pid from pg_stat_activity where backend_type = 'logical replication worker') group by event_type, event order by of_events desc" |
      sudo tee -a /var/opt/gitlab/production_issue_8611_$(date +%Y%m%d)_pg_wait_sampling_logical_replication_worker_summary.log

    # Wait events – summary for Postgres overall
    gitlab-psql -c "select event_type as wait_type, event as wait_event, sum(count) as of_events from pg_wait_sampling_profile group by event_type, event order by of_events desc" |
      sudo tee -a /var/opt/gitlab/production_issue_8611_$(date +%Y%m%d)_pg_wait_sampling_profile_postgres_summary.log
    ```
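As a cross-check of the `lag_*` values logged during the CHECK phase, the `pg_wal_lsn_diff()` arithmetic can be reproduced in plain shell from any two LSNs. This is a hedged sketch; `lsn_to_bytes` and `lsn_lag_bytes` are hypothetical helpers, not part of the playbook:

```shell
# Convert a PostgreSQL LSN ("hi/lo", both hex) to an absolute byte position.
# The high word counts 4 GiB (2^32-byte) units of WAL address space.
lsn_to_bytes() {
  local hi=${1%%/*} lo=${1##*/}
  echo $(( 0x$hi * 4294967296 + 0x$lo ))
}

# Byte lag between two LSNs, e.g. pg_current_wal_lsn() minus replay_lsn.
lsn_lag_bytes() {
  echo $(( $(lsn_to_bytes "$1") - $(lsn_to_bytes "$2") ))
}

lsn_lag_bytes "16/B374D848" "16/B3000000"  # prints 7657544 (~7.3 MiB behind)
```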
- [ ] **CLEANUP & FINISH**: Once all checks are done, stop replication and drop the slot:
  - [ ] On the target (`patroni-main-logical-pg13-01-db-gprd.c.gitlab-production.internal`), remove the subscription:

    ```shell
    gitlab-psql -c "DROP SUBSCRIPTION logical_subscription"
    ```

  - [ ] On the source (current primary: `patroni-main-2004-04-db-gprd.c.gitlab-production.internal`), remove the publication and the replication slot if it still exists:

    ```shell
    gitlab-psql -c "DROP PUBLICATION logical_replication"
    gitlab-psql -c "SELECT pg_drop_replication_slot('logical_replication_slot') from pg_replication_slots where slot_name = 'logical_replication_slot'"
    ```

  - [ ] Set label ~change::complete: `/label ~change::complete`
### Rollback

#### Rollback steps - steps to be taken in the event of a need to rollback this change

Estimated Time to Complete (mins) - 3
- [ ] Remove the subscription on the target (standby cluster leader):

  ```shell
  sudo gitlab-psql -c "DROP SUBSCRIPTION logical_subscription"
  ```

- [ ] Remove the publication on the source (main cluster leader), and the replication slot if it still exists:

  ```shell
  sudo gitlab-psql -c "DROP PUBLICATION logical_replication"
  sudo gitlab-psql -c "SELECT pg_drop_replication_slot('logical_replication_slot') from pg_replication_slots where slot_name = 'logical_replication_slot'"
  ```

- [ ] Stop any observation activities still running in tmux (such as walsender CPU usage logging).
- [ ] Set label ~change::aborted: `/label ~change::aborted`
### Monitoring

#### Key metrics to observe

- Metric: Patroni service Apdex
  - Location: https://dashboards.gitlab.net/d/patroni-main/patroni-overview?orgId=1
- Metric: CPU utilization
  - Location: https://dashboards.gitlab.net/d/000000144/postgresql-overview?orgId=1&from=now-6h&to=now&viewPanel=13
- Metric: database load
  - Location: https://dashboards.gitlab.net/d/000000144/postgresql-overview?orgId=1&from=now-3h&to=now&viewPanel=9&var-prometheus=Global&var-environment=gprd&var-type=patroni
- Metric: disk utilization
  - Location: https://dashboards.gitlab.net/d/000000144/postgresql-overview?orgId=1&viewPanel=10&var-prometheus=Global&var-environment=gprd&var-type=patroni
  - What changes to this metric should prompt a rollback: if disk allocation starts growing linearly, we need to roll back.
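To make "growing linearly" measurable during the observation window, disk usage can be sampled periodically and converted into a growth rate. A hedged sketch (the helper name and the data-volume path are assumptions, not part of the runbook):

```shell
# Hypothetical helper: MB/hour growth rate from two "used KB" df samples
# taken interval_secs apart. A persistently positive, roughly constant
# output indicates linear disk growth (the rollback trigger above).
growth_mb_per_hour() {
  local used_kb_then=$1 used_kb_now=$2 interval_secs=$3
  echo $(( (used_kb_now - used_kb_then) * 3600 / interval_secs / 1024 ))
}

# Example usage on the target host (path is an assumption):
#   then=$(df -k --output=used /var/opt/gitlab | tail -n 1)
#   sleep 600
#   now=$(df -k --output=used /var/opt/gitlab | tail -n 1)
#   growth_mb_per_hour "$then" "$now" 600
```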
### Change Reviewer checklist

- [ ] Check if the following applies:
  - The scheduled day and time of execution of the change is appropriate.
  - The change plan is technically accurate.
  - The change plan includes estimated timing values based on previous testing.
  - The change plan includes a viable rollback plan.
  - The specified metrics/monitoring dashboards provide sufficient visibility for the change.
- [ ] Check if the following applies:
  - The complexity of the plan is appropriate for the corresponding risk of the change (i.e. the plan contains clear details).
  - The change plan includes success measures for all steps/milestones during the execution.
  - The change adequately minimizes risk within the environment/service.
  - The performance implications of executing the change are well-understood and documented.
  - The specified metrics/monitoring dashboards provide sufficient visibility for the change.
    - If not, is it possible (or necessary) to make changes to observability platforms for added visibility?
  - The change has a primary and secondary SRE with knowledge of the details available during the change window.
  - The labels ~"blocks deployments" and/or ~"blocks feature-flags" are applied as necessary.
### Change Technician checklist

- [ ] Check if all items below are complete:
  - The change plan is technically accurate.
  - This Change Issue is linked to the appropriate Issue and/or Epic.
  - The change has been tested in staging and the results noted in a comment on this issue.
  - A dry-run has been conducted and the results noted in a comment on this issue.
  - The change execution window respects the Production Change Lock periods.
  - For C1 and C2 change issues, the change event is added to the GitLab Production calendar.
  - For C1 and C2 change issues, the SRE on-call has been informed prior to the change being rolled out. (In the #production channel, mention `@sre-oncall` and this issue and await their acknowledgement.)
  - For C1 and C2 change issues, the SRE on-call provided approval with the ~eoc_approved label on the issue.
  - For C1 and C2 change issues, the Infrastructure Manager provided approval with the ~manager_approved label on the issue.
  - Release managers have been informed (if needed; cases include DB changes) prior to the change being rolled out. (In the #production channel, mention `@release-managers` and this issue and await their acknowledgment.)
  - There are currently no active incidents that are severity1 or severity2.
  - If the change involves doing maintenance on a database host, an appropriate silence targeting the host(s) should be added for the duration of the change.