[WIP] Roll out logical replication in staging – with updated process
Production Change

## Change Summary

The goal of this change is to enable logical replication on staging, evaluate its performance impact, and assess how robust the process is. We will execute the procedure described in the change steps below and leave replication running for 72 hours.
## Change Details

- Services Impacted: Database
- Change Technician: @vitabaks
- Change Reviewer: @Finotto / @alexander-sosna
- Time tracking: 2 hours
- Downtime Component: 0
## Detailed steps for the change
### Change Steps - steps to take to execute the change

Estimated Time to Complete (mins): 15 min
#### PREPARE

- Set label `change::in-progress`: `/label ~change::in-progress`
- Coordinate the "DDL silence" period – a window of several hours during which no DB migrations run in gstg. (TODO: specify how many hours and add concrete steps for how to implement this; see the sketch after this list.)
- On the jump host (TODO: put host here), get the Ansible playbook `physical_to_logical.yml` by cloning the repository: `git clone git@gitlab.com:gitlab-com/gl-infra/db-migration.git`. If it is already cloned, ensure that it is on `master` and the code is up to date: `git checkout master; git pull; git status`. (TODO: fix the repos on both instances to match.)
- Go to the working directory: `cd db-migration/physical-to-logical`
- Test the connection to the hosts before running the playbook: `ansible -i inventory/gstg.yml all -m ping`
- (gstg only! do not copy this to gprd) In tmux, in a separate tab/pane, run the following to generate some workload on the source, to guarantee that there is activity to test the replication process, and leave it running (estimated duration: ~3 hours):

  ```shell
  gitlab-psql -c "drop table if exists postgres_logical_test_issue_production_8080; create table postgres_logical_test_issue_production_8080 (id bigserial, data text);"
  for i in {1..108000}; do  # ~3 hours at 0.1 s per iteration
    gitlab-psql -c "insert into postgres_logical_test_issue_production_8080 (data) select md5(random()::text)"
    echo $(date)
    sleep 0.1s
  done
  gitlab-psql -c "drop table postgres_logical_test_issue_production_8080"
  ```
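For the "DDL silence" coordination above, something like the following could be run on the source primary to spot in-flight DDL before starting the conversion. This is a minimal sketch and not part of the tested procedure; the regex is only a rough heuristic for DDL statements:

```shell
# Sketch (assumption, not from the tested runbook): list sessions that
# appear to be running DDL on the source primary.
gitlab-psql -c "select pid, usename, state, left(query, 80) as query
                from pg_stat_activity
                where query ~* '^\s*(alter|create|drop)\b'
                  and state <> 'idle'"
```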
#### CONVERT

Main action – perform the physical-to-logical conversion. Run the Ansible playbook:

```shell
ansible-playbook -i inventory/gstg.yml physical_to_logical.yml
```
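As a quick sanity check after the playbook finishes (my suggestion, not an output of the playbook itself), the replication objects it is expected to create can be confirmed from the system catalogs:

```shell
# Sketch: verify the expected replication objects exist after conversion.
# On the source (publisher):
sudo gitlab-psql -c "select pubname from pg_publication"
sudo gitlab-psql -c "select slot_name, active from pg_replication_slots where slot_name = 'logical_replication_slot'"
# On the target (subscriber):
sudo gitlab-psql -c "select subname, subenabled from pg_subscription"
```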
#### CHECK-1

- On the current primary (`patroni-main-2004-02-db-gstg.c.gitlab-staging-1.internal`), make sure that the logical replication sync is complete and that there is no high replication lag on the source. The value returned should be non-empty, not NULL, and low (<1 GiB):

  ```shell
  gitlab-psql -c "select pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), confirmed_flush_lsn)) as logical_replication_slot_lag from pg_replication_slots where slot_name = 'logical_replication_slot'"
  ```

- Log CPU usage of the `walsender` worker that performs logical replication on the source:

  ```shell
  while sleep 10; do
    pidstat -h -u -p $(sudo gitlab-psql -tAXc "select pid from pg_stat_replication where application_name = 'logical_subscription'")
  done 2>&1 | ts | sudo tee -a /var/log/gitlab/production_issue_8080_walsender_pidstat.log
  ```

- Wait and observe the lag and `walsender` CPU usage (same checks as above) for XX hours. (TODO: specify how many hours to observe.)
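For the multi-hour observation window, the lag query above can be wrapped in a polling loop so the history is captured in a log file alongside the pidstat output. A minimal sketch mirroring the pidstat pattern; the log file name is my assumption, not from the runbook:

```shell
# Sketch: sample logical replication slot lag every 60 s and append it,
# timestamped, to a log file (assumed path/name).
while sleep 60; do
  gitlab-psql -tAXc "select pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), confirmed_flush_lsn)) from pg_replication_slots where slot_name = 'logical_replication_slot'"
done 2>&1 | ts | sudo tee -a /var/log/gitlab/production_issue_8080_slot_lag.log
```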
#### CHECK-2: check alerting

- On the target (`patroni-main-logical-pg13-01-db-gstg.c.gitlab-staging-1.internal`), suspend logical replication:

  ```shell
  gitlab-psql -c "ALTER SUBSCRIPTION logical_subscription DISABLE"
  ```

- Wait till the lag observed on the source (primary) reaches XX GiB. (TODO: specify the value and add a URL to the dashboard showing the lag.)
- Make sure that the alarm about high replication lag is triggered.
- On the target (`patroni-main-logical-pg13-01-db-gstg.c.gitlab-staging-1.internal`), after checking the alarm, we can resume logical replication (TODO: decide if we do need this step):

  ```shell
  gitlab-psql -c "ALTER SUBSCRIPTION logical_subscription ENABLE"
  ```
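One way to confirm the subscription actually stopped and resumed (a suggested check, not part of the original plan) is to look at its state on the subscriber before and after each step:

```shell
# Sketch: on the target, check whether the subscription is enabled and
# whether a worker is currently attached to it.
sudo gitlab-psql -c "select subname, subenabled from pg_subscription where subname = 'logical_subscription'"
sudo gitlab-psql -c "select subname, pid, received_lsn, latest_end_time from pg_stat_subscription where subname = 'logical_subscription'"
```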
#### CLEANUP & FINISH

Once all checks are done, stop replication and drop the slot:

- On the target (`patroni-main-logical-pg13-01-db-gstg.c.gitlab-staging-1.internal`), remove the subscription:

  ```shell
  sudo gitlab-psql -c "DROP SUBSCRIPTION logical_subscription"
  ```

- On the source (current primary: `patroni-main-2004-02-db-gstg.c.gitlab-staging-1.internal`), stop any observation activities in tmux, if any (such as walsender CPU usage logging).
- On the source (current primary: `patroni-main-2004-02-db-gstg.c.gitlab-staging-1.internal`), remove the publication and the replication slot (if it exists):

  ```shell
  sudo gitlab-psql -c "DROP PUBLICATION logical_replication"
  sudo gitlab-psql -c "SELECT pg_drop_replication_slot('logical_replication_slot') from pg_replication_slots where slot_name = 'logical_replication_slot'"
  ```

- Set label `change::complete`: `/label ~change::complete`
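A final verification sketch (my addition): confirm that nothing was left behind, since an orphaned logical replication slot would keep retaining WAL on the source.

```shell
# Sketch: all three queries should return 0 rows after cleanup.
# On the source:
sudo gitlab-psql -c "select slot_name from pg_replication_slots where slot_name = 'logical_replication_slot'"
sudo gitlab-psql -c "select pubname from pg_publication where pubname = 'logical_replication'"
# On the target:
sudo gitlab-psql -c "select subname from pg_subscription where subname = 'logical_subscription'"
```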
## Rollback

### Rollback steps - steps to be taken in the event of a need to rollback this change

Estimated Time to Complete (mins): 3 min

- Stop any observation activities in tmux, if any (such as walsender CPU usage logging).
- Remove the subscription on the target (Standby Cluster leader):

  ```shell
  sudo gitlab-psql -c "DROP SUBSCRIPTION logical_subscription"
  ```

- Remove the publication on the source (Main Cluster leader), and the replication slot (if it exists):

  ```shell
  sudo gitlab-psql -c "DROP PUBLICATION logical_replication"
  sudo gitlab-psql -c "SELECT pg_drop_replication_slot('logical_replication_slot') from pg_replication_slots where slot_name = 'logical_replication_slot'"
  ```

- Set label `change::aborted`: `/label ~change::aborted`
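After rolling back, it may be worth confirming on the source that no logical walsender for this subscription is still connected and that the slot is gone (a suggested check, mirroring the queries used above):

```shell
# Sketch: both queries should return 0 rows once the rollback is complete.
sudo gitlab-psql -c "select pid, application_name from pg_stat_replication where application_name = 'logical_subscription'"
sudo gitlab-psql -c "select slot_name from pg_replication_slots where slot_name = 'logical_replication_slot'"
```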
## Monitoring

### Key metrics to observe

- Metric: CPU utilization
  - Location: https://dashboards.gitlab.net/d/000000144/postgresql-overview?orgId=1&from=now-6h&to=now&viewPanel=13
- Metric: database load
  - Location: https://dashboards.gitlab.net/d/000000144/postgresql-overview?orgId=1&from=now-3h&to=now&viewPanel=9
- Metric: Disk utilization
  - Location: https://dashboards.gitlab.net/d/000000144/postgresql-overview?orgId=1&viewPanel=10
  - What changes to this metric should prompt a rollback: if disk allocation starts growing linearly (for example, WAL retained by the replication slot), we need to roll back.
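In addition to the dashboard, disk pressure from retained WAL can be spot-checked directly on the source. A suggested command, assuming `pg_ls_waldir()` is available (PostgreSQL 10+):

```shell
# Sketch: total size of the WAL directory on the source; steady linear
# growth while the subscription is disabled points at slot-retained WAL.
sudo gitlab-psql -c "select pg_size_pretty(sum(size)) as wal_dir_size from pg_ls_waldir()"
```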
## Change Reviewer checklist

- Check if the following applies:
  - The scheduled day and time of execution of the change is appropriate.
  - The change plan is technically accurate.
  - The change plan includes estimated timing values based on previous testing.
  - The change plan includes a viable rollback plan.
  - The specified metrics/monitoring dashboards provide sufficient visibility for the change.
- Check if the following applies:
  - The complexity of the plan is appropriate for the corresponding risk of the change (i.e. the plan contains clear details).
  - The change plan includes success measures for all steps/milestones during the execution.
  - The change adequately minimizes risk within the environment/service.
  - The performance implications of executing the change are well-understood and documented.
  - The specified metrics/monitoring dashboards provide sufficient visibility for the change.
    - If not, is it possible (or necessary) to make changes to observability platforms for added visibility?
  - The change has a primary and secondary SRE with knowledge of the details available during the change window.
  - The labels `blocks deployments` and/or `blocks feature-flags` are applied as necessary.
## Change Technician checklist

- Check if all items below are complete:
  - The change plan is technically accurate.
  - This Change Issue is linked to the appropriate Issue and/or Epic.
  - Change has been tested in staging and results noted in a comment on this issue.
  - A dry-run has been conducted and results noted in a comment on this issue.
  - The change execution window respects the Production Change Lock periods.
  - For C1 and C2 change issues, the change event is added to the GitLab Production calendar.
  - For C1 and C2 change issues, the SRE on-call has been informed prior to the change being rolled out. (In the #production channel, mention @sre-oncall and this issue and await their acknowledgement.)
  - For C1 and C2 change issues, the SRE on-call provided approval with the eoc_approved label on the issue.
  - For C1 and C2 change issues, the Infrastructure Manager provided approval with the manager_approved label on the issue.
  - Release managers have been informed (if needed! cases include DB changes) prior to the change being rolled out. (In the #production channel, mention @release-managers and this issue and await their acknowledgment.)
  - There are currently no active incidents that are severity::1 or severity::2.
  - If the change involves doing maintenance on a database host, an appropriate silence targeting the host(s) should be added for the duration of the change.