
[GPRD] Execute the logical replication test in the main production cluster - Second run

Production Change

Change Summary

The goal of this change is to enable logical replication in the production environment and to evaluate its performance impact and the robustness of the process. We will execute the procedure described in the action steps and leave it running for 3 hours.

We executed the same test in staging with the CR: #8080 (closed)

Change Details

  1. Services Impacted - Database
  2. Change Technician - @Finotto / @vitabaks
  3. Change Reviewer - @Finotto / @alexander-sosna / @NikolayS
  4. Time tracking - 3 hours
  5. Downtime Component - 0

Detailed steps for the change

Change Steps - steps to take to execute the change

Estimated Time to Complete (mins) - 15 min

  • PREPARE

    • Set label ~change::in-progress: /label ~change::in-progress

    • Coordinate the "DDL silence" period – a 4-hour window during which no DB migrations are run in production against the main cluster database

    • On the jump host (ssh -A deploy-01-sv-gprd.c.gitlab-production.internal), get the Ansible playbook physical_to_logical.yml by cloning the repository

      rm -rf db-migration # delete the playbooks directory if it's already cloned
      git clone git@gitlab.com:gitlab-com/gl-infra/db-migration.git
    • Go to the working directory

      cd db-migration/physical-to-logical
    • Test the connection to the hosts before running the playbook:

      ansible -i inventory/gprd-main.yml all -m ping
  • CONVERT: Main action – perform the physical-to-logical conversion. Run Ansible playbook:

    ansible-playbook -i inventory/gprd-main.yml physical_to_logical.yml 2>&1 | ts | tee -a ansible.log
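  • (Optional) After the playbook completes, scan ansible.log for failing tasks. A minimal sketch — the 'fatal'/'FAILED'/'UNREACHABLE' patterns are an assumption about Ansible's usual failure wording, so still review the log manually:

```shell
# Print any log lines that look like failed or unreachable tasks.
grep -Ei 'fatal|failed|unreachable' ansible.log
```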
  • CHECK:

    • On the current primary (patroni-main-2004-04-db-gprd.c.gitlab-production.internal), run in tmux, to log the logical replication details:
      for i in {1..10800}; do # up to ~4 hours (10800 iterations, ≥1 s each)
          gitlab-psql -tAXc "
            select
              now(),
              pg_wal_lsn_diff(pg_current_wal_lsn(), sent_lsn) as lag_pending_bytes,
              pg_wal_lsn_diff(sent_lsn, write_lsn) as lag_write_bytes,
              pg_wal_lsn_diff(write_lsn, flush_lsn) as lag_flush_bytes,
              pg_wal_lsn_diff(flush_lsn, replay_lsn) as lag_replay_bytes,
              pg_wal_lsn_diff(pg_current_wal_lsn(), replay_lsn) as lag_total_bytes
            from pg_stat_replication
            where application_name = 'logical_subscription'    
          " | sudo tee -a /var/opt/gitlab/production_issue_8290_pg_stat_replication.log
        sleep 1
      done
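    • (Optional, after the test) Summarize the collected lag samples. A minimal sketch, assuming the '|'-separated unaligned output that gitlab-psql -tAX produces and the log path used above:

```shell
# Report the peak total logical replication lag and when it occurred.
# Each sample line: timestamp|pending|write|flush|replay|total (bytes).
awk -F'|' '
  NF == 6 && $6 + 0 == $6 { if ($6 > max) { max = $6; ts = $1 } }
  END { printf "peak total lag: %d bytes at %s\n", max, ts }
' /var/opt/gitlab/production_issue_8290_pg_stat_replication.log
```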
    • On the current primary (patroni-main-2004-04-db-gprd.c.gitlab-production.internal), run in tmux, to log CPU usage of the "walsender" process that performs logical replication on the source, using top:
      for i in {1..10800}; do # up to ~4 hours (10800 iterations, ≥1 s each)
        top -b -n 1 -p $(
          gitlab-psql -tAXc "select pid from pg_stat_replication where application_name = 'logical_subscription'"
        ) | tail -n 1 | ts | sudo tee -a /var/opt/gitlab/production_issue_8290_walsender_cpu_usage.log
        sleep 1
      done
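    • (Optional, after the test) Average the walsender CPU usage from the collected samples. A minimal sketch — the field position is an assumption (top's default batch layout puts %CPU in column 9, shifted to column 12 by the three timestamp tokens that ts prepends):

```shell
# Average %CPU across all ts-prefixed top sample lines.
awk '{ sum += $12; n++ } END { if (n) printf "avg CPU%%: %.1f over %d samples\n", sum / n, n }' \
  /var/opt/gitlab/production_issue_8290_walsender_cpu_usage.log
```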
    • On the current primary (patroni-main-2004-04-db-gprd.c.gitlab-production.internal), run in tmux, to log disk I/O usage of the "walsender" process that performs logical replication on the source, using iotop:
      for i in {1..10800}; do # up to ~4 hours (10800 iterations, ≥1 s each)
        sudo sh -c "iotop -p $(
          gitlab-psql -tAXc "select pid from pg_stat_replication where application_name = 'logical_subscription'"
        ) -k -b -t -n 1 >> /var/opt/gitlab/production_issue_8290_walsender_iotop.log"
        sleep 1
      done
    • On the target (patroni-main-logical-pg13-01-db-gprd.c.gitlab-production.internal), run in tmux, to log CPU usage of the "logical replication worker" process that performs logical replication on the target, using top:
      for i in {1..10800}; do # up to ~4 hours (10800 iterations, ≥1 s each)
        top -b -n 1 -p $(
          ps aux | grep postgres | grep "[l]ogical replication worker" | awk '{print $2}' | paste -sd,
        ) | tail -n 1 | ts | sudo tee -a /var/opt/gitlab/production_issue_8290_logical_replication_worker_cpu_usage.log
        sleep 1
      done
    • On the target (patroni-main-logical-pg13-01-db-gprd.c.gitlab-production.internal), run in tmux, to log disk I/O usage of the "logical replication worker" and other processes that write anything (checkpointer, etc.), using iotop:
      sudo iotop -okbt -n 10800 -d 1 2>&1 \
        | sudo tee /var/opt/gitlab/production_issue_8290_iotop.log
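    • (Optional, after the test) Find the peak write rate in the iotop log. A minimal sketch — the field position is an assumption (iotop's -k -t batch layout puts the write rate in K/s in column 7):

```shell
# Peak disk write rate (K/s) across all iotop samples;
# lines whose 7th field is non-numeric (headers/summaries) are skipped.
awk '$7 + 0 == $7 { if ($7 > max) max = $7 } END { printf "peak write: %.2f K/s\n", max }' \
  /var/opt/gitlab/production_issue_8290_iotop.log
```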
    • On the target (patroni-main-logical-pg13-01-db-gprd.c.gitlab-production.internal), create an extension pg_wait_sampling immediately after completing the playbook:
      gitlab-psql -c "create extension if not exists pg_wait_sampling"
    • Wait and observe the replication lag, PostgreSQL log, and the checks as above for 4 hours.
    • On the target (patroni-main-logical-pg13-01-db-gprd.c.gitlab-production.internal), collect the wait events after the test is completed:
      # Wait events for logical replication worker (history)
      gitlab-psql -c "select * from pg_wait_sampling_history
      where pid = (select pid from pg_stat_activity where backend_type = 'logical replication worker');
      " | sudo tee -a /var/opt/gitlab/production_issue_8290_pg_wait_sampling_logical_replication_worker_history.log
      
      # Wait events - summary for logical replication worker
      gitlab-psql -c "select event_type as wait_type, event as wait_event, sum(count) as of_events 
      from pg_wait_sampling_profile
      where pid = (select pid from pg_stat_activity where backend_type = 'logical replication worker')
      group by event_type, event
      order by of_events desc" | sudo tee -a /var/opt/gitlab/production_issue_8290_pg_wait_sampling_logical_replication_worker_summary.log
      
      # Wait events - summary for Postgres
      gitlab-psql -c "select event_type as wait_type, event as wait_event, sum(count) as of_events 
      from pg_wait_sampling_profile
      group by event_type, event
      order by of_events desc" | sudo tee -a /var/opt/gitlab/production_issue_8290_pg_wait_sampling_profile_postgres_summary.log

CLEANUP & FINISH: Once all checks are done, stop replication and drop the slot:

  • On the target (patroni-main-logical-pg13-01-db-gprd.c.gitlab-production.internal), remove the subscription:

    gitlab-psql -c "DROP SUBSCRIPTION logical_subscription"
  • On the source (current primary: patroni-main-2004-04-db-gprd.c.gitlab-production.internal), remove the publication and the replication slot (if it exists):

    gitlab-psql -c "DROP PUBLICATION logical_replication"
    gitlab-psql -c "SELECT pg_drop_replication_slot('logical_replication_slot') from pg_replication_slots where slot_name = 'logical_replication_slot'"
  • Set label ~change::complete: /label ~change::complete

Rollback

Rollback steps - steps to be taken in the event of a need to rollback this change

Estimated Time to Complete (mins) - 3 min

  • Remove the subscription on the target (Standby Cluster leader): sudo gitlab-psql -c "DROP SUBSCRIPTION logical_subscription"
  • Remove the publication on the source (Main Cluster leader): sudo gitlab-psql -c "DROP PUBLICATION logical_replication", and the replication slot (if it exists): sudo gitlab-psql -c "SELECT pg_drop_replication_slot('logical_replication_slot') from pg_replication_slots where slot_name = 'logical_replication_slot'"
  • Stop any observation activities still running in tmux (such as walsender CPU usage logging)
  • Set label ~change::aborted: /label ~change::aborted

Monitoring

Key metrics to observe

Change Reviewer checklist

C4 C3 C2 C1:

  • Check if the following applies:
    • The scheduled day and time of execution of the change is appropriate.
    • The change plan is technically accurate.
    • The change plan includes estimated timing values based on previous testing.
    • The change plan includes a viable rollback plan.
    • The specified metrics/monitoring dashboards provide sufficient visibility for the change.

C2 C1:

  • Check if the following applies:
    • The complexity of the plan is appropriate for the corresponding risk of the change. (i.e. the plan contains clear details).
    • The change plan includes success measures for all steps/milestones during the execution.
    • The change adequately minimizes risk within the environment/service.
    • The performance implications of executing the change are well-understood and documented.
    • The specified metrics/monitoring dashboards provide sufficient visibility for the change.
      • If not, is it possible (or necessary) to make changes to observability platforms for added visibility?
    • The change has a primary and secondary SRE with knowledge of the details available during the change window.
    • The labels ~"blocks deployments" and/or ~"blocks feature-flags" are applied as necessary

Change Technician checklist

  • Check if all items below are complete:
    • The change plan is technically accurate.
    • This Change Issue is linked to the appropriate Issue and/or Epic
    • Change has been tested in staging and results noted in a comment on this issue.
    • A dry-run has been conducted and results noted in a comment on this issue.
    • The change execution window respects the Production Change Lock periods.
    • For C1 and C2 change issues, the change event is added to the GitLab Production calendar.
    • For C1 and C2 change issues, the SRE on-call has been informed prior to change being rolled out. (In #production channel, mention @sre-oncall and this issue and await their acknowledgement.)
    • For C1 and C2 change issues, the SRE on-call provided approval with the eoc_approved label on the issue.
    • For C1 and C2 change issues, the Infrastructure Manager provided approval with the manager_approved label on the issue.
    • Release managers have been informed (If needed! Cases include DB change) prior to change being rolled out. (In #production channel, mention @release-managers and this issue and await their acknowledgment.)
    • There are currently no active incidents that are severity1 or severity2
    • If the change involves doing maintenance on a database host, an appropriate silence targeting the host(s) should be added for the duration of the change.
Edited by Jose Finotto