[GPRD] Execute the logical replication test in the main production cluster - Second run
<!--
Please review https://about.gitlab.com/handbook/engineering/infrastructure/change-management/ for the most recent information on our change plans and execution policies.
-->
# Production Change
### Change Summary
The goal of this change is to enable logical replication in the production environment and evaluate its performance impact and the robustness of the process. We will execute the procedure described in the action steps below and leave replication running for 3 hours.
We executed the same test in staging in the following CR:
https://gitlab.com/gitlab-com/gl-infra/production/-/issues/8080
### Change Details
1. **Services Impacted** - Database
1. **Change Technician** - @Finotto / @vitabaks
1. **Change Reviewer** - @Finotto / @alexander-sosna / @NikolayS
1. **Time tracking** - 3 hours
1. **Downtime Component** - 0
## Detailed steps for the change
### Change Steps - steps to take to execute the change
*Estimated Time to Complete (mins)* - 15 min
* [ ] PREPARE
* [x] Set label ~"change::in-progress" `/label ~change::in-progress`
* [x] Coordinate the "DDL silence" period – a 4-hour window during which no DB migrations run against the production main cluster database
* [x] On the jump host (`ssh -A deploy-01-sv-gprd.c.gitlab-production.internal`), get the Ansible [playbook](https://gitlab.com/gitlab-com/gl-infra/db-migration/-/tree/master/physical-to-logical) `physical_to_logical.yml` by cloning the repository
```shell
rm -rf db-migration # delete the playbooks directory if it's already cloned
git clone git@gitlab.com:gitlab-com/gl-infra/db-migration.git
```
* [x] Go to the working directory
```shell
cd db-migration/physical-to-logical
```
* [x] Test the connection to the hosts before running the playbook:
```shell
ansible -i inventory/gprd-main.yml all -m ping
```
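If connectivity is healthy, every host in the inventory should answer with a `pong`, along these lines (hostname shown is illustrative; exact output varies by Ansible version):
```
patroni-main-2004-04-db-gprd.c.gitlab-production.internal | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
```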
* [x] CONVERT: Main action – perform the physical-to-logical conversion by running the Ansible playbook:
```shell
ansible-playbook -i inventory/gprd-main.yml physical_to_logical.yml 2>&1 | ts | tee -a ansible.log
```
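Before moving on, it is worth confirming the run finished cleanly. A quick way to spot failures in the captured log, assuming the standard Ansible recap format:
```shell
# Host lines with failed=0 are filtered out; any host line that remains
# below the "PLAY RECAP" header indicates a failed task.
grep -A 10 "PLAY RECAP" ansible.log | grep -v "failed=0"
```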
* [ ] CHECK:
* [x] On the current primary (`patroni-main-2004-04-db-gprd.c.gitlab-production.internal`), run the following in tmux to log the logical replication lag details:
```shell
for i in {1..10800}; do # max duration 3 hours (10800 seconds)
gitlab-psql -tAXc "
select
now(),
pg_wal_lsn_diff(pg_current_wal_lsn(), sent_lsn) as lag_pending_bytes,
pg_wal_lsn_diff(sent_lsn, write_lsn) as lag_write_bytes,
pg_wal_lsn_diff(write_lsn, flush_lsn) as lag_flush_bytes,
pg_wal_lsn_diff(flush_lsn, replay_lsn) as lag_replay_bytes,
pg_wal_lsn_diff(pg_current_wal_lsn(), replay_lsn) as lag_total_bytes
from pg_stat_replication
where application_name = 'logical_subscription'
" | sudo tee -a /var/opt/gitlab/production_issue_8290_pg_stat_replication.log
sleep 1
done
```
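After the test, the collected log can be summarized without extra tooling. As a rough look at the worst observed total lag, something like the following works, assuming the pipe-separated `-A` output format produced above, where `lag_total_bytes` is the 6th field:
```shell
# Show the 5 samples with the highest total lag (bytes).
sort -t'|' -k6 -n /var/opt/gitlab/production_issue_8290_pg_stat_replication.log | tail -n 5
```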
* [x] On the current primary (`patroni-main-2004-04-db-gprd.c.gitlab-production.internal`), run the following in tmux to log CPU usage of the "walsender" process that performs logical replication on the source, using top:
```shell
for i in {1..10800}; do # max duration 3 hours (10800 seconds)
top -b -n 1 -p $(
gitlab-psql -tAXc "select pid from pg_stat_replication where application_name = 'logical_subscription'"
) | tail -n 1 | ts | sudo tee -a /var/opt/gitlab/production_issue_8290_walsender_cpu_usage.log
sleep 1
done
```
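A rough post-hoc average of walsender CPU usage can be pulled from this log with awk. The field position is an assumption: `ts` prepends three timestamp fields to the default 12-column `top` row, so %CPU should land in field 12:
```shell
awk '{ sum += $12; n++ } END { if (n) printf "avg %%CPU: %.1f over %d samples\n", sum/n, n }' \
  /var/opt/gitlab/production_issue_8290_walsender_cpu_usage.log
```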
* [x] On the current primary (`patroni-main-2004-04-db-gprd.c.gitlab-production.internal`), run the following in tmux to log disk I/O usage of the "walsender" process that performs logical replication on the source, using iotop:
```shell
for i in {1..10800}; do # max duration 3 hours (10800 seconds)
sudo sh -c "iotop -p $(
gitlab-psql -tAXc "select pid from pg_stat_replication where application_name = 'logical_subscription'"
) -k -b -t -n 1 >> /var/opt/gitlab/production_issue_8290_walsender_iotop.log"
sleep 1
done
```
* [x] On the target (`patroni-main-logical-pg13-01-db-gprd.c.gitlab-production.internal`), run the following in tmux to log CPU usage of the "logical replication worker" process that applies logical replication on the target, using top:
```shell
for i in {1..10800}; do # max duration 3 hours (10800 seconds)
top -b -n 1 -p $(
ps aux | grep postgres | grep "logical replication worker" | awk '{print $2}' | head -n 1
) | tail -n 1 | ts | sudo tee -a /var/opt/gitlab/production_issue_8290_logical_replication_worker_cpu_usage.log
sleep 1
done
```
* [x] On the target (`patroni-main-logical-pg13-01-db-gprd.c.gitlab-production.internal`), run the following in tmux to log disk I/O usage of the "logical replication worker" and other processes that write anything (checkpointer, etc.), using `iotop`:
```shell
sudo iotop -okbt -n 10800 -d 1 2>&1 \
| sudo tee /var/opt/gitlab/production_issue_8290_iotop.log
```
* [x] On the target (`patroni-main-logical-pg13-01-db-gprd.c.gitlab-production.internal`), create the `pg_wait_sampling` extension immediately after the playbook completes:
```shell
gitlab-psql -c "create extension if not exists pg_wait_sampling"
```
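Optionally, verify the extension is in place and reset the profile so that the later summaries cover only the test window (`pg_wait_sampling_reset_profile()` is shipped with the extension):
```shell
gitlab-psql -c "select extname, extversion from pg_extension where extname = 'pg_wait_sampling'"
gitlab-psql -c "select pg_wait_sampling_reset_profile()"
```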
* [x] Wait and observe the replication lag, the PostgreSQL log, and the checks above for 3 hours (a table synchronization sanity check is sketched below).
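While waiting, a useful sanity check on the target is the initial table synchronization state; every table should eventually reach state `r` (ready):
```shell
gitlab-psql -c "select srsubstate, count(*) from pg_subscription_rel group by 1 order by 1"
```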
* [x] On the target (`patroni-main-logical-pg13-01-db-gprd.c.gitlab-production.internal`), collect the wait events after the test is completed:
```shell
# Wait events for logical replication worker (history)
gitlab-psql -c "select * from pg_wait_sampling_history
where pid = (select pid from pg_stat_activity where backend_type = 'logical replication worker');
" | sudo tee -a /var/opt/gitlab/production_issue_8290_pg_wait_sampling_logical_replication_worker_history.log
# Wait events - summary for logical replication worker
gitlab-psql -c "select event_type as wait_type, event as wait_event, sum(count) as of_events
from pg_wait_sampling_profile
where pid = (select pid from pg_stat_activity where backend_type = 'logical replication worker')
group by event_type, event
order by of_events desc" | sudo tee -a /var/opt/gitlab/production_issue_8290_pg_wait_sampling_logical_replication_worker_summary.log
# Wait events - summary for Postgres
gitlab-psql -c "select event_type as wait_type, event as wait_event, sum(count) as of_events
from pg_wait_sampling_profile
group by event_type, event
order by of_events desc" | sudo tee -a /var/opt/gitlab/production_issue_8290_pg_wait_sampling_profile_postgres_summary.log
```
* [ ] CLEANUP & FINISH: Once all checks are done, stop replication and drop the slot:
* [x] On the target (`patroni-main-logical-pg13-01-db-gprd.c.gitlab-production.internal`), remove subscription:
```shell
gitlab-psql -c "DROP SUBSCRIPTION logical_subscription"
```
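If `DROP SUBSCRIPTION` fails because the remote slot cannot be dropped (for example, if the publisher is unreachable), the standard PostgreSQL fallback is to detach the slot first and drop the slot manually on the source afterwards:
```shell
gitlab-psql -c "ALTER SUBSCRIPTION logical_subscription DISABLE"
gitlab-psql -c "ALTER SUBSCRIPTION logical_subscription SET (slot_name = NONE)"
gitlab-psql -c "DROP SUBSCRIPTION logical_subscription"
```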
* [x] On the source (current primary: `patroni-main-2004-04-db-gprd.c.gitlab-production.internal`), remove the publication and the replication slot (if it exists):
```shell
gitlab-psql -c "DROP PUBLICATION logical_replication"
gitlab-psql -c "SELECT pg_drop_replication_slot('logical_replication_slot') from pg_replication_slots where slot_name = 'logical_replication_slot'"
```
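To confirm nothing is left behind on the source, both catalogs can be checked; each query should return zero rows:
```shell
gitlab-psql -c "select slot_name, active from pg_replication_slots where slot_name = 'logical_replication_slot'"
gitlab-psql -c "select pubname from pg_publication where pubname = 'logical_replication'"
```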
* [x] Set label ~"change::complete" `/label ~change::complete`
## Rollback
### Rollback steps - steps to be taken in the event of a need to rollback this change
*Estimated Time to Complete (mins)* - 3 min
* [ ] Remove subscription on the target (Standby Cluster leader): `sudo gitlab-psql -c "DROP SUBSCRIPTION logical_subscription"`
* [ ] Remove publication on the source (Main Cluster leader): `sudo gitlab-psql -c "DROP PUBLICATION logical_replication"`, and replication slot (if exists) `sudo gitlab-psql -c "SELECT pg_drop_replication_slot('logical_replication_slot') from pg_replication_slots where slot_name = 'logical_replication_slot'"`
* [ ] Stop any observation activities still running in tmux (such as walsender CPU usage logging)
* [ ] Set label ~"change::aborted" `/label ~change::aborted`
## Monitoring
### Key metrics to observe
* Metric: CPU utilization
  * Location: https://dashboards.gitlab.net/d/000000144/postgresql-overview?orgId=1&from=now-6h&to=now&viewPanel=13
* Metric: database load
  * Location: https://dashboards.gitlab.net/d/000000144/postgresql-overview?orgId=1&from=now-3h&to=now&viewPanel=9
* Metric: disk utilization
  * Location: https://dashboards.gitlab.net/d/000000144/postgresql-overview?orgId=1&viewPanel=10
  * What changes to this metric should prompt a rollback: if disk allocation starts growing linearly, we need to roll back.
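For the disk-growth signal in particular, a simple live view on the database hosts complements the dashboard (the mount point below is an assumption based on the log paths used in this plan):
```shell
# Refresh disk usage every 60 seconds; steady linear growth is the rollback trigger.
watch -n 60 df -h /var/opt/gitlab
```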
## Change Reviewer checklist
<!--
To be filled out by the reviewer.
-->
~C4 ~C3 ~C2 ~C1:
- [ ] Check if the following applies:
- The **scheduled day and time** of execution of the change is appropriate.
- The [change plan](#detailed-steps-for-the-change) is technically accurate.
- The change plan includes **estimated timing values** based on previous testing.
- The change plan includes a viable [rollback plan](#rollback).
- The specified [metrics/monitoring dashboards](#key-metrics-to-observe) provide sufficient visibility for the change.
~C2 ~C1:
- [ ] Check if the following applies:
- The complexity of the plan is appropriate for the corresponding risk of the change. (i.e. the plan contains clear details).
- The change plan includes success measures for all steps/milestones during the execution.
- The change adequately minimizes risk within the environment/service.
- The performance implications of executing the change are well-understood and documented.
- The specified metrics/monitoring dashboards provide sufficient visibility for the change.
- If not, is it possible (or necessary) to make changes to observability platforms for added visibility?
- The change has a primary and secondary SRE with knowledge of the details available during the change window.
- The labels ~"blocks deployments" and/or ~"blocks feature-flags" are applied as necessary
## Change Technician checklist
<!--
To find out who is on-call, in #production channel run: /chatops run oncall production.
-->
- [ ] Check if all items below are complete:
- The [change plan](#detailed-steps-for-the-change) is technically accurate.
- This Change Issue is linked to the appropriate Issue and/or Epic
- Change has been tested in staging and results noted in a comment on this issue.
- A dry-run has been conducted and results noted in a comment on this issue.
- The change execution window respects the [Production Change Lock periods](https://about.gitlab.com/handbook/engineering/infrastructure/change-management/#production-change-lock-pcl).
- For ~C1 and ~C2 change issues, the change event is added to the [GitLab Production](https://calendar.google.com/calendar/embed?src=gitlab.com_si2ach70eb1j65cnu040m3alq0%40group.calendar.google.com) calendar.
- For ~C1 and ~C2 change issues, the SRE on-call has been informed prior to change being rolled out. (In #production channel, mention `@sre-oncall` and this issue and await their acknowledgement.)
- For ~C1 and ~C2 change issues, the SRE on-call provided approval with the ~eoc_approved label on the issue.
- For ~C1 and ~C2 change issues, the Infrastructure Manager provided approval with the ~manager_approved label on the issue.
- Release managers have been informed (If needed! Cases include DB change) prior to change being rolled out. (In #production channel, mention `@release-managers` and this issue and await their acknowledgment.)
- There are currently no [active incidents](https://gitlab.com/gitlab-com/gl-infra/production/-/issues?scope=all&utf8=%E2%9C%93&state=opened&label_name[]=Incident%3A%3AActive) that are ~severity::1 or ~severity::2
  - If the change involves doing maintenance on a database host, an appropriate silence targeting the host(s) should be added for the duration of the change.