[GPRD][Sec Decomp] - Phase 4 Rollback - Swap pgbouncer-sec read traffic back to main
Production Change
Change Summary
Summary: In this phase we need to swap pgbouncer-sec read traffic from patroni-sec back to patroni-main.
Now that both read and write traffic is enabled on pgbouncer-sec, we need to update our pgbouncer configuration to begin sending read traffic back to patroni-main instead of patroni-sec. This will ensure we can perform our physical-to-logical switchover without issue.
Background: In Phase 4 of decomposition we change the rails application to start using a new connection for read-write queries. This new read-write connection points to a new set of PGBouncer hosts (called "PGBouncer Sec"). These PGBouncer hosts still send writes to the main Patroni cluster (as we are not yet fully ready to decompose the Sec database). Once both read and write traffic was configured through pgbouncer-sec, we began serving reads from the patroni-sec replicas (synced via cascading replication) to validate that queries work as expected.
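For orientation, the effect of this change on the pgbouncer-sec side can be pictured as a host swap in its database definitions. The stanza below is a hand-written illustration only; the entry names and consul addresses are assumptions, not the actual chef-managed configuration:

```ini
; Illustrative pgbouncer-sec database entries -- NOT the real chef-managed
; config; host names are assumed consul service addresses.
[databases]
; write traffic: unchanged, still routed to the patroni-main primary
gitlabhq_production = host=master.patroni.service.consul port=5432 pool_size=100
; read traffic: previously pointed at patroni-sec replicas (cascading
; replication); after this change it points back at patroni-main replicas
gitlabhq_production_replica = host=replica.patroni.service.consul port=5432 pool_size=100
```

With both entries resolving to the main cluster, the later physical-to-logical switchover does not have to drain live read traffic off patroni-sec first.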
Accessing the rails and database consoles
Production
- rails: `ssh $USER-rails@console-01-sv-gprd.c.gitlab-production.internal`
- main db replica: `ssh $USER-db@console-01-sv-gprd.c.gitlab-production.internal`
- main db primary: `ssh $USER-db-primary@console-01-sv-gprd.c.gitlab-production.internal`
- ci db replica: `ssh $USER-db-ci@console-01-sv-gprd.c.gitlab-production.internal`
- ci db primary: `ssh $USER-db-ci-primary@console-01-sv-gprd.c.gitlab-production.internal`
- main db psql: `ssh -t patroni-main-v14-02-db-gprd.c.gitlab-production.internal sudo gitlab-psql`
- ci db psql: `ssh -t patroni-ci-v14-02-db-gprd.c.gitlab-production.internal sudo gitlab-psql`
- registry db psql: `ssh -t patroni-registry-v14-01-db-gprd.c.gitlab-production.internal sudo gitlab-psql`
Dashboards and debugging
These dashboards might be useful during the rollout:
Production
- Patroni-sec Dashboard
- Database Decomposition overview
- PostgreSQL replication overview
- Triage overview
- Sidekiq overview
- Sentry - includes application errors
- Logs (Kibana)
Destination db: sec
- monitoring_pgbouncer_gitlab_user_conns
- monitoring_chef_client_enabled
- monitoring_chef_client_last_run
- monitoring_chef_client_error
- monitoring_snapshot_last_run
- monitoring_user_tables_writes
- monitoring_user_tables_reads
- monitoring_gitlab_maintenance_mode
Source db: main
- monitoring_pgbouncer_gitlab_user_conns
- monitoring_chef_client_enabled
- monitoring_chef_client_last_run
- monitoring_chef_client_error
- monitoring_snapshot_last_run
- monitoring_user_tables_writes
- monitoring_user_tables_reads
- monitoring_gitlab_maintenance_mode
Change Details
- Services Impacted - ~ServicePatroni ~ServicePatroniSec
- Change Technician - @jjsisson
- Change Reviewer - @zbraddock
- Scheduled Date and Time (UTC in format YYYY-MM-DD HH:MM) - 2025-04-03 22:00
- Time tracking - 3h
- Downtime Component - 0h
Set Maintenance Mode in GitLab
If your change involves scheduled maintenance, add a step to set and unset maintenance mode per our runbooks. This will make sure SLA calculations adjust for the maintenance period.
Detailed steps for the change
Change Steps - steps to take to execute the change
Estimated Time to Complete (mins) - 120
- Verify with `@sre-oncall` and `@release-managers` that there are no blockers in gprd currently
- Set label change::in-progress: `/label ~change::in-progress`
4.1 k8s-workload node rollout - sidekiq and cny
Note: this is a combined phase covering the previous phases 4.2 to 4.4.
- Switchover gprd configuration to new pgbouncer-sec
  - merge k8s-workload MR
- Verify connectivity, monitor pgbouncer connections
- Observe logs and prometheus for errors
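As a sketch of the "verify connectivity" step: on a pgbouncer host the admin console (reachable via `pgb-console` in GitLab's tooling; the exact command may differ per runbook) exposes per-pool client counts with `SHOW POOLS;`. A pool with a non-zero `cl_waiting` count is a sign of pool exhaustion. The filter below is an illustrative helper (not an existing runbook script) that flags such pools, given pipe-separated `database|cl_active|cl_waiting` lines:

```shell
#!/usr/bin/env sh
# Illustrative helper, not a runbook script. Reads pipe-separated
# "database|cl_active|cl_waiting" lines on stdin and warns about any
# pool that has clients waiting for a server connection.
flag_waiting_pools() {
  awk -F'|' '$3 > 0 { printf "WARN pool %s has %s waiting clients\n", $1, $3 }'
}

# Example: the sec pool has 3 waiting clients, so it is flagged.
printf 'gitlabhq_production|120|0\nsec|40|3\n' | flag_waiting_pools
```

In practice the input would come from the admin console output (e.g. `SHOW POOLS;` piped through `awk` to pick the relevant columns) rather than a hand-written sample.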
4.2 k8s-workload node rollout - web/global
Note: this is a combined phase covering the previous phases 4.2 to 4.4.
- Switchover gprd configuration to new pgbouncer-sec
  - merge k8s-workload MR
- Verify connectivity, monitor pgbouncer connections
- Observe logs and prometheus for errors
- Promote chef database connection configuration to gprd-base, setting `sec` to the new `patroni-main-v16` DB. Writes will continue to go through the PGBouncer host to `main`, and reads are now sent to `main` replicas.
  - merge chef MR
4.2.1 Observable logs
All logs will split `db_*_count` metrics into separate buckets describing each used connection:
- Ensure `json.db_sec_count` logs are present
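A quick way to confirm the field is present is to check a structured log line for the `db_sec_count` key. The sample line below is illustrative; in practice you would grep a Kibana export or the rails log on a console node:

```shell
#!/usr/bin/env sh
# Illustrative check, not a runbook script: the sample JSON log line is
# hand-written; real lines come from the rails structured logs in Kibana.
sample_log='{"json":{"db_main_count":12,"db_sec_count":3}}'

# Flag whether the split db_sec_count bucket is being emitted.
if echo "$sample_log" | grep -q '"db_sec_count"'; then
  echo "db_sec_count present"
fi
```

Absence of the key after the switchover would suggest the application is not using the `sec` connection as expected.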
4.2.2 Observable prometheus metrics
- Primary connection usage by state
- `pg_stat_activity_count`
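As a sketch, connection usage by state can be eyeballed with a query along these lines; the label names here are assumptions and may differ from the actual gprd recording rules:

```
# Hypothetical PromQL: active backends on the main cluster, grouped by state.
# The type/environment labels are illustrative, not the exact gprd label set.
sum by (state) (pg_stat_activity_count{type="patroni", environment="gprd"})
```

A sustained rise in `active` or `idle in transaction` backends after the swap would warrant a closer look before proceeding.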
- Set label change::complete: `/label ~change::complete`
Rollback
Rollback steps - steps to be taken in the event of a need to rollback this change
Estimated Time to Complete (mins) - 90
- Switchover gprd deploy node config back
  - revert chef-repo MR
- Switchover gprd configuration back to pgbouncer-sec.int.gprd.gitlab.net
  - revert k8s-workload MR
  - revert k8s-workload MR
- Set label change::aborted: `/label ~change::aborted`
Monitoring
Key metrics to observe
- Metric: Patroni Overview
  - Location: https://dashboards.gitlab.net/d/patroni-main/patroni3a-overview?orgId=1
  - What changes to these metrics should prompt a rollback: observations of sustained Apdex violations
- Metric: patroni-sec Service Error Ratio
  - Location: Patroni-sec dashboard
  - What changes to this metric should prompt a rollback: sustained increased error rate
- Metric: Connection Saturation per Pool
  - Location: Patroni-sec dashboard
  - What changes to this metric should prompt a rollback: connection saturation approaching 1
- Metric: pgbouncer-sec Service Error Ratio
  - Location: Pgbouncer-sec dashboard
  - What changes to this metric should prompt a rollback: sustained pool_size exhaustion
- Metric: pgbouncer CPU saturation
  - Location: pgbouncer query
  - What changes to this metric should prompt a rollback: sustained CPU saturation
Change Reviewer checklist
- Check if the following applies:
  - The scheduled day and time of execution of the change is appropriate.
  - The change plan is technically accurate.
  - The change plan includes estimated timing values based on previous testing.
  - The change plan includes a viable rollback plan.
  - The specified metrics/monitoring dashboards provide sufficient visibility for the change.
- Check if the following applies:
  - The complexity of the plan is appropriate for the corresponding risk of the change (i.e. the plan contains clear details).
  - The change plan includes success measures for all steps/milestones during the execution.
  - The change adequately minimizes risk within the environment/service.
  - The performance implications of executing the change are well-understood and documented.
  - The specified metrics/monitoring dashboards provide sufficient visibility for the change.
    - If not, is it possible (or necessary) to make changes to observability platforms for added visibility?
  - The change has a primary and secondary SRE with knowledge of the details available during the change window.
  - The change window has been agreed with Release Managers in advance of the change. If the change is planned for APAC hours, this issue has an agreed pre-change approval.
  - The labels blocks deployments and/or blocks feature-flags are applied as necessary.
Change Technician checklist
- Check if all items below are complete:
  - The change plan is technically accurate.
  - This Change Issue is linked to the appropriate Issue and/or Epic.
  - Change has been tested in staging and results noted in a comment on this issue.
  - A dry-run has been conducted and results noted in a comment on this issue.
  - The change execution window respects the Production Change Lock periods.
  - For C1 and C2 change issues, the change event is added to the GitLab Production calendar.
  - For C1 and C2 change issues, the SRE on-call has been informed prior to the change being rolled out. (In the #production channel, mention `@sre-oncall` and this issue and await their acknowledgement.)
  - For C1 and C2 change issues, the SRE on-call provided approval with the eoc_approved label on the issue.
  - For C1 and C2 change issues, the Infrastructure Manager provided approval with the manager_approved label on the issue. Mention `@gitlab-org/saas-platforms/inframanagers` in this issue to request approval and provide visibility to all infrastructure managers.
  - Release managers have been informed prior to any C1, C2, or blocks deployments change being rolled out. (In the #production channel, mention `@release-managers` and this issue and await their acknowledgment.)
  - There are currently no active incidents that are severity 1 or severity 2.
  - If the change involves doing maintenance on a database host, an appropriate silence targeting the host(s) should be added for the duration of the change.