[GPRD][Sec Decomp] - Phase 4 Rollout - Separate read and write connection for Sec from Main (shared primary host)

Production Change

Change Summary

Summary: In this phase we will gradually enable the sec connection across promoted environments, allowing GitLab Rails to connect to a dedicated pgbouncer-sec deployment.

Once traffic is enabled through pgbouncer-sec, we will update our pgbouncer configuration to begin sending read traffic to patroni-sec instead of patroni-main. This will allow us to validate read queries against the standby replica before rolling reads back to patroni-main.

Background: In Phase 4 of decomposition we change the Rails application to start using a new connection for read-write queries. This new read-write connection will point to a new set of PGBouncer hosts (called "PGBouncer Sec"). These PGBouncer hosts will still point writes to the main Patroni cluster (as we are not yet ready to fully decompose the Sec database). Once both read and write traffic is configured through pgbouncer-sec, we will begin serving reads from the patroni-sec replicas (synced via cascading replication) to validate that queries are working as expected.

This step effectively gets us to a point where the application believes it is reading from and writing to two independent databases. In reality they are still the same database, which reduces risk considerably: there is no possibility of split-brain, and we can easily revert if the application runs into bugs with two separate connections.
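
For illustration, the split described above can be inspected from a Rails console by listing the connection configurations the application loaded. This is a minimal sketch using the standard Rails multi-database configuration API; the exact keys Chef renders into database.yml are not shown in this issue, and the expected values are an assumption for this phase (main and sec as separate named connections behind different PgBouncer endpoints, resolving to the same primary).

    # Illustrative sketch: list the named database configurations the app sees.
    # In this phase `main` and `sec` should appear as separate connections that
    # ultimately resolve to the same primary (via pgbouncer-main / pgbouncer-sec).
    ActiveRecord::Base.configurations.configs_for(env_name: Rails.env).each do |config|
      puts format("%-12s host=%s database=%s",
                  config.name,
                  config.configuration_hash[:host],
                  config.configuration_hash[:database])
    end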

Prior to this change we are in Phase 2 (NOTE: Phase 3 has been skipped as a no-op), where GitLab is not utilizing the new Sec Patroni cluster at all.

Accessing the rails and database consoles

Production

  • rails: ssh $USER-rails@console-01-sv-gprd.c.gitlab-production.internal
  • main db replica: ssh $USER-db@console-01-sv-gprd.c.gitlab-production.internal
  • main db primary: ssh $USER-db-primary@console-01-sv-gprd.c.gitlab-production.internal
  • ci db replica: ssh $USER-db-ci@console-01-sv-gprd.c.gitlab-production.internal
  • ci db primary: ssh $USER-db-ci-primary@console-01-sv-gprd.c.gitlab-production.internal
  • main db psql: ssh -t patroni-main-v14-02-db-gprd.c.gitlab-production.internal sudo gitlab-psql
  • ci db psql: ssh -t patroni-ci-v14-02-db-gprd.c.gitlab-production.internal sudo gitlab-psql
  • registry db psql: ssh -t patroni-registry-v14-01-db-gprd.c.gitlab-production.internal sudo gitlab-psql

Dashboards and debugging

These dashboards might be useful during the rollout:

Production

Destination db: sec

Source db: main

Change Details

  1. Services Impacted - Service::Patroni
  2. Change Technician - @jjsisson
  3. Change Reviewer - @bshah11
  4. Scheduled Date and Time (UTC in format YYYY-MM-DD HH:MM) - 2025-04-02 20:00
  5. Time tracking - 6h
  6. Downtime Component - 0h

Detailed steps for the change

Change Steps - steps to take to execute the change

Estimated Time to Complete (mins) - 360

  • Verify with @sre-oncall and @release-managers that there are no blockers in gprd currently
  • Set label change::in-progress: /label ~change::in-progress
  • Connect to read-only and read-write consoles
    • ssh $USERNAME-rails@console-ro-01-sv-gprd.c.gitlab-production.internal
    • ssh $USERNAME-rails@console-01-sv-gprd.c.gitlab-production.internal
  • Ensure we've already merged https://gitlab.com/gitlab-com/gl-infra/chef-repo/-/merge_requests/5772, that Chef has run, and that this database is available on the patroni-sec hosts

4.1 Console node rollout

  • Connect to read-only console
    • ssh $USERNAME-rails@console-ro-01-sv-gprd.c.gitlab-production.internal
  • Switch over the gprd Rails console (Teleport) Chef connection configuration to the new patroni-sec-v16 DB. Writes will go through the PGBouncer host to main, and reads to the sec replicas.
  • Run sudo chef-client to sync the changes
  • Restart the SSH session before re-running the validation commands

4.1.1 Validation Commands

  1. Simple checks that the application sees the proper configuration. Expected: the sec load balancer, and sec_replica for the read connection

    [1] pry(main)> ApplicationRecord.load_balancer.name
    => :main
    [2] pry(main)> Gitlab::Database::SecApplicationRecord.load_balancer.name
    => :sec
    [3] pry(main)> ApplicationRecord.connection.pool.db_config.name
    => "main"
    [4] pry(main)> Gitlab::Database::SecApplicationRecord.connection.pool.db_config.name
    => "sec"
    [5] pry(main)> Gitlab::Database::SecApplicationRecord.load_balancer.read { |connection| connection.pool.db_config.name }
    => "sec_replica"
    [6] pry(main)> Gitlab::Database::SecApplicationRecord.load_balancer.read_write { |connection| connection.pool.db_config.name }
    => "sec"
  2. Simple check that the application can still talk to the sec_replica database. Expected: db_config_name:sec_replica

    [10] pry(main)> ActiveRecord::Base.logger = Logger.new(STDOUT)
    [11] pry(main)> Gitlab::Database::SecApplicationRecord.load_balancer.read { |connection| connection.select_all("SELECT COUNT(*) FROM vulnerability_user_mentions") }
      (20.3ms)  SELECT COUNT(*) FROM vulnerability_user_mentions /*application:console,db_config_name:sec_replica,line:/data/cache/bundle-2.7.4/ruby/2.7.0/gems/marginalia-1.10.0/lib/marginalia/comment.rb:25:in `block in construct_comment'*/
    => #<ActiveRecord::Result:0x00007fcfc79ccdb0 @column_types={}, @columns=["count"], @hash_rows=nil, @rows=[[1]]>
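
As an optional supplement to the checks above (not part of the original validation list), the shared-primary property itself can be spot-checked by asking both write paths which backend server they reach; while pgbouncer-main and pgbouncer-sec still point at the same Patroni primary, both should return the same address. This is an illustrative sketch using plain Postgres functions rather than any GitLab helper, and it assumes the backend connection is over TCP (inet_server_addr() returns NULL for Unix-socket connections).

    # Illustrative shared-primary check: both read-write paths should land on
    # the same Postgres primary while main and sec are still one cluster.
    main_primary = ApplicationRecord.load_balancer.read_write do |connection|
      connection.select_value("SELECT inet_server_addr()")
    end
    sec_primary = Gitlab::Database::SecApplicationRecord.load_balancer.read_write do |connection|
      connection.select_value("SELECT inet_server_addr()")
    end
    puts "main primary: #{main_primary}, sec primary: #{sec_primary}"
    # Expected while the primary host is shared: both values are identical.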

4.2 Web node canary rollout

4.2.1 Observable logs

All logs will split the db_*_count metrics into separate buckets, one for each connection used.

4.2.2 Observable Prometheus metrics

4.3 Sidekiq node rollout

  • Switch over the gprd Sidekiq configuration to the new pgbouncer-sec
  • Verify connectivity and monitor PgBouncer connections
  • Observe logs and Prometheus for errors
  • Execute Database::MonitorLockedTablesWorker.perform_async on the write console to run the table-locker code. Check the Elasticsearch logs for confirmation of execution.
  • Monitor for signs of an increased error rate and/or attempt to interact with production via the UI. If you are able to log in or create any records associated with the main DB, then there should be no locking concerns (see the illustrative probe after this list).
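
The sketch below illustrates the last two bullets from a write console: enqueue the table-locker worker, then confirm that no write-lock triggers exist on the shared primary. The trigger name pattern is an assumption about the lock-writes tooling and is not confirmed by this issue; adjust it if the deployed tooling uses a different naming scheme.

    # Illustrative only (run on the write console). Enqueue the table locker,
    # then verify no write-lock triggers were installed on the shared primary.
    Database::MonitorLockedTablesWorker.perform_async

    # Assumed trigger name pattern for the decomposition lock-writes tooling.
    locked_trigger_count = ApplicationRecord.connection.select_value(<<~SQL)
      SELECT COUNT(*) FROM pg_trigger
      WHERE tgname LIKE 'gitlab_schema_write_trigger_for_%'
    SQL
    puts "write-lock triggers on the shared primary: #{locked_trigger_count}"
    # Expected while main and sec share a primary: 0 (no tables locked for writes).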

4.3.1 Observable logs

All logs will split the db_*_count metrics into separate buckets, one for each connection used.

4.3.2 Observable Prometheus metrics

4.4 Web node rollout

4.4.1 Observable logs

All logs will split the db_*_count metrics into separate buckets, one for each connection used.

4.4.2 Observable Prometheus metrics

4.5 Verify read traffic to patroni-sec

  1. SSH to the read-write console

    • ssh $USERNAME-rails@console-01-sv-gprd.c.gitlab-production.internal
  2. Run sudo chef-client to sync the changes

  3. Simple checks that the application sees the proper configuration (on the read-write console). Expected: the sec load balancer, and sec_replica for the read connection

    [1] pry(main)> ApplicationRecord.load_balancer.name
    => :main
    [2] pry(main)> Gitlab::Database::SecApplicationRecord.load_balancer.name
    => :sec
    [3] pry(main)> ApplicationRecord.connection.pool.db_config.name
    => "main"
    [4] pry(main)> Gitlab::Database::SecApplicationRecord.connection.pool.db_config.name
    => "sec"
    [5] pry(main)> Gitlab::Database::SecApplicationRecord.load_balancer.read { |connection| connection.pool.db_config.name }
    => "sec_replica"
    [6] pry(main)> Gitlab::Database::SecApplicationRecord.load_balancer.read_write { |connection| connection.pool.db_config.name }
    => "sec"
  • Notify @sre-oncall and @release-managers that the CR is complete
  • Set label change::complete: /label ~change::complete

Rollback

Rollback steps - steps to be taken in the event of a need to roll back this change

Estimated Time to Complete (mins) - 90

  • Switch over the gprd-base and deploy nodes
  • Switch over the gprd web configuration back to pgbouncer.int.gprd.gitlab.net
  • Switch over the gprd sidekiq configuration back to pgbouncer.int.gprd.gitlab.net
  • Switch over the gprd cny configuration back to pgbouncer.int.gprd.gitlab.net
  • Switch over the gprd Rails console (Teleport) connection configuration back to pgbouncer.int.gprd.gitlab.net
  • Set label change::aborted: /label ~change::aborted

Monitoring

Key metrics to observe

Change Reviewer checklist

C4 C3 C2 C1:

  • Check if the following applies:
    • The scheduled day and time of execution of the change is appropriate.
    • The change plan is technically accurate.
    • The change plan includes estimated timing values based on previous testing.
    • The change plan includes a viable rollback plan.
    • The specified metrics/monitoring dashboards provide sufficient visibility for the change.

C2 C1:

  • Check if the following applies:
    • The complexity of the plan is appropriate for the corresponding risk of the change. (i.e. the plan contains clear details).
    • The change plan includes success measures for all steps/milestones during the execution.
    • The change adequately minimizes risk within the environment/service.
    • The performance implications of executing the change are well-understood and documented.
    • The specified metrics/monitoring dashboards provide sufficient visibility for the change.
      • If not, is it possible (or necessary) to make changes to observability platforms for added visibility?
    • The change has a primary and secondary SRE with knowledge of the details available during the change window.
    • The change window has been agreed with Release Managers in advance of the change. If the change is planned for APAC hours, this issue has an agreed pre-change approval.
    • The labels blocks deployments and/or blocks feature-flags are applied as necessary.

Change Technician checklist

  • Check if all items below are complete:
    • The change plan is technically accurate.
    • This Change Issue is linked to the appropriate Issue and/or Epic
    • Change has been tested in staging and results noted in a comment on this issue.
    • A dry-run has been conducted and results noted in a comment on this issue.
    • The change execution window respects the Production Change Lock periods.
    • For C1 and C2 change issues, the change event is added to the GitLab Production calendar.
    • For C1 and C2 change issues, the SRE on-call has been informed prior to change being rolled out. (In #production channel, mention @sre-oncall and this issue and await their acknowledgement.)
    • For C1 and C2 change issues, the SRE on-call provided approval with the eoc_approved label on the issue.
    • For C1 and C2 change issues, the Infrastructure Manager provided approval with the manager_approved label on the issue. Mention @gitlab-org/saas-platforms/inframanagers in this issue to request approval and provide visibility to all infrastructure managers.
    • Release managers have been informed prior to any C1, C2, or blocks deployments change being rolled out. (In #production channel, mention @release-managers and this issue and await their acknowledgment.)
    • There are currently no active incidents that are severity::1 or severity::2
    • If the change involves doing maintenance on a database host, an appropriate silence targeting the host(s) should be added for the duration of the change.