2022-06-14: Staging CI Decomposition with rollback to CI1
Production Change
Change Summary
Today gstg runs in a single "write" database mode: all read/write queries are executed against the main cluster, with CI replicas used for reads.
In this change, we will run gstg in decomposed mode (i.e. dual read/write databases, main/CI) for a short period. This will be done by performing the following steps:
- Promote an existing CI cluster to be writable (stopping cascading replication)
- Run checks and install triggers to prevent misplaced writes
- Run QA tests to validate application working properly
- Execute the rollback procedure to send CI writes to main and CI reads to a separate CI1 cluster
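For the "install triggers to prevent misplaced writes" step, the mechanism can be sketched as a statement-level trigger that raises an error on any write to a CI-owned table on the wrong cluster. This is an illustrative sketch only; the host, database, table, and function names are assumptions, not the actual db-migration playbook code:

```shell
# Illustrative only -- hostname, database, and object names are assumed.
# Installs a statement-level trigger that rejects writes to a CI-owned
# table (ci_builds used as an example) on this cluster.
psql -h patroni-main-primary.gstg -d gitlabhq_production <<'SQL'
CREATE OR REPLACE FUNCTION prevent_ci_writes()
RETURNS trigger AS $$
BEGIN
  RAISE EXCEPTION 'writes to % are locked on this database', TG_TABLE_NAME;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER ci_builds_prevent_write
  BEFORE INSERT OR UPDATE OR DELETE OR TRUNCATE ON ci_builds
  FOR EACH STATEMENT
  EXECUTE FUNCTION prevent_ci_writes();
SQL
```

A statement-level trigger is used so that `TRUNCATE` (which has no per-row form) is also caught; the rollback step would drop these triggers before sending CI writes back to main.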
The purpose of this change is to exercise the promotion and rollback procedures on gstg. At the end of this exercise, we'll have write traffic going to main and CI reads going to the CI1 cluster. Once the CI cluster is rebuilt, we will route CI reads over to the CI cluster again.
The rebuild of the CI cluster and the routing of CI read-only traffic back to it will happen in the background (i.e. we can bring staging back up while this takes place). It is a zero-downtime operation.
The full change plan is described at https://ops.gitlab.net/gitlab-com/gl-infra/db-migration/-/issues/38
Change Details
- Services Impacted - Web, API, Websockets, Git, KAS, Gitaly, GitLab Shell, CI Runners, Patroni, PgBouncer, Postgres
- Change Technician - @gsgl
- Change Reviewer - @rhenchen.gitlab
- Time tracking - 3-4 hours
- Downtime Component - Complete Downtime of about 2 hours
Detailed steps for the change
While this change does not affect GitLab.com, we are following the production change procedure. Detailed change instructions are available at: https://ops.gitlab.net/gitlab-com/gl-infra/db-migration/-/issues/38
Change Reviewer checklist
- Check if the following applies:
- The scheduled day and time of execution of the change is appropriate.
- The change plan is technically accurate.
- The change plan includes estimated timing values based on previous testing.
- The change plan includes a viable rollback plan.
- The specified metrics/monitoring dashboards provide sufficient visibility for the change.
- Check if the following applies:
- The complexity of the plan is appropriate for the corresponding risk of the change (i.e. the plan contains clear details).
- The change plan includes success measures for all steps/milestones during the execution.
- The change adequately minimizes risk within the environment/service.
- The performance implications of executing the change are well-understood and documented.
- The specified metrics/monitoring dashboards provide sufficient visibility for the change.
- If not, is it possible (or necessary) to make changes to observability platforms for added visibility?
- The change has a primary and secondary SRE with knowledge of the details available during the change window.
- The labels "blocks deployments" and/or "blocks feature-flags" are applied as necessary
Change Technician checklist
- Check if all items below are complete:
- The change plan is technically accurate.
- This Change Issue is linked to the appropriate Issue and/or Epic
- Change has been tested in staging and results noted in a comment on this issue.
- A dry-run has been conducted and results noted in a comment on this issue.
- For C1 and C2 change issues, the SRE on-call has been informed prior to the change being rolled out. (In the #production channel, mention @sre-oncall and this issue and await their acknowledgement.)
- Release managers have been informed (if needed; cases include DB changes) prior to the change being rolled out. (In the #production channel, mention @release-managers and this issue and await their acknowledgment.)
- There are currently no active incidents that are severity1 or severity2
- If the change involves doing maintenance on a database host, an appropriate silence targeting the host(s) should be added for the duration of the change.
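A silence for the database hosts can be added with `amtool` along these lines; the Alertmanager URL and the `fqdn` matcher are assumptions for illustration, not the actual gstg values:

```shell
# Illustrative only -- URL and matcher are assumed, adjust for the real
# environment. Silences alerts for the CI Patroni hosts for the change window.
amtool silence add \
  --alertmanager.url=https://alerts.example.gitlab.net \
  --author="$(whoami)" \
  --comment="gstg CI decomposition: https://ops.gitlab.net/gitlab-com/gl-infra/db-migration/-/issues/38" \
  --duration=4h \
  'fqdn=~"patroni-ci-.*"'
```

The silence should be removed (or allowed to expire) once the change is complete, so that alerting on the hosts resumes.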