# CustomersDot migration to GCP (2022-02-05)

Production Change

## Change Summary
This is a migration of the CustomersDot application from Azure to GCP. The main failover issue is https://gitlab.com/gitlab-com/gl-infra/customersdot-ansible-poc/-/issues/82
We have already migrated staging successfully, and the production application is already up and running in GCP:
- Application is running with `customers2.gitlab.com` as the DNS name.
- Deployments are tested and working (we deploy at the same time as Azure production).
- We tested the DB backup/restore as part of https://gitlab.com/gitlab-com/gl-infra/customersdot-ansible-poc/-/issues/75, and were able to back up and restore the Azure production DB into GCP within ~10 minutes.
- We have a dry-run migration for testing - https://gitlab.com/gitlab-com/gl-infra/customersdot-ansible-poc/-/issues/86
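The ~10-minute backup/restore window above is easiest to defend when each rehearsal is actually timed. A minimal sketch of such a timing wrapper follows; the commented-out `pg_dump`/`pg_restore` invocations, host names, and database name are illustrative placeholders, not the actual commands from the migration issues.

```python
import subprocess
import time


def timed_run(cmd):
    """Run a command and return (elapsed_seconds, returncode)."""
    start = time.monotonic()
    returncode = subprocess.run(cmd).returncode
    return time.monotonic() - start, returncode


# Hypothetical usage -- hosts, database, and dump file are placeholders:
# elapsed, rc = timed_run(["pg_dump", "-h", "AZURE_HOST", "-Fc",
#                          "-f", "customers.dump", "CUSTOMERS_DB"])
# elapsed, rc = timed_run(["pg_restore", "-h", "GCP_HOST",
#                          "-d", "CUSTOMERS_DB", "customers.dump"])
```

Logging the elapsed time from each dry run gives a measured baseline for the downtime estimate rather than a guess.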
## Change Details
- Services Impacted - CustomersDot
- Change Technician - @ebaque @vitallium @steveazz
- Change Reviewer - @steveazz
- Time tracking - 120 minutes
- Downtime Component - 120 minutes
## Detailed steps for the change

### Pre-Change Steps - steps to be completed before execution of the change
Estimated Time to Complete (mins) - 5
- Set label ~"change::in-progress" on this issue
- Provisioning blocker issue: https://gitlab.com/gitlab-com/gl-infra/customersdot-ansible-poc/-/issues/85
- Deployment blocker issue: gitlab-org/customers-gitlab-com#3903 (closed)
- Pre-flight checklist issue: https://gitlab.com/gitlab-com/gl-infra/customersdot-ansible-poc/-/issues/83
### Change Steps - steps to take to execute the change
Estimated Time to Complete (mins) - 70
### Post-Change Steps - steps to take to verify the change
Estimated Time to Complete (mins) - 15
- Post-failover tasks: https://gitlab.com/gitlab-com/gl-infra/customersdot-ansible-poc/-/issues/82#post-failover
## Rollback

### Rollback steps - steps to be taken in the event of a need to roll back this change
Estimated Time to Complete (mins) - 30
- Failback checklist issue: https://gitlab.com/gitlab-com/gl-infra/customersdot-ansible-poc/-/issues/84
## Monitoring

### Key metrics to observe
We use a self-hosted instance of the Cabot platform as a web interface to monitor our services, including the health checks listed below. Changes to health-check statuses are reported to the Slack channel #s_fulfillment_status.
The primary indicator to initiate the rollback process is an increased error rate in Sentry over the last hour. We don't define any particular threshold, except for special types of errors such as the invalid Zuora token error, which is defined as a critical metric in this document.
- Metric: IronBank (Zuora) Sentry error rate
  - Location: https://sentry.gitlab.net/gitlab/customersgitlabcom/dashboard/?statsPeriod=1
  - What changes to this metric should prompt a rollback: more than 5 errors of type `IronBank::Authentications::Token::InvalidAccessToken` in a row
- Metric: GitLab.com (API) Sentry error rate
  - Location: https://sentry.gitlab.net/gitlab/customersgitlabcom/dashboard/?statsPeriod=1
  - What changes to this metric should prompt a rollback: more than 5 HTTP 401 errors in a row
- Metric: Cabot
  - Location: http://jlo-gitlab.bluegod.net:9092/service/2/
  - What changes to this metric should prompt a rollback: any HTTP status code other than 200 reported for 5 minutes
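The two rollback triggers above (a run of more than five identical Sentry errors, and five straight minutes of non-200 health checks) can be expressed as a small decision helper. This is a sketch under assumptions: the function names and input shapes (a list of Sentry error types in arrival order, and one health-check status code per minute) are invented for illustration and are not part of any existing tooling.

```python
from itertools import groupby

ZUORA_TOKEN_ERROR = "IronBank::Authentications::Token::InvalidAccessToken"


def max_consecutive(events, error_type):
    """Length of the longest run of consecutive events equal to error_type."""
    runs = (len(list(group)) for key, group in groupby(events) if key == error_type)
    return max(runs, default=0)


def should_roll_back(sentry_events, health_statuses,
                     error_type=ZUORA_TOKEN_ERROR,
                     error_threshold=5, unhealthy_minutes=5):
    """Apply both rollback triggers from the monitoring section."""
    too_many_errors = max_consecutive(sentry_events, error_type) > error_threshold
    # health_statuses holds one HTTP status code per minute, most recent last
    recent = health_statuses[-unhealthy_minutes:]
    unhealthy = len(recent) == unhealthy_minutes and all(code != 200 for code in recent)
    return too_many_errors or unhealthy
```

Note that "more than 5 errors in a row" means a run of six or more, which is why the comparison is strict.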
## Summary of infrastructure changes

- Does this change introduce new compute instances?
- Does this change re-size any existing compute instances?
- Does this change introduce any additional usage of tooling like Elastic Search, CDNs, etc.?
We are migrating the Azure instance to GCP, with new hardware and a new OS version:
| Name   | Old VM | New VM |
|--------|--------|--------|
| CPU(s) | 2      | 4      |
| Memory | 8 GB   | 16 GB  |
| HDD    | 30 GB  | 100 GB |
- `Puma` is configured as the web server by default. We have been running it on staging for a long time and it showed good results, especially improved performance. We used `unicorn` before.
- The PostgreSQL version was bumped from 9.5 to 10. We have been running it on staging for a long time too.
- The Ubuntu version was upgraded from `16.04` to `20.04`.
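A post-migration verification of the version bumps above can be scripted. This sketch only parses version strings; the helper names are illustrative, and the expected values are taken from the list above. How the raw output is collected (e.g. `psql --version` or `lsb_release -d` over SSH) is left out.

```python
import re

# Expected versions after the migration (from the change summary above)
EXPECTED_PG_MAJOR = 10
EXPECTED_UBUNTU = "20.04"


def pg_major(version_output):
    """Extract the PostgreSQL major version from e.g. 'psql (PostgreSQL) 10.19'."""
    match = re.search(r"(\d+)(?:\.\d+)*\s*$", version_output.strip())
    return int(match.group(1)) if match else None


def ubuntu_release(description):
    """Extract the release from e.g. 'Description: Ubuntu 20.04.3 LTS'."""
    match = re.search(r"(\d+\.\d+)", description)
    return match.group(1) if match else None
```

Comparing the parsed values against the expected constants gives a quick pass/fail signal during the post-change verification window.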
## Change Reviewer checklist
- The scheduled day and time of execution of the change is appropriate.
- The change plan is technically accurate.
- The change plan includes estimated timing values based on previous testing.
- The change plan includes a viable rollback plan.
- The specified metrics/monitoring dashboards provide sufficient visibility for the change.
- The complexity of the plan is appropriate for the corresponding risk of the change (i.e. the plan contains clear details).
- The change plan includes success measures for all steps/milestones during the execution.
- The change adequately minimizes risk within the environment/service.
- The performance implications of executing the change are well-understood and documented.
- The specified metrics/monitoring dashboards provide sufficient visibility for the change. If not, is it possible (or necessary) to make changes to observability platforms for added visibility?
- The change has a primary and secondary SRE with knowledge of the details available during the change window.
## Change Technician checklist
- This issue has a criticality label (e.g. C1, C2, C3, C4) and a change-type label (e.g. ~"change::unscheduled", ~"change::scheduled") based on the Change Management Criticalities.
- This issue has the change technician as the assignee.
- Pre-Change, Change, Post-Change, and Rollback steps have been filled out and reviewed.
- This Change Issue is linked to the appropriate Issue and/or Epic.
- Necessary approvals have been completed based on the Change Management Workflow.
- Change has been tested in staging and results noted in a comment on this issue.
- A dry-run has been conducted and results noted in a comment on this issue.
- SRE on-call has been informed prior to change being rolled out. (In the #production channel, mention `@sre-oncall` and this issue and await their acknowledgement.)
- Release managers have been informed (if needed; cases include DB changes) prior to change being rolled out. (In the #production channel, mention `@release-managers` and this issue and await their acknowledgment.)
- There are currently no active incidents.