# 2025-03-17 - FY26 Q1 HAProxy/Traffic Routing DR Gameday

Production Change - Criticality 2 (C2)

## Change Summary

This production issue is to be used for gamedays as well as for recovery in case of a zonal outage. It outlines the steps to follow when testing traffic shifts caused by zonal outages. Corrective actions from this testing should help us build out the steps to take during a real outage.

Issue: production-engineering#26182 (closed)
## Gameday execution roles and details

| Role | Assignee |
|---|---|
| Change Technician | @pguinoiseau |
| Change Reviewer | @ayeung |
- Services Impacted - Service: HAProxy, Environment: gstg
- Scheduled Date and Time (UTC in format YYYY-MM-DD HH:MM) - 2025-03-17 02:00
- Time tracking - 90 minutes
- Downtime Component - 30 minutes: restricting traffic to the two remaining zones due to a zonal outage in us-east1-b.
## [For Gamedays only] Preparation Tasks

### One week before the gameday

- [ ] Add an event to the GitLab Production calendar.
- [ ] Make an announcement on the #f_gamedays Slack channel with this template:

  > Next week on [DATE & TIME] we will be executing a Traffic Routing game day. The process will involve moving traffic away from a single zone in `gstg` to test our disaster recovery capabilities and measure if we are still within our RTO & RPO targets set by the [DR working group](https://handbook.gitlab.com/handbook/company/working-groups/disaster-recovery/) for GitLab.com. See <https://gitlab.com/gitlab-com/gl-infra/production/-/issues/17274>

  Then cross-post the message to the following channels:
  - #g_production_engineering
  - #test-platform
  - #staging (if applicable)
  - #production (if applicable)
- [ ] Mention the release managers on the Slack announcement by mentioning @release-managers and await their approval.
- [ ] Request approval from the Infrastructure manager, wait for their approval, and confirm it with the manager_approved label.
### Just before the gameday begins

- [ ] Before commencing the change, notify the EOC and release managers on Slack with the following template and wait for their acknowledgement and approval:

  > @release-managers or @sre-oncall [LINK_TO_THIS_CR] is scheduled for execution today at [TIME]. We will be diverting traffic away from a single zone ([NAME_OF_ZONE]) in `gstg` to test our disaster recovery capabilities and measure if we are still within our RTO & RPO targets. Kindly review and approve the CR.
## Detailed steps for the change

### Change Steps - steps to take to execute the change

#### Execution
- [ ] If you are conducting a practice (Gameday) run of this, consider starting a recording of the process now.
- [ ] Note the start time in UTC in a comment to record this process duration.
- [ ] Set label ~change::in-progress: `/label ~change::in-progress`
- [ ] Reconfigure the regional cluster to exclude the affected zone by setting `regional_cluster_zones` in Terraform to a list of zones that are not impacted (a hedged zone listing is sketched at the end of this step):
  - [ ] Create the MR to update `regional_cluster_zones`. While emulating a zonal outage, make sure to create replacement nodes. Refer to this example MR.
  - [ ] Link to the MR: https://ops.gitlab.net/gitlab-com/gl-infra/config-mgmt/-/merge_requests/10556 https://ops.gitlab.net/gitlab-com/gl-infra/config-mgmt/-/merge_requests/10559

    ```shell
    # Command to check HAProxy nodes in a particular zone
    knife search node "name:haproxy* AND chef_environment:gstg AND zone:*<impacted zone>"
    ```
  - [ ] Get the MR approved
  - [ ] Merge the MR
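  As a complementary check (not part of the official runbook), you can also list the HAProxy instances per zone in GCP after the Terraform change has applied. This is a hedged sketch: `<GSTG_PROJECT>` is a placeholder for the actual gstg GCP project.

  ```shell
  # Hedged sketch: list HAProxy instances in the impacted zone and in a remaining zone
  # to see where the replacement nodes were created. <GSTG_PROJECT> is a placeholder.
  gcloud compute instances list --project=<GSTG_PROJECT> \
    --zones=<impacted zone> --filter='name~haproxy'
  gcloud compute instances list --project=<GSTG_PROJECT> \
    --zones=<remaining zone> --filter='name~haproxy'
  ```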
- [ ] Reconfigure the HAProxy node pools to include the new nodes created in the previous step.
  - [ ] ⚠️ Make sure the MR to update the `regional_cluster_zones` in the previous step has been merged, and that the new nodes have been provisioned and completed bootstrapping. This can take up to 30 minutes. You can quickly check this by SSHing into the new nodes and checking the bootstrap logs at `/var/tmp/bootstrap-*.log` (a hedged sketch of this check follows below). ⚠️
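    A rough illustration of that log check, reusing knife ssh; the node search is an assumption and should be narrowed to target only the actual replacement nodes:

    ```shell
    # Hedged sketch: tail the bootstrap log on the HAProxy nodes to confirm
    # bootstrapping has finished. Adjust the search to target only the new nodes.
    knife ssh 'name:haproxy* AND chef_environment:gstg' \
      'sudo tail -n 20 /var/tmp/bootstrap-*.log'
    ```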
  - [ ] Trigger a chef-client run on all HAProxy nodes:

    ```shell
    knife ssh 'name:haproxy* AND chef_environment:gstg' 'sudo chef-client'
    ```
- [ ] Remove the HAProxy instances from the GCP load balancers (this must be done AFTER the above Terraform change is applied):
  - [ ] Check out the chef-repo repository at git@gitlab.com:gitlab-com/gl-infra/chef-repo.git and run the management script:

    ```shell
    cd chef-repo
    ./bin/manage-gcp-lb-haproxy
    ```
  - The script will prompt for an environment, then a zone. Select the values correlating to the zone we are removing traffic from.
  - [ ] Disable the HAProxy servers:

    ```shell
    cd chef-repo
    ./bin/disable-server gstg <impacted zone>
    ```
  - [ ] Validate that all servers in the affected zone have their state set to `MAINT`. You can use this query to confirm that there are zero backends in the `UP` state for the zone that you are evacuating (a hedged manual check over the admin socket is sketched below).
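    If the dashboard query is unavailable, a manual spot check over the HAProxy admin socket serves the same purpose. This is a hedged sketch: the socket path `/run/haproxy/admin.sock` is an assumption, so confirm the actual stats socket path in the node's haproxy.cfg.

    ```shell
    # Hedged sketch: print "backend,server,status" (status is field 18 of the
    # `show stat` CSV) so you can confirm servers in the impacted zone report MAINT.
    knife ssh 'name:haproxy* AND chef_environment:gstg' \
      "echo 'show stat' | sudo socat stdio /run/haproxy/admin.sock | cut -d, -f1,2,18"
    ```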
- [ ] Note the conclusion time in UTC in a comment to record this process duration.
#### Validation

Once traffic is restricted to our remaining two zones, let's identify the impact and look for problems.

- [ ] Do we see a drop in CPU usage in one zone cluster? GSTG Per Cluster CPU Usage (it may not be noticeable when validating on the `gstg` environment due to the low traffic)
- [ ] Do we see a drop in HPA targets in one zone cluster? GSTG Per Cluster HPA Target (it may not be noticeable when validating on the `gstg` environment due to the low traffic)
- [ ] Examine GSTG Rails logs for errors
- [ ] Examine the frontend dashboard for GSTG
- [ ] Examine whether the connected peers changed with the newly introduced nodes (a hedged check is sketched after this list)
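For the peers check, one option is to inspect the peers state over the HAProxy runtime API. A minimal sketch, assuming the running HAProxy version supports `show peers` and that the admin socket lives at `/run/haproxy/admin.sock` (verify the stats socket path in haproxy.cfg):

```shell
# Hedged sketch: dump the peers sections on every gstg HAProxy node and confirm the
# newly introduced nodes appear as connected peers.
knife ssh 'name:haproxy* AND chef_environment:gstg' \
  "echo 'show peers' | sudo socat stdio /run/haproxy/admin.sock"
```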
### Wrapping up and cleanup

- [ ] Compile the real time measurement for all the new HAProxy nodes by running the script below and comment with the output on this issue:

  ```shell
  for node in `knife node list | grep haproxy | grep 101`; do ./runbooks/scripts/find-bootstrap-duration.sh $node ; done
  ```
- [ ] Re-enable the zonal GKE backend cluster in HAProxy:

  ```shell
  cd chef-repo
  ./bin/enable-server gstg <impacted zone>
  ```
- [ ] Validate that all servers in the affected zone have their state set to `UP`. You can use this query to confirm that there are zero backends in the `MAINT` state in the zone that you are returning to service.
- [ ] Open an MR to revert the change that disabled the zone in the regional cluster.
  - [ ] Link to the Revert MR: https://ops.gitlab.net/gitlab-com/gl-infra/config-mgmt/-/merge_requests/10560
  - [ ] Get the MR approved
  - [ ] Make sure the MR in chef-repo to remove the Gameday HAProxy nodes from peering has been merged and chef-client has run on the nodes
  - [ ] Merge the MR
- [ ] Trigger a full GSTG config-mgmt pipeline to restore the GCP load balancer configurations.
- [ ] Trigger a chef-client run on all HAProxy nodes:

  ```shell
  knife ssh 'name:haproxy* AND chef_environment:gstg' 'sudo chef-client'
  ```
- [ ] Set label ~change::complete: `/label ~change::complete`
- [ ] Notify the @release-managers and @sre-oncall that the exercise is complete.
- [ ] Compile the real time measurement of this process and update the Recovery Measurements Runbook (see the duration sketch after this list).
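For that overall measurement, the elapsed time can be derived from the start and conclusion timestamps recorded in the issue comments. A small sketch with illustrative timestamps, assuming GNU `date`:

```shell
# Hedged sketch: compute the gameday duration in minutes from the noted UTC timestamps.
start='2025-03-17 02:00:00 UTC'   # start time noted at the beginning of Execution
end='2025-03-17 03:10:00 UTC'     # conclusion time noted at the end of Execution (illustrative)
echo "$(( ( $(date -d "$end" +%s) - $(date -d "$start" +%s) ) / 60 )) minutes"
```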
### Rollback

#### Rollback steps - steps to be taken in the event of a need to rollback this change

It is estimated that this will take 5 minutes to complete.

- [ ] Re-enable HAProxy (a hedged verification sketch follows after this list):

  ```shell
  cd chef-repo
  ./bin/enable-server gstg <impacted zone>
  ```
- [ ] Set label ~change::aborted: `/label ~change::aborted`
- [ ] Notify the @release-managers and @sre-oncall that the exercise has been aborted.
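After rolling back, it is worth confirming that the re-enabled servers have left the `MAINT` state. A hedged spot check reusing the admin-socket approach from the execution steps (the socket path is an assumption; check haproxy.cfg):

```shell
# Hedged sketch: list any backend servers still reporting MAINT after re-enabling;
# the expectation is an empty result once the rollback has taken effect.
knife ssh 'name:haproxy* AND chef_environment:gstg' \
  "echo 'show stat' | sudo socat stdio /run/haproxy/admin.sock | awk -F, '\$18 ~ /MAINT/'"
```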
## Change Reviewer checklist

- [ ] Check if the following applies:
  - The scheduled day and time of execution of the change is appropriate.
  - The change plan is technically accurate.
  - The change plan includes estimated timing values based on previous testing.
  - The change plan includes a viable rollback plan.
  - The specified metrics/monitoring dashboards provide sufficient visibility for the change.
- [ ] Check if the following applies:
  - The complexity of the plan is appropriate for the corresponding risk of the change (i.e. the plan contains clear details).
  - The change plan includes success measures for all steps/milestones during the execution.
  - The change adequately minimizes risk within the environment/service.
  - The performance implications of executing the change are well-understood and documented.
  - The specified metrics/monitoring dashboards provide sufficient visibility for the change.
    - If not, is it possible (or necessary) to make changes to observability platforms for added visibility?
  - The change has a primary and secondary SRE with knowledge of the details available during the change window.
  - The change window has been agreed upon with Release Managers in advance of the change. If the change is planned for APAC hours, this issue has an agreed pre-change approval.
  - The labels blocks deployments and/or blocks feature-flags are applied as necessary.
## Change Technician checklist

- [ ] Check if all items below are complete:
  - The change plan is technically accurate.
  - This Change Issue is linked to the appropriate Issue and/or Epic.
  - Change has been tested in staging and results are noted in a comment on this issue.
  - A dry-run has been conducted and results are noted in a comment on this issue.
  - The change execution window respects the Production Change Lock periods.
  - For C1 and C2 change issues, the change event is added to the GitLab Production calendar.
  - For C1 and C2 change issues, the SRE on-call has been informed before the change is rolled out. (In the #production channel, mention @sre-oncall and this issue and await their acknowledgement.)
  - For C1 and C2 change issues, the SRE on-call provided approval with the eoc_approved label on the issue.
  - For C1 and C2 change issues, the Infrastructure Manager provided approval with the manager_approved label on the issue.
  - Release managers have been informed prior to any C1, C2, or blocks deployments change being rolled out. (In the #production channel, mention @release-managers and this issue and await their acknowledgement.)
  - There are currently no active incidents that are severity1 or severity2.
  - If the change involves doing maintenance on a database host, an appropriate silence targeting the host(s) should be added for the duration of the change.