[GSTG] Test zonal maintenance steps with HAProxy
Production Change
Change Summary
During the 2024-04-15 Gameday practice, we observed that when Canary was drained in GSTG and the HAProxy VMs in a single zone were placed in maintenance, the application returned several 503 errors. This change provides an opportunity to verify whether Canary is related and to troubleshoot further to understand why traffic was still reaching an HAProxy VM that was not active in the Google TCP load balancer.
The testing plan below should help answer some of these questions, but additional steps not listed here may be needed to track down the source of the traffic and the reason it is returning 503s.
Change Details
- Services Impacted - ~"Service::HAProxy"
- Change Technician - @cmcfarland
- Change Reviewer - @swainaina
- Time tracking - 45 minutes
- Downtime Component - N/A for GPRD, but this could cause a short (~30 minute) degradation in GSTG.
Set Maintenance Mode in GitLab
If your change involves scheduled maintenance, add a step to set and unset maintenance mode per our runbooks. This will make sure SLA calculations adjust for the maintenance period.
Detailed steps for the change
Change Steps - steps to take to execute the change
Estimated Time to Complete (mins) - 30
- [ ] Set label ~"change::in-progress": `/label ~change::in-progress`
- [ ] "Disable" the us-east1-b HAProxy by putting it into maintenance mode from the chef-repo project: `./bin/set-server-state -z b gstg maint`
- [ ] Check for 503 responses:
  - [ ] In Cloudflare
  - [ ] With a web browser: https://staging.gitlab.com/gitlab-com/operations/-/issues/42
  - [ ] With curl: `curl -v https://staging.gitlab.com/gitlab-com/operations/-/issues/42 > /dev/null`
  - [ ] By grepping the HAProxy logs on haproxy-main-03-lb-gstg.c.gitlab-staging-1.internal
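  For the curl and log checks, a hedged sketch (the sample size is arbitrary, and the HAProxy log path is an assumption; check the node's rsyslog configuration for the real location):

  ```shell
  # Sample the endpoint and tally response codes; a spike of 503s here
  # reproduces the Gameday symptom.
  for _ in $(seq 1 50); do
    curl -s -o /dev/null -w '%{http_code}\n' \
      https://staging.gitlab.com/gitlab-com/operations/-/issues/42
  done | sort | uniq -c

  # On the HAProxy node: count 503s in the current log.
  sudo grep -c ' 503 ' /var/log/haproxy.log
  ```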
- [ ] If no 503s are detected, also drain Canary in GSTG: `/chatops run canary --disable --staging`
- [ ] Re-visit the checks for 503 responses above.
- [ ] Once 503 errors are detected at a higher-than-normal volume, and if traffic is still being processed by HAProxy, verify that the backend is disabled in the relevant TCP load balancers. When the backends are down, we expect haproxy 03 to be marked as unavailable there.
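  One way to check this from the GCP side is gcloud. A hypothetical sketch: BACKEND_SERVICE is a placeholder, and it assumes the HAProxy nodes are registered in regional backend services in the gitlab-staging-1 project (if the load balancers use target pools instead, `gcloud compute target-pools get-health` is the analogue):

  ```shell
  # List candidate backend services first if the exact name is unknown.
  gcloud compute backend-services list --project=gitlab-staging-1

  # The drained zone's HAProxy node should show healthState UNHEALTHY
  # (or be absent) once the TCP load balancer has marked it down.
  gcloud compute backend-services get-health BACKEND_SERVICE \
    --region=us-east1 --project=gitlab-staging-1
  ```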
- [ ] Conduct any additional troubleshooting to help understand the nature of the 503 responses.
- [ ] "Enable" the us-east1-b HAProxy by removing it from maintenance mode from the chef-repo project: `./bin/set-server-state -z b gstg ready`
- [ ] Re-enable Canary in GSTG: `/chatops run canary --enable --staging`
- [ ] Set label ~"change::complete": `/label ~change::complete`
Rollback
Rollback steps - steps to be taken in the event of a need to rollback this change
Estimated Time to Complete (mins) - 5
- [ ] "Enable" the us-east1-b HAProxy by removing it from maintenance mode from the chef-repo project: `./bin/set-server-state -z b gstg ready`
- [ ] Re-enable Canary in GSTG: `/chatops run canary --enable --staging`
- [ ] Set label ~"change::aborted": `/label ~change::aborted`
Monitoring
Key metrics to observe
- Metric: 503 error rate for staging.gitlab.com (HAProxy backend 5xx responses)
- Location: Dashboard URL
- What changes to this metric should prompt a rollback: 503s that remain elevated above baseline after the zone is returned to service, or error volume beyond what the test deliberately induces.
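As a supplement to the dashboards, a minimal command-line watch (the endpoint and the 5-second cadence are arbitrary choices):

```shell
# Print a timestamped status code every few seconds; a sustained run of
# 503s should prompt rollback per the criteria above.
while true; do
  printf '%s %s\n' "$(date -u +%H:%M:%S)" \
    "$(curl -s -o /dev/null -w '%{http_code}' \
      https://staging.gitlab.com/gitlab-com/operations/-/issues/42)"
  sleep 5
done
```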
Change Reviewer checklist
Check if the following applies:
- [ ] The scheduled day and time of execution of the change is appropriate.
- [ ] The change plan is technically accurate.
- [ ] The change plan includes estimated timing values based on previous testing.
- [ ] The change plan includes a viable rollback plan.
- [ ] The specified metrics/monitoring dashboards provide sufficient visibility for the change.

Check if the following applies:
- [ ] The complexity of the plan is appropriate for the corresponding risk of the change (i.e. the plan contains clear details).
- [ ] The change plan includes success measures for all steps/milestones during the execution.
- [ ] The change adequately minimizes risk within the environment/service.
- [ ] The performance implications of executing the change are well-understood and documented.
- [ ] The specified metrics/monitoring dashboards provide sufficient visibility for the change.
  - If not, is it possible (or necessary) to make changes to observability platforms for added visibility?
- [ ] The change has a primary and secondary SRE with knowledge of the details available during the change window.
- [ ] The change window has been agreed with Release Managers in advance of the change. If the change is planned for APAC hours, this issue has an agreed pre-change approval.
- [ ] The labels ~"blocks deployments" and/or ~"blocks feature-flags" are applied as necessary.
Change Technician checklist
Check if all items below are complete:
- [ ] The change plan is technically accurate.
- [ ] This Change Issue is linked to the appropriate Issue and/or Epic.
- [ ] Change has been tested in staging and results noted in a comment on this issue.
- [ ] A dry-run has been conducted and results noted in a comment on this issue.
- [ ] The change execution window respects the Production Change Lock periods.
- [ ] For C1 and C2 change issues, the change event is added to the GitLab Production calendar.
- [ ] For C1 and C2 change issues, the SRE on-call has been informed prior to the change being rolled out. (In the #production channel, mention @sre-oncall and this issue and await their acknowledgement.)
- [ ] For C1 and C2 change issues, the SRE on-call provided approval with the ~eoc_approved label on the issue.
- [ ] For C1 and C2 change issues, the Infrastructure Manager provided approval with the ~manager_approved label on the issue.
- [ ] Release managers have been informed prior to any C1, C2, or ~"blocks deployments" change being rolled out. (In the #production channel, mention @release-managers and this issue and await their acknowledgment.)
- [ ] There are currently no active incidents that are ~"severity::1" or ~"severity::2".
- [ ] If the change involves doing maintenance on a database host, an appropriate silence targeting the host(s) should be added for the duration of the change.