# [CR] [gprd] Send 50% of main traffic to HAProxy 2.x

Production Change

## Change Summary
In this change, all 39 new haproxy-main-* nodes will be serving traffic through the external Google Cloud load balancer alongside the existing 39 old fe-* nodes.
The reason I decided to add all the new nodes in one step and split the external traffic 50/50 between the old and new HAProxy nodes is based on previous CRs in which I tried to add only a few haproxy-main-* nodes at a time. When we add a small number of nodes (e.g., 6), roughly 50% of the total traffic immediately flows through these new nodes! This quickly saturates them and causes a cascade of issues: the error rate increases, the average response time grows, the queue length builds up, health checks start to fail, logging buffers overflow, and so on. The traffic share landing on the new nodes can be spot-checked with a query along the lines of the sketch below.
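As a rough illustration only: the endpoint, the haproxy_exporter metric name, and the fqdn label below are assumptions, and the dashboards linked in the monitoring section remain the authoritative view.

```shell
# Sketch only: endpoint, metric, and label names are assumptions.
THANOS="https://thanos-query.ops.gitlab.net"
QUERY='sum(haproxy_frontend_current_sessions{fqdn=~"haproxy-main-.*"})
  / sum(haproxy_frontend_current_sessions{fqdn=~"haproxy-main-.*|fe-.*"})'
# Prints the fraction of current frontend sessions on the new 2.x nodes.
curl -sG "${THANOS}/api/v1/query" --data-urlencode "query=${QUERY}" \
  | jq -r '.data.result[0].value[1]'
```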
I have thoroughly analyzed the last unsuccessful CR and the resulting incidents (1, 2). As a result, I have prepared all the metrics and dashboards needed to catch the critical signals early, so I can revert this change before it has a considerable impact on our error budget.
In this CR, I'm adding 15 more haproxy-main-* nodes to the external load balancer for the main frontend, which currently serves traffic through 39 old fe-* nodes and 24 new haproxy-main-* nodes. This brings the new node count to 39 and the share of traffic going to the new 2.x nodes to 50%.
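For a quick sanity check of the node counts before and after the merge, something like the following can be used; it assumes the nodes are plain GCE instances in this project whose names match the patterns in the summary above.

```shell
# Count old vs. new HAProxy instances (name patterns from the summary above).
gcloud compute instances list --filter='name~^fe-' --format='value(name)' | wc -l
gcloud compute instances list --filter='name~^haproxy-main-' --format='value(name)' | wc -l
```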
## Change Details
- Services Impacted - ~"Service::HAProxy"
- Change Technician - @miladx
- Change Reviewer - @mchacon3
- Time tracking - 30 minutes
- Downtime Component - none
## Detailed steps for the change

### Change Steps - steps to take to execute the change

Estimated Time to Complete (mins) - 30
- [ ] Set label ~change::in-progress: /label ~change::in-progress
- [ ] Merge https://ops.gitlab.net/gitlab-com/gl-infra/config-mgmt/-/merge_requests/6682
- [ ] Ensure the post-merge pipelines all succeed.
- [ ] Validate the change in the gprd environment:
  - [ ] Inspect the `gitlab-lb` HTTP header in the responses and make sure the new HAProxy nodes are involved (a sampling sketch follows this list):
    - `curl -s -I https://gitlab.com/my-public-space`
    - `curl -s -I https://gitlab.com/my-public-space/public-repo`
  - [ ] Check the metrics listed in the monitoring section.
  - [ ] Run the end-to-end QA tests against the gprd environment:
    - [ ] Run a pipeline on the master branch here.
- [ ] Set label ~change::complete: /label ~change::complete
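For the `gitlab-lb` validation step above, a small sampling loop gives a rough picture of how requests are spread across old and new nodes. This is a sketch that assumes the header value names the serving HAProxy node:

```shell
# Sample the gitlab-lb response header 100 times and tally which
# HAProxy node served each request.
for i in $(seq 1 100); do
  curl -s -I https://gitlab.com/my-public-space | grep -i '^gitlab-lb:'
done | sort | uniq -c | sort -rn
```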
## Rollback

### Rollback steps - steps to be taken in the event of a need to roll back this change

Estimated Time to Complete (mins) - 10
- [ ] Quick fix: Shutting down the new HAProxy nodes in the Google Cloud console causes the GCP load balancers to stop sending traffic to these nodes, so all traffic goes only to the old HAProxy nodes (a scripted sketch follows this list).
- [ ] Revert https://ops.gitlab.net/gitlab-com/gl-infra/config-mgmt/-/merge_requests/6682
- [ ] Set label ~change::aborted: /label ~change::aborted
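The quick fix can also be scripted instead of clicking through the console. A sketch, assuming the new nodes are GCE instances matching the haproxy-main-* name pattern (verify the instance list before running anything like this):

```shell
# Stop all new HAProxy nodes so the GCP load balancers drain them.
gcloud compute instances list --filter='name~^haproxy-main-' \
  --format='value(name,zone.basename())' |
while read -r name zone; do
  gcloud compute instances stop "${name}" --zone="${zone}" --async
done
```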
## Monitoring

- Dashboards
- Logs

### Key metrics to observe
- 🚨 Metric: Load Balancer Error SLI - Thanos Graph
  - We do NOT expect any major change in the current trends (it should stay around or below 0.01).
  - ⚠️ Any increase, jump, or major change in the current values.
- Metric: Current number of sessions - Frontend Sessions, Backend Sessions, Server Sessions
  - We expect a decrease in the number of sessions on the old nodes and a proportional increase on the new nodes.
  - ⚠️ If we start to see a disproportionate influx of sessions on the new nodes.
- Metric: The rate of the total number of responses - Frontend Total Responses, Backend Total Responses, Server Total Responses
  - We expect a decrease in the rate of responses on the old nodes and a proportional increase on the new nodes.
  - You may want to switch between absolute values and rate values.
  - ⚠️ Any sudden increase or jump in the rate of total responses.
- Metric: The rate of response errors - Backend Response Errors, Server Response Errors
  - We expect a decrease in the rate of response errors on the old nodes and a proportional increase on the new nodes.
  - You may want to switch between absolute values and rate values.
  - ⚠️ Any sudden increase or jump in the rate of response errors.
- Metric: The average response time of the last 1024 successful connections - Backend Average Response Time, Server Average Response Time
  - ⚠️ Look for any abnormal pattern like the following.
- Metric: The average queue time for the last 1024 successful connections - Backend Average Queue Time, Server Average Queue Time
  - ⚠️ Look for any abnormal pattern like the following.
- Metric: Gitaly request and concurrency metrics - Thanos Graph
  - We do NOT expect any significant change in the current trends.
  - ⚠️ If we start to see an increase in the number of ResourceExhausted gRPC codes in the first graph, or a constant increase in the next two graphs.
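For ad-hoc spot checks between dashboard refreshes, queries along these lines can be run against the Thanos API. The endpoint, metric, and label names are assumptions based on the standard haproxy_exporter and gRPC Prometheus metrics, so the linked dashboards remain the source of truth.

```shell
THANOS="https://thanos-query.ops.gitlab.net"  # assumed endpoint
check() {
  curl -sG "${THANOS}/api/v1/query" --data-urlencode "query=$1" \
    | jq -r '.data.result[] | "\(.metric.fqdn // "total") \(.value[1])"'
}

# Per-node backend response error rate over the last 5 minutes.
check 'sum by (fqdn) (rate(haproxy_backend_response_errors_total{fqdn=~"haproxy-main-.*|fe-.*"}[5m]))'

# Gitaly ResourceExhausted rate; this should stay flat (job label is an assumption).
check 'sum(rate(grpc_server_handled_total{grpc_code="ResourceExhausted", job="gitaly"}[5m]))'
```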
## Change Reviewer checklist

- [ ] Check if the following applies:
  - The scheduled day and time of execution of the change is appropriate.
  - The change plan is technically accurate.
  - The change plan includes estimated timing values based on previous testing.
  - The change plan includes a viable rollback plan.
  - The specified metrics/monitoring dashboards provide sufficient visibility for the change.
- [ ] Check if the following applies:
  - The complexity of the plan is appropriate for the corresponding risk of the change (i.e. the plan contains clear details).
  - The change plan includes success measures for all steps/milestones during the execution.
  - The change adequately minimizes risk within the environment/service.
  - The performance implications of executing the change are well-understood and documented.
  - The specified metrics/monitoring dashboards provide sufficient visibility for the change.
    - If not, is it possible (or necessary) to make changes to observability platforms for added visibility?
  - The change has a primary and secondary SRE with knowledge of the details available during the change window.
  - The change window has been agreed with Release Managers in advance of the change. If the change is planned for APAC hours, this issue has an agreed pre-change approval.
  - The labels ~"blocks deployments" and/or ~"blocks feature-flags" are applied as necessary.
## Change Technician checklist

- [ ] Check if all items below are complete:
  - The change plan is technically accurate.
  - This Change Issue is linked to the appropriate Issue and/or Epic.
  - Change has been tested in staging and results noted in a comment on this issue.
  - A dry-run has been conducted and results noted in a comment on this issue.
  - The change execution window respects the Production Change Lock periods.
  - For C1 and C2 change issues, the change event is added to the GitLab Production calendar.
  - For C1 and C2 change issues, the SRE on-call has been informed prior to the change being rolled out. (In the #production channel, mention @sre-oncall and this issue and await their acknowledgement.)
  - For C1 and C2 change issues, the SRE on-call provided approval with the eoc_approved label on the issue.
  - For C1 and C2 change issues, the Infrastructure Manager provided approval with the manager_approved label on the issue.
  - Release managers have been informed prior to any C1, C2, or ~"blocks deployments" change being rolled out. (In the #production channel, mention @release-managers and this issue and await their acknowledgment.)
  - There are currently no active incidents that are ~severity::1 or ~severity::2.
  - If the change involves doing maintenance on a database host, an appropriate silence targeting the host(s) should be added for the duration of the change.




