[CR] [gprd] Send a small percentage of traffic to the new HAProxy 2.6
Production Change
Change Summary
In this change, we are going to add the new HAProxy 2.6 nodes to the load balancers for the main, pages, and registry services, alongside the old HAProxy 1.8 instances. As a result, the GCP load balancers will send traffic to both the old and new HAProxy node pools.
In the first step, we are going to add only 3 nodes from the new HAProxy 2.6 node pool (39 old nodes + 3 new nodes), roughly 7% of the entire fleet. Based on what we have seen in staging, we expect the new nodes to serve more than their share of traffic because they respond faster.
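As a quick sanity check on the expected split, assuming the GCP load balancers distribute connections roughly evenly across healthy backends:

```
# Expected share for the new pool: 3 new nodes out of 42 total (39 + 3).
awk 'BEGIN { printf "%.1f%%\n", 3 / (39 + 3) * 100 }'   # => 7.1%
```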
Change Details
- Services Impacted - Service::HAProxy
- Change Technician - @miladx
- Change Reviewer - @f_santos
- Time tracking - 60 minutes
- Downtime Component - none
Detailed steps for the change
Prep
- Notify the Delivery team about this change, its risks, and potential disruption, as well as the exact time of execution.
  - Slack channel: #g_delivery
  - Point of contact: @jennykim-gitlab
Change Steps - steps to take to execute the change
Estimated Time to Complete (mins) - 60
- Set label change::in-progress: /label ~change::in-progress
- Merge https://ops.gitlab.net/gitlab-com/gl-infra/config-mgmt/-/merge_requests/5826
  - Ensure the post-merge pipeline succeeds.
- Validate the change in the gprd environment (see the sampling sketch after this list):
  - Inspect the gitlab-lb HTTP header in the responses and make sure the new HAProxy nodes are involved:
    - curl -s -I https://gitlab.com/my-public-space/public-repo
    - curl -s -I https://gitlab.com/my-public-space
  - Check the metrics listed in the monitoring section.
  - Run the end-to-end QA tests against the gprd environment:
    - Run a pipeline on the master branch here.
- Set label change::complete: /label ~change::complete
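To spot-check the traffic split beyond a single request, a small sampling loop can help. This is a minimal sketch assuming the gitlab-lb header value identifies the responding HAProxy node, so new 2.6 nodes can be told apart from old 1.8 ones by name:

```
# Sample the gitlab-lb response header; with ~7% of the fleet in the new
# pool, a handful of these 50 samples should come back from new nodes.
for i in $(seq 1 50); do
  curl -s -I https://gitlab.com/my-public-space/public-repo | grep -i '^gitlab-lb:'
done | sort | uniq -c | sort -rn
```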
Rollback
Rollback steps - steps to be taken in the event of a need to roll back this change
Estimated Time to Complete (mins) - 20
- Quick fix: shutting down the new HAProxy nodes in the Google Cloud console will cause the GCP load balancers to stop sending traffic to those nodes. As a result, all traffic goes back to the old HAProxy nodes (a CLI sketch follows this list).
- Revert https://ops.gitlab.net/gitlab-com/gl-infra/config-mgmt/-/merge_requests/5826
- Set label change::aborted: /label ~change::aborted
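If the console is unavailable or slow, the same quick fix can be applied from the CLI. The instance names, zone, and project below are illustrative placeholders, not the real gprd node names:

```
# Stop the three new HAProxy 2.6 nodes; once their health checks fail,
# the GCP load balancers stop routing traffic to them.
# Instance names, zone, and project are placeholders.
gcloud compute instances stop \
  fe-haproxy-26-01 fe-haproxy-26-02 fe-haproxy-26-03 \
  --zone=us-east1-b --project=gitlab-production
```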
Monitoring
Key metrics to observe
- Dashboard: HAProxy 2 (an example Thanos query shape is sketched after this list)
- Main
  - Metric: Current number of sessions for the old HAProxy main nodes - Thanos Query
    - Before the traffic split, we expect a varying number of sessions on the order of 11-12K. After the split, we expect to see a drop in the number of sessions.
  - Metric: Current number of sessions for the new HAProxy main nodes - Thanos Query
    - Before the traffic split, we expect to see only the sessions coming from the internal load balancers. After the split, we expect to see a jump in the number of sessions.
  - Metric: The rate of frontend request errors
    - Thanos Query for old nodes
    - Thanos Query for new nodes
    - We expect to see the same rate of errors before and after, for both the old and new nodes.
  - Metric: The rate of backend response errors
    - Thanos Query for old nodes
    - Thanos Query for new nodes
    - We expect to see the same rate of errors before and after, for both the old and new nodes.
  - Metric: The rate of server response errors
    - Thanos Query for old nodes
    - Thanos Query for new nodes
    - We expect to see the same rate of errors before and after, for both the old and new nodes.
- Pages
  - Metric: Current number of sessions for the old HAProxy pages nodes - Thanos Query
    - Before the traffic split, we expect a varying number of sessions on the order of 1-2K. After the split, we expect to see a drop in the number of sessions.
  - Metric: Current number of sessions for the new HAProxy pages nodes - Thanos Query
    - Before the traffic split, we expect a straight line at zero. After the split, we expect to see a jump in the number of sessions.
  - Metric: The rate of frontend request errors
    - Thanos Query for old nodes
    - Thanos Query for new nodes
    - We expect to see the same rate of errors before and after, for both the old and new nodes.
  - Metric: The rate of backend response errors
    - Thanos Query for old nodes
    - Thanos Query for new nodes
    - We expect to see the same rate of errors before and after, for both the old and new nodes.
  - Metric: The rate of server response errors
    - Thanos Query for old nodes
    - Thanos Query for new nodes
    - We expect to see the same rate of errors before and after, for both the old and new nodes.
- Registry
  - Metric: Current number of sessions for the old HAProxy registry nodes - Thanos Query
    - Before the traffic split, we expect a varying number of sessions on the order of 20-60K. After the split, we expect to see a drop in the number of sessions.
  - Metric: Current number of sessions for the new HAProxy registry nodes - Thanos Query
    - Before the traffic split, we expect a straight line at zero. After the split, we expect to see a jump in the number of sessions.
  - Metric: The rate of frontend request errors
    - Thanos Query for old nodes
    - Thanos Query for new nodes
    - We expect to see the same rate of errors before and after, for both the old and new nodes.
  - Metric: The rate of backend response errors
    - Thanos Query for old nodes
    - Thanos Query for new nodes
    - We expect to see the same rate of errors before and after, for both the old and new nodes.
  - Metric: The rate of server response errors
    - Thanos Query for old nodes
    - Thanos Query for new nodes
    - We expect to see the same rate of errors before and after, for both the old and new nodes.
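For reference, the session-count queries above typically have a shape like the sketch below, run against the Thanos HTTP query API. This is an illustrative guess, not the saved queries linked above: the metric name follows the common Prometheus haproxy_exporter convention, and the Thanos URL and the env/frontend/fqdn labels are assumptions about the gprd labeling scheme.

```
# Hypothetical shape of a "current sessions" query for the main service,
# grouped per node so old and new pools can be compared side by side.
# Metric name, labels, and the Thanos URL are placeholders.
QUERY='sum by (fqdn) (haproxy_frontend_current_sessions{env="gprd", frontend="main"})'
curl -sG 'https://thanos.example.com/api/v1/query' --data-urlencode "query=${QUERY}"
```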
Change Reviewer checklist
- Check if the following applies:
  - The scheduled day and time of execution of the change is appropriate.
  - The change plan is technically accurate.
  - The change plan includes estimated timing values based on previous testing.
  - The change plan includes a viable rollback plan.
  - The specified metrics/monitoring dashboards provide sufficient visibility for the change.
- Check if the following applies:
  - The complexity of the plan is appropriate for the corresponding risk of the change (i.e. the plan contains clear details).
  - The change plan includes success measures for all steps/milestones during the execution.
  - The change adequately minimizes risk within the environment/service.
  - The performance implications of executing the change are well-understood and documented.
  - The specified metrics/monitoring dashboards provide sufficient visibility for the change.
    - If not, is it possible (or necessary) to make changes to observability platforms for added visibility?
  - The change has a primary and secondary SRE with knowledge of the details available during the change window.
  - The labels blocks deployments and/or blocks feature-flags are applied as necessary.
Change Technician checklist
- Check if all items below are complete:
  - The change plan is technically accurate.
  - This Change Issue is linked to the appropriate Issue and/or Epic.
  - Change has been tested in staging and results noted in a comment on this issue.
  - A dry-run has been conducted and results noted in a comment on this issue.
  - The change execution window respects the Production Change Lock periods.
  - For C1 and C2 change issues, the change event is added to the GitLab Production calendar.
  - For C1 and C2 change issues, the SRE on-call has been informed prior to the change being rolled out. (In the #production channel, mention @sre-oncall and this issue and await their acknowledgement.)
  - For C1 and C2 change issues, the SRE on-call provided approval with the eoc_approved label on the issue.
  - For C1 and C2 change issues, the Infrastructure Manager provided approval with the manager_approved label on the issue.
  - Release managers have been informed (if needed; cases include DB changes) prior to the change being rolled out. (In the #production channel, mention @release-managers and this issue and await their acknowledgment.)
  - There are currently no active incidents that are severity1 or severity2.
  - If the change involves doing maintenance on a database host, an appropriate silence targeting the host(s) should be added for the duration of the change.