[CR] [gprd] Send a small percentage of traffic to the new HAProxy 2.6

Production Change

Change Summary

In this change, we are going to add the new HAProxy 2.6 nodes to the load balancers for main, pages, and registry, alongside the old HAProxy 1.8 instances. As a result, the GCP LBs will send traffic to both the old and the new HAProxy node pools.

In the first step, we are going to add only 3 nodes from the new HAProxy 2.6 node pool (39 old nodes + 3 new nodes), which is roughly 7% of the entire fleet. Based on what we have seen in staging, we expect the new nodes to serve more than their share of traffic because they respond faster.
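As a sanity check on the "roughly 7%" figure, assuming the GCP LBs spread connections approximately evenly across all healthy backends, the expected baseline share going to the new pool can be computed directly (node counts taken from this change; the even-distribution assumption is ours, and faster responses from the 2.6 nodes may push their actual share higher):

```python
# Baseline traffic share expected for the new HAProxy 2.6 pool,
# assuming an even connection spread across all healthy backends.
old_nodes = 39
new_nodes = 3

new_share = new_nodes / (old_nodes + new_nodes)
print(f"{new_share:.1%}")  # ~7.1% of total traffic
```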

Change Details

  1. Services Impacted - Service::HAProxy
  2. Change Technician - @miladx
  3. Change Reviewer - @f_santos
  4. Time tracking - 60 minutes
  5. Downtime Component - none

Detailed steps for the change

Prep

  • Notify the Delivery team about this change, risks, and potential disruption as well as the exact time of execution.

Change Steps - steps to take to execute the change

Estimated Time to Complete (mins) - 60

Rollback

Rollback steps - steps to be taken in the event of a need to rollback this change

Estimated Time to Complete (mins) - 20

Monitoring

Key metrics to observe

  • Dashboard: HAProxy 2
  • Main
    • Metric: Current number of sessions for the old HAProxy main nodes
      • Thanos Query
      • Before the traffic split, we expect a varying number of sessions on the order of 11-12K. After the split, we expect to see a drop in the number of sessions.
    • Metric: Current number of sessions for the new HAProxy main nodes
      • Thanos Query
      • Before the traffic split, we expect to see only the sessions coming from the internal load balancers. After the split, we expect to see a jump in the number of sessions.
    • Metric: The rate of frontend request errors
    • Metric: The rate of backend response errors
    • Metric: The rate of server response errors
  • Pages
    • Metric: Current number of sessions for the old HAProxy pages nodes
      • Thanos Query
      • Before the traffic split, we expect a varying number of sessions on the order of 1-2K. After the split, we expect to see a drop in the number of sessions.
    • Metric: Current number of sessions for the new HAProxy pages nodes
      • Thanos Query
      • Before the traffic split, we expect a flat line at zero. After the split, we expect to see a jump in the number of sessions.
    • Metric: The rate of frontend request errors
    • Metric: The rate of backend response errors
    • Metric: The rate of server response errors
  • Registry
    • Metric: Current number of sessions for the old HAProxy registry nodes
      • Thanos Query
      • Before the traffic split, we expect a varying number of sessions on the order of 20-60K. After the split, we expect to see a drop in the number of sessions.
    • Metric: Current number of sessions for the new HAProxy registry nodes
      • Thanos Query
      • Before the traffic split, we expect a flat line at zero. After the split, we expect to see a jump in the number of sessions.
    • Metric: The rate of frontend request errors
    • Metric: The rate of backend response errors
    • Metric: The rate of server response errors
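
The "Thanos Query" links above carry the exact queries; as an illustrative sketch only, queries of roughly the following shape cover the session counts and the three error-rate metrics, assuming the standard Prometheus haproxy_exporter metric names and an `env`/`fqdn` labeling scheme (both label names are our assumptions, not the production queries):

```promql
# Current sessions, split by node so old and new pools can be compared
sum by (fqdn) (haproxy_frontend_current_sessions{env="gprd"})

# Frontend request, backend response, and server response error rates
sum(rate(haproxy_frontend_request_errors_total{env="gprd"}[5m]))
sum(rate(haproxy_backend_response_errors_total{env="gprd"}[5m]))
sum(rate(haproxy_server_response_errors_total{env="gprd"}[5m]))
```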

Change Reviewer checklist

C4 C3 C2 C1:

  • Check if the following applies:
    • The scheduled day and time of execution of the change is appropriate.
    • The change plan is technically accurate.
    • The change plan includes estimated timing values based on previous testing.
    • The change plan includes a viable rollback plan.
    • The specified metrics/monitoring dashboards provide sufficient visibility for the change.

C2 C1:

  • Check if the following applies:
    • The complexity of the plan is appropriate for the corresponding risk of the change (i.e., the plan contains clear details).
    • The change plan includes success measures for all steps/milestones during the execution.
    • The change adequately minimizes risk within the environment/service.
    • The performance implications of executing the change are well-understood and documented.
    • The specified metrics/monitoring dashboards provide sufficient visibility for the change.
      • If not, is it possible (or necessary) to make changes to observability platforms for added visibility?
    • The change has a primary and secondary SRE with knowledge of the details available during the change window.
    • The labels "blocks deployments" and/or "blocks feature-flags" are applied as necessary.

Change Technician checklist

  • Check if all items below are complete:
    • The change plan is technically accurate.
    • This Change Issue is linked to the appropriate Issue and/or Epic.
    • Change has been tested in staging and results noted in a comment on this issue.
    • A dry-run has been conducted and results noted in a comment on this issue.
    • The change execution window respects the Production Change Lock periods.
    • For C1 and C2 change issues, the change event is added to the GitLab Production calendar.
    • For C1 and C2 change issues, the SRE on-call has been informed prior to change being rolled out. (In #production channel, mention @sre-oncall and this issue and await their acknowledgement.)
    • For C1 and C2 change issues, the SRE on-call provided approval with the eoc_approved label on the issue.
    • For C1 and C2 change issues, the Infrastructure Manager provided approval with the manager_approved label on the issue.
    • Release managers have been informed (if needed; cases include DB changes) prior to change being rolled out. (In #production channel, mention @release-managers and this issue and await their acknowledgment.)
    • There are currently no active incidents that are severity 1 or severity 2.
    • If the change involves doing maintenance on a database host, an appropriate silence targeting the host(s) should be added for the duration of the change.
Edited by Milad Irannejad