[CR] [gprd] Send 15% of main traffic to HAProxy 2.x

Production Change

Change Summary

In this CR, I'm adding 6 new haproxy-main-* nodes to the external load balancer for main traffic, which currently runs on 39 old fe-* nodes. This brings the total share of traffic going to the new HAProxy 2.x nodes to 15%.

Change Details

  1. Services Impacted - Service::HAProxy
  2. Change Technician - @miladx
  3. Change Reviewer - @mchacon3
  4. Time tracking - 30 minutes
  5. Downtime Component - none

Detailed steps for the change

Change Steps - steps to take to execute the change

Estimated Time to Complete (mins) - 30

  • Set label change::in-progress: /label ~change::in-progress
  • Merge https://ops.gitlab.net/gitlab-com/gl-infra/config-mgmt/-/merge_requests/6679
    • Ensure the post-merge pipelines all succeed.
  • Validate the change in gprd environment:
    • Inspect the gitlab-lb HTTP header in the response and make sure the new HAProxy nodes are involved (a sampling sketch follows this list).
      • curl -s -I https://gitlab.com/my-public-space
      • curl -s -I https://gitlab.com/my-public-space/public-repo
  • Check out the metrics listed in the monitoring section.
  • Run the end-to-end QA tests against the gprd environment.
    • Run a pipeline on the master branch here.
  • Set label change::complete: /label ~change::complete
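
As a quick spot check for the ~15% split, the gitlab-lb header can be sampled repeatedly. A minimal sketch, assuming the header value carries the serving node's hostname (so new nodes show up as haproxy-main-*):

    # Sample 100 responses and count how many came from new vs. old nodes.
    # Roughly 15 of the 100 should report a haproxy-main-* hostname.
    for i in $(seq 1 100); do
      curl -s -I https://gitlab.com/my-public-space | grep -i '^gitlab-lb'
    done | sort | uniq -c | sort -rn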

Rollback

Rollback steps - steps to be taken in the event of a need to rollback this change

Estimated Time to Complete (mins) - 10

  • Revert https://ops.gitlab.net/gitlab-com/gl-infra/config-mgmt/-/merge_requests/6679
    • Ensure the post-merge pipelines all succeed.
  • Verify via the gitlab-lb HTTP header that responses are no longer served by the new haproxy-main-* nodes.
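
To confirm the rollback took effect, the sampling check from the change steps can be reused; after the revert pipelines finish, no response should report a haproxy-main-* node:

    # Expect a count of 0 gitlab-lb headers mentioning haproxy-main after rollback.
    for i in $(seq 1 50); do
      curl -s -I https://gitlab.com/my-public-space | grep -i '^gitlab-lb'
    done | grep -c -i 'haproxy-main'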

Monitoring

Key metrics to observe (a spot-check query sketch follows this list)

  • Metric: Gitaly request and concurrency metrics
    • Thanos Graph
    • We do NOT expect any significant change in the current trends.
    • If we see an increase in the number of ResourceExhausted gRPC codes in the first graph, or a sustained increase in the other two graphs, that's a warning signal.
  • Metric: The total number of response bytes for backend and server
  • Metric: Current number of sessions for the old HAProxy main nodes
    • Thanos Graph
    • We expect a decrease in the number of sessions on the old nodes and a proportional increase on the new nodes.
  • Metric: The rate of frontend request errors
    • Thanos Graph
    • We expect to see the same rate of errors before and after for the old and new nodes.
  • Metric: The rate of backend response errors
    • Thanos Graph
    • We expect to see the same rate of errors before and after for the old and new nodes.
  • Metric: The rate of server response errors
    • Thanos Graph
    • We expect to see the same rate of errors before and after for the old and new nodes.
  • Metric: The average response time of the last 1024 successful connections
    • Thanos Graph
    • We do NOT expect any significant increase in the response times.
  • Metric: The average queue time for the last 1024 successful connections
    • Thanos Graph
    • We do NOT expect any major increase in the queue times.
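
For spot checks outside the dashboards, the same series can be queried through the Thanos HTTP API. A minimal sketch; the endpoint URL and the fqdn label are assumptions about the local Thanos/Prometheus setup, and the series names are the standard haproxy_exporter and go-grpc-prometheus ones:

    THANOS_URL='https://thanos.example.internal'   # placeholder; use the real Thanos Query endpoint

    # Share of current sessions handled by the new nodes (expect ~0.15).
    curl -sG "$THANOS_URL/api/v1/query" --data-urlencode \
      'query=sum(haproxy_server_current_sessions{fqdn=~"haproxy-main-.*"}) / sum(haproxy_server_current_sessions)'

    # Rate of Gitaly ResourceExhausted responses (expect no sustained increase).
    curl -sG "$THANOS_URL/api/v1/query" --data-urlencode \
      'query=sum(rate(grpc_server_handled_total{grpc_code="ResourceExhausted"}[5m]))'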

Change Reviewer checklist

C4 C3 C2 C1:

  • Check if the following applies:
    • The scheduled day and time of execution of the change is appropriate.
    • The change plan is technically accurate.
    • The change plan includes estimated timing values based on previous testing.
    • The change plan includes a viable rollback plan.
    • The specified metrics/monitoring dashboards provide sufficient visibility for the change.

C2 C1:

  • Check if the following applies:
    • The complexity of the plan is appropriate for the corresponding risk of the change. (i.e. the plan contains clear details).
    • The change plan includes success measures for all steps/milestones during the execution.
    • The change adequately minimizes risk within the environment/service.
    • The performance implications of executing the change are well-understood and documented.
    • The specified metrics/monitoring dashboards provide sufficient visibility for the change.
      • If not, is it possible (or necessary) to make changes to observability platforms for added visibility?
    • The change has a primary and secondary SRE with knowledge of the details available during the change window.
    • The change window has been agreed with Release Managers in advance of the change. If the change is planned for APAC hours, this issue has an agreed pre-change approval.
    • The labels blocks deployments and/or blocks feature-flags are applied as necessary.

Change Technician checklist

  • Check if all items below are complete:
    • The change plan is technically accurate.
    • This Change Issue is linked to the appropriate Issue and/or Epic
    • Change has been tested in staging and results noted in a comment on this issue.
    • A dry-run has been conducted and results noted in a comment on this issue.
    • The change execution window respects the Production Change Lock periods.
    • For C1 and C2 change issues, the change event is added to the GitLab Production calendar.
    • For C1 and C2 change issues, the SRE on-call has been informed prior to change being rolled out. (In #production channel, mention @sre-oncall and this issue and await their acknowledgement.)
    • For C1 and C2 change issues, the SRE on-call provided approval with the eoc_approved label on the issue.
    • For C1 and C2 change issues, the Infrastructure Manager provided approval with the manager_approved label on the issue.
    • Release managers have been informed prior to any C1, C2, or blocks deployments change being rolled out. (In #production channel, mention @release-managers and this issue and await their acknowledgment.)
    • There are currently no active incidents that are severity::1 or severity::2
    • If the change involves doing maintenance on a database host, an appropriate silence targeting the host(s) should be added for the duration of the change.