2022-12-16: Upgrade HAProxy instances to t2d

Production Change

Change Summary

This change will replace the HAProxy frontend instances by switching their machine type from c2 to t2d, to address CPU scheduling saturation.

https://gitlab.com/gitlab-com/gl-infra/reliability/-/issues/16496+

Change Details

  1. Services Impacted - ServiceHAProxy
  2. Change Technician - @f_santos
  3. Change Reviewer - @pguinoiseau
  4. Time tracking - unknown
  5. Downtime Component - none

Detailed steps for the change

Change Steps - steps to take to execute the change

Estimated Time to Complete (mins) - ~12 minutes per instance × 39 instances (≈470 minutes total)

# Resize each frontend in sequence: kick off drains on the next two instances
# in advance, drain the current one, then stop, resize, and restart it.
# 10#${i} forces base-10 arithmetic (otherwise 08/09 fail as invalid octal
# constants), and printf re-applies the zero padding the hostnames use.
for i in {01..39}; do
  next1=$(printf '%02d' $((10#${i} + 1)))
  next2=$(printf '%02d' $((10#${i} + 2)))
  echo "### Instance ${i}"
  echo "#### Drain ${next1}"
  # Note: at the end of the range these look-aheads target fe-40/fe-41, which
  # do not exist; those ssh calls will fail and can be ignored.
  ssh "fe-${next1}-lb-gprd.c.gitlab-production.internal" "sudo /usr/local/sbin/drain_haproxy.sh -w 1"
  echo "#### Drain ${next2}"
  ssh "fe-${next2}-lb-gprd.c.gitlab-production.internal" "sudo /usr/local/sbin/drain_haproxy.sh -w 1"
  echo "#### Drain ${i}"
  ssh "fe-${i}-lb-gprd.c.gitlab-production.internal" "sudo /usr/local/sbin/drain_haproxy.sh -w 120"
  echo "#### Stop ${i}"
  # --zone may also be needed if no default zone is configured for the project.
  gcloud compute instances --project=gitlab-production stop "fe-${i}-lb-gprd"
  gcloud compute instances --project=gitlab-production set-machine-type "fe-${i}-lb-gprd" --machine-type "t2d-standard-8"
  gcloud compute instances --project=gitlab-production start "fe-${i}-lb-gprd"
done
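
As an optional spot check after each resize (not part of the original plan), the reported machine type can be confirmed before moving on to the next instance; this uses standard gcloud output formatting and prints a machineTypes URI that should end in t2d-standard-8:

# Confirm the resized instance reports the new machine type.
gcloud compute instances describe "fe-${i}-lb-gprd" \
  --project=gitlab-production --format='value(machineType)'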

Rollback

Rollback steps - steps to be taken in the event of a need to rollback this change

Estimated Time to Complete (mins) -
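
A minimal rollback sketch, mirroring the change loop: drain each instance, stop it, and switch the machine type back. The original c2 machine size is not recorded in this issue, so c2-standard-8 below is a placeholder assumption; substitute the type captured before the change.

for i in {01..39}; do
  ssh "fe-${i}-lb-gprd.c.gitlab-production.internal" "sudo /usr/local/sbin/drain_haproxy.sh -w 120"
  gcloud compute instances --project=gitlab-production stop "fe-${i}-lb-gprd"
  # c2-standard-8 is an assumed placeholder; use the previously recorded type.
  gcloud compute instances --project=gitlab-production set-machine-type "fe-${i}-lb-gprd" --machine-type "c2-standard-8"
  gcloud compute instances --project=gitlab-production start "fe-${i}-lb-gprd"
done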

Monitoring

Key metrics to observe

  • CPU scheduling saturation and utilization on the fe-*-lb-gprd fleet (the saturation this change addresses), and HAProxy server state/availability while instances are drained and resized.
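
To watch drain state directly on a node, one option (assuming the HAProxy admin socket is exposed at a conventional path; adjust to the local config) is to read server states from the stats socket:

# Print the proxy, server, and status (UP/DRAIN/MAINT) columns from HAProxy's
# CSV stats output; the socket path below is an assumed common default.
echo "show stat" | sudo socat stdio /run/haproxy/admin.sock | cut -d, -f1,2,18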

Change Reviewer checklist

C4 C3 C2 C1:

  • Check if the following applies:
    • The scheduled day and time of execution of the change is appropriate.
    • The change plan is technically accurate.
    • The change plan includes estimated timing values based on previous testing.
    • The change plan includes a viable rollback plan.
    • The specified metrics/monitoring dashboards provide sufficient visibility for the change.

C2 C1:

  • Check if the following applies:
    • The complexity of the plan is appropriate for the corresponding risk of the change. (i.e. the plan contains clear details).
    • The change plan includes success measures for all steps/milestones during the execution.
    • The change adequately minimizes risk within the environment/service.
    • The performance implications of executing the change are well-understood and documented.
    • The specified metrics/monitoring dashboards provide sufficient visibility for the change.
      • If not, is it possible (or necessary) to make changes to observability platforms for added visibility?
    • The change has a primary and secondary SRE with knowledge of the details available during the change window.
The labels "blocks deployments" and/or "blocks feature-flags" are applied as necessary.

Change Technician checklist

  • Check if all items below are complete:
    • The change plan is technically accurate.
    • This Change Issue is linked to the appropriate Issue and/or Epic
    • Change has been tested in staging and results noted in a comment on this issue.
    • A dry-run has been conducted and results noted in a comment on this issue.
    • The change execution window respects the Production Change Lock periods.
    • For C1 and C2 change issues, the change event is added to the GitLab Production calendar.
    • For C1 and C2 change issues, the SRE on-call has been informed prior to change being rolled out. (In #production channel, mention @sre-oncall and this issue and await their acknowledgement.)
    • For C1 and C2 change issues, the SRE on-call provided approval with the eoc_approved label on the issue.
    • For C1 and C2 change issues, the Infrastructure Manager provided approval with the manager_approved label on the issue.
    • Release managers have been informed (If needed! Cases include DB change) prior to change being rolled out. (In #production channel, mention @release-managers and this issue and await their acknowledgment.)
    • There are currently no active incidents that are severity1 or severity2
    • If the change involves doing maintenance on a database host, an appropriate silence targeting the host(s) should be added for the duration of the change.