
2023-04-12: Drain us-east1-d for registry.gitlab.com

Production Change

Change Summary

We have an ongoing incident in which registry PATCH requests are degraded in us-east1-d.

This CR drains the registry-us-east1-d servers on the registry HAProxy load balancers to attempt to mitigate the issue.
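
A quick pre-check to confirm the backend and server names used in the change steps below (a sketch only; run on any one of the gprd-base-lb-registry load balancers, assuming access to the HAProxy admin socket):

  • echo "show servers state registry" | sudo socat stdio /run/haproxy/admin.sock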

Change Details

  1. Services Impacted - Service::Registry
  2. Change Technician - @nduff
  3. Change Reviewer - @f_santos
  4. Time tracking - 5 minutes
  5. Downtime Component - none

Detailed steps for the change

Change Steps - steps to take to execute the change

Estimated Time to Complete (mins) - 5

  • Set label change::in-progress /label ~change::in-progress
  • Create an alertmanager silence for alert_class="traffic_cessation" + feature_category="container_registry" + env="gprd" (see the amtool sketch after this list).
    • This may not catch all alerts but we can update as needed.
  • Drain the registry-us-east1-d backend server on the haproxy nodes
    • knife ssh -p 2222 "roles:gprd-base-lb-registry" "echo set server registry/registry-us-east1-d state drain | sudo socat stdio /run/haproxy/admin.sock"
  • Drain the gke-cny-registry (canary) backend server
    • knife ssh -p 2222 "roles:gprd-base-lb-registry" "echo set server registry/gke-cny-registry state drain | sudo socat stdio /run/haproxy/admin.sock"
  • Set label change::complete /label ~change::complete
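
For the silence step above, one option is amtool (a sketch, not a prescribed command; the duration, comment, and alertmanager URL placeholder are assumptions, and the silence can equally be created via the Alertmanager UI):

  • amtool silence add alert_class=traffic_cessation feature_category=container_registry env=gprd --duration=2h --author=nduff --comment="Draining registry us-east1-d for this CR" --alertmanager.url=<gprd alertmanager URL>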

Rollback

Rollback steps - steps to be taken in the event of a need to rollback this change

Estimated Time to Complete (mins) - 5

  • Set backends back to ready state
    • knife ssh -p 2222 "roles:gprd-base-lb-registry" "echo set server registry/registry-us-east1-d state ready | sudo socat stdio /run/haproxy/admin.sock"
    • knife ssh -p 2222 "roles:gprd-base-lb-registry" "echo set server registry/gke-cny-registry state ready | sudo socat stdio /run/haproxy/admin.sock"
  • (Optional) Check the backend status on a given node with sudo hatop -s /run/haproxy/admin.sock (a non-interactive alternative is sketched after this list)
  • Set label change::aborted /label ~change::aborted
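
As a non-interactive alternative to hatop for confirming both servers are back in the ready/UP state (a sketch; field 18 of the HAProxy show stat CSV is the status column):

  • echo "show stat" | sudo socat stdio /run/haproxy/admin.sock | grep '^registry,' | cut -d, -f1,2,18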

Monitoring

Key metrics to observe

  • Metric: Registry RPS

  • Metric: Registry HPA / Pod Scaling

    • Location: Thanos Dashboard
    • What changes to this metric should prompt a rollback: We have a pod limit of 90 for our HPA. As we are draining one zone, we expect traffic to shift to the remaining two zones, which will likely cause pod counts to scale up. At the moment we are well below the limits and should not require any adjustments to the HPA limits. If we start hitting the HPA limit for any given zone, we should look to increase the HPA value before making a rollback decision.
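
Pod scaling can also be spot-checked directly against the regional cluster (a sketch; the namespace and HPA name shown here are assumptions and will differ per cluster):

  • kubectl -n gitlab get hpa gitlab-registry --watch

The REPLICAS column can be compared against the MAXPODS limit of 90 noted above.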

Change Reviewer checklist

C4 C3 C2 C1:

  • Check if the following applies:
    • The scheduled day and time of execution of the change is appropriate.
    • The change plan is technically accurate.
    • The change plan includes estimated timing values based on previous testing.
    • The change plan includes a viable rollback plan.
    • The specified metrics/monitoring dashboards provide sufficient visibility for the change.

C2 C1:

  • Check if the following applies:
    • The complexity of the plan is appropriate for the corresponding risk of the change. (i.e. the plan contains clear details).
    • The change plan includes success measures for all steps/milestones during the execution.
    • The change adequately minimizes risk within the environment/service.
    • The performance implications of executing the change are well-understood and documented.
    • The specified metrics/monitoring dashboards provide sufficient visibility for the change.
      • If not, is it possible (or necessary) to make changes to observability platforms for added visibility?
    • The change has a primary and secondary SRE with knowledge of the details available during the change window.
    • The labels "blocks deployments" and/or "blocks feature-flags" are applied as necessary.

Change Technician checklist

  • Check if all items below are complete:
    • The change plan is technically accurate.
    • This Change Issue is linked to the appropriate Issue and/or Epic
    • Change has been tested in staging and results noted in a comment on this issue.
    • A dry-run has been conducted and results noted in a comment on this issue.
    • The change execution window respects the Production Change Lock periods.
    • For C1 and C2 change issues, the change event is added to the GitLab Production calendar.
    • For C1 and C2 change issues, the SRE on-call has been informed prior to change being rolled out. (In #production channel, mention @sre-oncall and this issue and await their acknowledgement.)
    • For C1 and C2 change issues, the SRE on-call provided approval with the eoc_approved label on the issue.
    • For C1 and C2 change issues, the Infrastructure Manager provided approval with the manager_approved label on the issue.
    • Release managers have been informed (If needed! Cases include DB change) prior to change being rolled out. (In #production channel, mention @release-managers and this issue and await their acknowledgment.)
    • There are currently no active incidents that are severity::1 or severity::2.
    • If the change involves doing maintenance on a database host, an appropriate silence targeting the host(s) should be added for the duration of the change.