Rollout of ci-gateway ILB usage on shared runners

Production Change

Change Summary

Finalization of https://gitlab.com/gitlab-com/gl-infra/reliability/-/issues/14874 - step 2/2.

While working on a Rapid Action to reduce networking costs caused by CI Runners, we were left with one last step: migrating shared runners to use the internal load balancer for Runner API and Git communication. That migration was blocked by an edge case we found to be causing problems in a number of customer jobs.

We've found a solution for this problem and are going to deploy it now.

This change will reconfigure shared runners and enable usage of the ci-gateway ILB on them.
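For illustration, routing a runner manager's traffic through the ILB typically comes down to pointing the runner's `url` and `clone_url` settings at the internal endpoint in `config.toml`. This is a sketch only; the hostname and runner name below are hypothetical placeholders, not the actual production configuration:

```toml
# /etc/gitlab-runner/config.toml (sketch; hostname is a placeholder)
[[runners]]
  name = "shared-runner-manager-example"
  # Runner API traffic goes through the internal ci-gateway load balancer
  url = "https://ci-gateway.internal.example.net"
  # Git fetch/clone traffic for jobs is also redirected to the ILB
  clone_url = "https://ci-gateway.internal.example.net"
```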

The change was tested on runners for staging.gitlab.com - it has been present there for a few weeks and we haven't seen it cause any problems. We've also updated the private and shared-gitlab-org runners to the full version of the configuration with #7205 (closed).

Change Details

  1. Services Impacted - ~"Service::CI Runners"
  2. Change Technician - @rehab
  3. Change Reviewer - @tmaczukin
  4. Time tracking - 60, plus time spent waiting to confirm no problem reports come in
  5. Downtime Component - no downtime expected

Detailed steps for the change

Change Steps - steps to take to execute the change

Estimated Time to Complete (mins) - 30, plus time spent waiting to confirm no problem reports come in

  • Set label ~change::in-progress with /label ~change::in-progress

  • Stop chef-client on all active shared (blue or green) runners

  • Merge https://gitlab.com/gitlab-com/gl-infra/chef-repo/-/merge_requests/3596

  • Wait for CI to upload changes to the chef server

  • Force chef-client run on the first two active (blue or green) shared runner managers

  • Test with https://gitlab.com/tmaczukin-test-projects/test-git.ci-gateway-usage/-/pipelines (using the jobs that target the `shared` runners)

  • Notify #production and #support_gitlab-com that this change is initiated

    Hey team! To finalize the CI networking cost rapid action leftovers, we've initiated
    a change rollout for https://gitlab.com/gitlab-com/gl-infra/production/-/issues/7206.
    
    With this, 40% of CI/CD jobs executed on the `shared` runners will use internal networking
    for the initial `git fetch` execution. The change should be fully transparent to
    customers and we don't expect it to affect their jobs, but be aware that such a change
    has happened.
  • Wait a few hours to make sure we're not getting any reports of unexpected job failures

  • Force chef-client run on the other three active (blue or green) shared runner managers

  • Notify #production and #support_gitlab-com that this change is fully rolled out

    Hey team! Rollout of https://gitlab.com/gitlab-com/gl-infra/production/-/issues/7206 is finished.
    
    With this, 100% of CI/CD jobs executed on the `shared` runners will use internal networking
    for the initial `git fetch` execution. As mentioned before, the change should be fully
    transparent to customers and we don't expect it to affect their jobs.
  • Set label ~change::complete with /label ~change::complete

NO BLUE/GREEN SWITCH IS NEEDED IN THIS CASE!
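The chef-client stop and force-run steps above can be sketched as a small shell dry run. It only prints the `knife ssh` commands it would execute; the node names and search queries are hypothetical placeholders, not the real production inventory:

```shell
#!/bin/sh
# Dry-run sketch of the rollout steps above: prints the knife commands
# instead of executing them. Node names are hypothetical placeholders.

ALL_MANAGERS="mgr-01 mgr-02 mgr-03 mgr-04 mgr-05"
CANARY_MANAGERS="mgr-01 mgr-02"

# Stop chef-client on every active manager so no node converges on the
# new chef-repo change before we are ready.
stop_chef() {
  for node in $ALL_MANAGERS; do
    echo "knife ssh \"name:${node}\" 'sudo systemctl stop chef-client'"
  done
}

# Force a one-off chef-client converge on a subset of managers.
force_run() {
  for node in "$@"; do
    echo "knife ssh \"name:${node}\" 'sudo chef-client'"
  done
}

stop_chef
force_run $CANARY_MANAGERS  # first two managers; the remaining three
                            # follow after the observation window
```

The split between `CANARY_MANAGERS` and the rest mirrors the 40%/100% rollout phases described in the notification messages.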

Rollback

Rollback steps - steps to be taken in the event of a need to rollback this change

Estimated Time to Complete (mins) - 30

  • Revert https://gitlab.com/gitlab-com/gl-infra/chef-repo/-/merge_requests/3596 and force a chef-client run on the previously updated runner managers

NO BLUE/GREEN SWITCH IS NEEDED IN THIS CASE!

Monitoring

Key metrics to observe

Change Reviewer checklist

C4 C3 C2 C1:

  • Check if the following applies:
    • The scheduled day and time of execution of the change is appropriate.
    • The change plan is technically accurate.
    • The change plan includes estimated timing values based on previous testing.
    • The change plan includes a viable rollback plan.
    • The specified metrics/monitoring dashboards provide sufficient visibility for the change.

C2 C1:

  • Check if the following applies:
    • The complexity of the plan is appropriate for the corresponding risk of the change. (i.e. the plan contains clear details).
    • The change plan includes success measures for all steps/milestones during the execution.
    • The change adequately minimizes risk within the environment/service.
    • The performance implications of executing the change are well-understood and documented.
    • The specified metrics/monitoring dashboards provide sufficient visibility for the change.
      • If not, is it possible (or necessary) to make changes to observability platforms for added visibility?
    • The change has a primary and secondary SRE with knowledge of the details available during the change window.
    • The labels ~"blocks deployments" and/or ~"blocks feature-flags" are applied as necessary

Change Technician checklist

  • Check if all items below are complete:
    • The change plan is technically accurate.
    • This Change Issue is linked to the appropriate Issue and/or Epic
    • Change has been tested in staging and results noted in a comment on this issue.
    • A dry-run has been conducted and results noted in a comment on this issue.
    • For C1 and C2 change issues, the SRE on-call has been informed prior to change being rolled out. (In #production channel, mention @sre-oncall and this issue and await their acknowledgement.)
    • Release managers have been informed (If needed! Cases include DB change) prior to change being rolled out. (In #production channel, mention @release-managers and this issue and await their acknowledgment.)
    • There are currently no active incidents that are ~severity::1 or ~severity::2
    • If the change involves doing maintenance on a database host, an appropriate silence targeting the host(s) should be added for the duration of the change.