2026-01-23: Private runners - fallback to green deployment
Production Change
Change Summary
Fail over private runners from the blue deployment to the green deployment to test the change in gitlab-com/gl-infra/ci-runners/deployer!57 (merged) and bring all private runners back in sync. runners-manager-private-blue-5 is running version 18.7.0~pre.433.g3a5f2314 while the rest of the fleet is running 18.7.0~pre.1.g2f054230. No explicit version is set for this runner in roles/runners-manager-private-blue-5.json, and the runners fell out of sync due to chef-repo MR #6782.
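As a hedged pre-check (and post-check after the failover), something like the sketch below can confirm which gitlab-runner version each manager is running. Only runners-manager-private-blue-5 is named in this issue; the green manager name and the FQDN suffix are assumptions for illustration, so substitute the real inventory.

```shell
#!/usr/bin/env bash
# Sketch: print the gitlab-runner version on each private runner manager.
# Hostnames below are assumptions (only runners-manager-private-blue-5 is named
# in this issue, and the FQDN suffix is invented); adjust to the real inventory.
set -euo pipefail

MANAGERS=(
  runners-manager-private-blue-5.example.internal
  runners-manager-private-green-5.example.internal
)

for host in "${MANAGERS[@]}"; do
  printf '%s: ' "$host"
  ssh "$host" 'gitlab-runner --version | head -n 1' || echo 'unreachable'
done
```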
Change Details
- Services Impacted - Private runners
- Change Technician - @rehab
- Change Reviewer - @rehab
- Scheduled Date and Time (UTC in format YYYY-MM-DD HH:MM) - 2026-01-23 (TBD)
- Time tracking - 60 minutes
- Downtime Component - none
Important
If your change involves scheduled maintenance, add a step to set and unset maintenance mode per our runbooks. This will make sure SLA calculations adjust for the maintenance period.
Preparation
Note
The following checklists must be done in advance, before setting the label `change::scheduled`
Change Reviewer checklist
- Check if the following applies:
- The scheduled day and time of execution of the change is appropriate.
- The change plan is technically accurate.
- The change plan includes estimated timing values based on previous testing.
- The change plan includes a viable rollback plan.
- The specified metrics/monitoring dashboards provide sufficient visibility for the change.
- Check if the following applies:
- The complexity of the plan is appropriate for the corresponding risk of the change. (i.e. the plan contains clear details).
- The change plan includes success measures for all steps/milestones during the execution.
- The change adequately minimizes risk within the environment/service.
- The performance implications of executing the change are well-understood and documented.
- The specified metrics/monitoring dashboards provide sufficient visibility for the change.
- If not, is it possible (or necessary) to make changes to observability platforms for added visibility?
- The change has a primary and secondary SRE with knowledge of the details available during the change window.
- The change window has been agreed with Release Managers in advance of the change. If the change is planned for APAC hours, this issue has an agreed pre-change approval.
- The labels blocks deployments and/or blocks feature-flags are applied as necessary.
Change Technician checklist
- The Change Criticality has been set appropriately and requirements have been reviewed.
- The change plan is technically accurate.
- The rollback plan is technically accurate and detailed enough to be executed by anyone with access.
- This Change Issue is linked to the appropriate Issue and/or Epic
- Change has been tested in staging and results noted in a comment on this issue.
- A dry-run has been conducted and results noted in a comment on this issue.
- The change execution window respects the Production Change Lock periods.
- Once all boxes above are checked, mark the change request as scheduled: `/label ~"change::scheduled"`
- For C1 and C2 change issues, the change event is added to the GitLab Production calendar by the change-scheduler bot, which is scheduled to run every 2 hours.
- For C1 change issues, a Senior Infrastructure Manager has provided approval with the manager_approved label on the issue.
- For C2 change issues, an Infrastructure Manager has provided approval with the manager_approved label on the issue.
- For C1 and C2 changes, mention @gitlab-org/saas-platforms/inframanagers in this issue to provide visibility to all infrastructure managers.
- For C1, C2, or blocks deployments change issues, confirm with Release Managers that the change does not overlap with or hinder any release process. (In the `#production` channel, mention @release-managers and this issue and await their acknowledgment.)
- For C1 change issues, or C2 change issues happening during the weekend, SREs on-call must be informed at least 2 weeks in advance. Check the incident.io GitLab.com Production EOC schedule to find who will be on-call at the scheduled day and time.
Detailed steps for the change
Pre-execution steps
Note
The following steps should be done right at the scheduled time of the change request. The preparation steps are listed above.
- Make sure all tasks in the Change Technician checklist are done
- For C1 and C2 change issues, the SRE on-call has been informed prior to the change being rolled out.
- The SRE on-call has provided approval with the eoc_approved label on the issue.
- For C1, C2, or blocks deployments change issues, Release Managers have been informed prior to the change being rolled out. (In the `#production` channel, mention @release-managers and this issue and await their acknowledgment.)
- There are currently no active incidents that are severity1 or severity2.
- If the change involves doing maintenance on a database host, an appropriate silence targeting the host(s) should be added for the duration of the change (a hedged amtool sketch follows this list).
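This particular change does not touch database hosts, but for reference, a silence of that kind could be created with amtool along the lines below. The Alertmanager URL, the fqdn matcher label, and the example host value are all assumptions; match them to the real alert labels before use.

```shell
#!/usr/bin/env bash
# Sketch: add a temporary Alertmanager silence for a host under maintenance.
# AM_URL, the fqdn label, and the host value are placeholders/assumptions.
set -euo pipefail

AM_URL="${AM_URL:-http://alertmanager.example.internal:9093}"  # assumed address

amtool silence add \
  --alertmanager.url="${AM_URL}" \
  --author="rehab" \
  --comment="Change issue: silence during maintenance window" \
  --duration="1h" \
  'fqdn="db-host.example.internal"'
```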
Change steps - steps to take to execute the change
Estimated Time to Complete (mins) - 30
- Set label `change::in-progress`: `/label ~change::in-progress`
- Start `private/green` via `#production`: `/runner run start private green`
- Wait for the first jobs to be taken up by the newly started runner managers (see the verification sketch after this list)
- Stop `private/blue` via `#production`: `/runner run stop private blue`
- Inform the EOC that the deployment is done; in the original `#production` thread, leave the comment: "@sre-oncall This is done. New runners are up and running. Old runners are in draining mode and once they finish executing their last jobs (which may take up to a few hours) they will go to sleep."
- Monitor runner health and job execution on the green deployment
- Set label `change::complete`: `/label ~change::complete`
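A lightweight way to verify that the green managers are picking up jobs while the blue managers drain is sketched below. It assumes SSH access to the managers and that gitlab-runner's Prometheus metrics endpoint (listen_address) is exposed locally on :9252; the hostnames and port are assumptions, while gitlab_runner_jobs is the runner's standard gauge of currently executing jobs.

```shell
#!/usr/bin/env bash
# Sketch: show how many jobs each manager is currently executing, so green can
# be confirmed as active and blue as draining. Hostnames and the :9252 port are
# assumptions; gitlab-runner only serves metrics if listen_address is set in
# config.toml.
set -euo pipefail

MANAGERS=(
  runners-manager-private-green-5.example.internal  # assumed FQDN
  runners-manager-private-blue-5.example.internal   # assumed FQDN
)

for host in "${MANAGERS[@]}"; do
  echo "== ${host} =="
  # gitlab_runner_jobs is a gauge of jobs currently being executed by the runner
  ssh "$host" "curl -s http://localhost:9252/metrics | grep '^gitlab_runner_jobs'" \
    || echo 'metrics endpoint not reachable'
done
```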
Rollback
Rollback steps - steps to be taken in the event of a need to roll back this change
Estimated Time to Complete (mins) - 15
- Start `private/blue` via `#production`: `/runner run start private blue`
- Wait for the first jobs to be taken up by the newly started runner managers
- Stop `private/green` via `#production`: `/runner run stop private green`
- Set label `change::aborted`: `/label ~change::aborted`
Monitoring
Key metrics to observe
- Metric: Private runner job queue depth (a hedged query sketch follows this list)
  - Location: Grafana Dashboard
  - What changes to this metric should prompt a rollback: Sustained increase in queue depth or job failures on the green deployment
- Metric: Runner health status
  - Location: Kibana logs
  - What changes to this metric should prompt a rollback: Runners becoming offline or unhealthy on the green deployment
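For a quick spot-check outside Grafana, the Prometheus instant-query API can be hit directly, as in the sketch below. The Prometheus address and the gitlab_ci_pending_builds metric name are assumptions; only gitlab_runner_jobs is a standard gitlab-runner metric, so swap in the expressions actually used by the dashboards above.

```shell
#!/usr/bin/env bash
# Sketch: spot-check the monitoring metrics via the Prometheus HTTP API.
# PROM_URL and gitlab_ci_pending_builds are placeholders/assumptions; replace
# them with the expressions behind the Grafana panels referenced above.
set -euo pipefail

PROM_URL="${PROM_URL:-http://prometheus.example.internal:9090}"  # assumed address

query() {
  # Instant query endpoint: GET /api/v1/query?query=<expr>
  curl -sG "${PROM_URL}/api/v1/query" --data-urlencode "query=$1"
  echo
}

# Jobs currently executing, grouped by state (gitlab-runner's own gauge)
query 'sum by (state) (gitlab_runner_jobs)'

# Pending job queue depth -- metric name is a placeholder assumption
query 'sum(gitlab_ci_pending_builds)'
```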