2024-10-02: Rollout Ruby 3.2 to Production
Production Change
Change Summary
Following up on the test rollout to `gstg-cny` and `gprd-cny` in #18470 (closed), the plan is to roll out Ruby 3.2 all the way to production (`gprd-main`).
During the change duration, auto-deploys will need to be paused.
Change Details
- Services Impacted - GitLab Rails, Sidekiq, and any other service that uses Ruby
- Change Technician - @jennykim-gitlab
- Change Reviewer - @rpereira2
- Time tracking - 480 minutes (8 hours)
- Downtime Component - none
- Start Time - 14:00 UTC
Set Maintenance Mode in GitLab
If your change involves scheduled maintenance, add a step to set and unset maintenance mode per our runbooks. This will make sure SLA calculations adjust for the maintenance period.

No need to set maintenance mode.
Detailed steps for the change
Change Steps - steps to take to execute the change
- [ ] No PDM (post-deploy migration) should be run on the day of the CR.
- [ ] Note the last deployment package that was successfully deployed to production in the Rollback section.

Estimated Time to Complete (mins) - 480 minutes (8 hours)

- [ ] Set label `~change::in-progress`: /label ~change::in-progress
Pause auto deploys
- [ ] Pause auto deploys: /chatops run auto_deploy pause
Prepare a Ruby 3.2 package
- [ ] Inform the @sre-on-call and @release-managers in #production, and the engineers in the #f_ruby3 channel in Slack, that we are starting to build a Ruby 3.2 package to deploy to production.
- [ ] Set `USE_NEXT_RUBY_VERSION_IN_AUTODEPLOY` to `true` in the following projects:
  - https://dev.gitlab.org/gitlab/omnibus-gitlab/-/settings/ci_cd
  - https://dev.gitlab.org/gitlab/charts/components/images/-/settings/ci_cd
- [ ] Merge MR to update README with new ruby version: gitlab-org/gitlab!167919 (merged)
Start a deployment pipeline
- [ ] Trigger a deployment pipeline by running the "MANUAL auto-deploy pick&tag" inactive manual scheduled pipeline: https://ops.gitlab.net/gitlab-org/release/tools/-/pipeline_schedules/
- [ ] Make sure a new auto-deploy pipeline has started.
  - Note the new auto-deploy pipeline: https://ops.gitlab.net/gitlab-org/release/tools/-/pipelines/3753273
- [ ] Verify that the Omnibus package contains Ruby 3.2.5.
  - Find the image reference of the Docker image built by the Docker job in the Omnibus packager pipeline. You can search the job logs for the string `Pushed dev.gitlab.org:5005/gitlab/omnibus-gitlab` (it is towards the end, under the collapsible section `docker-push-staging`).
  - Run the following command locally to check the bundled Ruby version: `docker run -it dev.gitlab.org:5005/gitlab/charts/components/images/gitlab-webservice-ee:<IMAGE_REFERENCE> -- ruby --version`
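Once the `ruby --version` output is captured, the check can be scripted so a mismatch fails loudly. A minimal sketch; the captured output line below is illustrative, not from a real run:

```shell
# Assert that the bundled Ruby reported by the image is 3.2.x.
# In practice ruby_version_output would be captured from the
# `docker run ... ruby --version` step above; this value is a stand-in.
ruby_version_output="ruby 3.2.5 (2024-09-03 revision ef084cc8f4) [x86_64-linux]"

if echo "$ruby_version_output" | grep -q '^ruby 3\.2\.'; then
  echo "OK: package bundles Ruby 3.2"
else
  echo "FAIL: unexpected Ruby version: $ruby_version_output" >&2
  exit 1
fi
```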
Deploy the Ruby 3.2 Package

- [ ] Ping the monitoring engineers (@stanhu) and @release-managers in the #f_ruby3 channel in Slack when the package is deployed to `gstg-cny`.
- [ ] Check monitoring
- [ ] Make sure the Quality smoke and reliable pipelines on gstg-cny have passed. If there are failures, ask the Quality on-call to have a look to determine whether the failures are related to the Ruby 3.2 rollout.
- [ ] Let the deployment continue up to `gprd-cny`.
- [ ] Ping the monitoring engineers (@stanhu) and @release-managers in the #f_ruby3 channel in Slack when the package is deployed to `gprd-cny`.
- [ ] Check monitoring
- [ ] Make sure the Quality smoke and reliable pipelines on gprd-cny have passed. If there are failures, ask the Quality on-call to have a look to determine whether the failures are related to the Ruby 3.2 rollout.
- [ ] Promote the package to `gstg` once the monitoring engineers (@stanhu) give the green light.
- [ ] Keep an eye out for any `gprd` deployment jobs and cancel them.
- [ ] Ping the monitoring engineers (@stanhu) and @release-managers in the #f_ruby3 channel in Slack when the package is deployed to `gstg`.
- [ ] Check monitoring
Deploy to Production
We will manually deploy to the zonal clusters first, then the regional cluster, with bake time in between each cluster to allow for monitoring.
- [ ] Set `MANUAL_GPRD_DEPLOY` to `true` in https://ops.gitlab.net/gitlab-com/gl-infra/k8s-workloads/gitlab-com/-/settings/ci_cd
- [ ] Cancel the `notify_success:gprd` job so we do not accidentally announce a successful deploy during the baking time of the manual jobs
- [ ] Restart the previously cancelled `gprd` deployment job(s) to start the deployment to production
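Besides the dashboards, one extra sanity check during the per-cluster rollout is confirming which package each cluster's pods actually run. A sketch of the idea; the tag values are placeholders, and in practice the tag list would come from the cluster itself (e.g. via kubectl with the real context and namespace names, which are not spelled out here):

```shell
# Sanity-check sketch: given the image tags reported by a zonal cluster's
# webservice pods, assert they all carry the newly built package tag.
expected_tag="<NEW_PACKAGE_TAG>"
pod_tags="<NEW_PACKAGE_TAG>
<NEW_PACKAGE_TAG>
<NEW_PACKAGE_TAG>"

# Count pods whose tag does not match the expected package tag.
stale=$(printf '%s\n' "$pod_tags" | grep -cv "^${expected_tag}$" || true)
if [ "$stale" -eq 0 ]; then
  echo "OK: all pods run $expected_tag"
else
  echo "WARN: $stale pod(s) still run an older package" >&2
fi
```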
Zonal cluster b
- [ ] Manually run `gprd-us-east1-b:auto-deploy`
- [ ] Ping the monitoring engineers (@stanhu) and @release-managers in the #f_ruby3 channel in Slack when the package is deployed to the zonal cluster `gprd-us-east1-b`.
- [ ] Check monitoring. Remember to set the zone to `b`
- [ ] Bake for 30 minutes, or until the engineers give the green light.
Zonal cluster c
- [ ] Manually run `gprd-us-east1-c:auto-deploy`
- [ ] Ping the monitoring engineers (@stanhu) and @release-managers in the #f_ruby3 channel in Slack when the package is deployed to the zonal cluster `gprd-us-east1-c`.
- [ ] Check monitoring. Remember to set the zone to `c`
- [ ] Bake for 30 minutes, or until the engineers give the green light.
Zonal cluster d
- [ ] Manually run `gprd-us-east1-d:auto-deploy`
- [ ] Ping the monitoring engineers (@stanhu) in the #f_ruby3 channel in Slack when the package is deployed to the zonal cluster `gprd-us-east1-d`.
- [ ] Check monitoring. Remember to set the zone to `d`
- [ ] Bake for 15 minutes, or until the engineers give the green light.
Regional cluster
- [ ] Manually run `gprd:auto-deploy`
- [ ] Inform the @sre-on-call and @release-managers in #production on Slack, and the monitoring engineers in the #f_ruby3 channel in Slack, when the package is deployed to `gprd`.
- [ ] Check monitoring
Post-deploy
- [ ] Restart the previously cancelled `notify_success:gprd` job to announce the successful deployment to `gprd` in the Slack #announcements channel
- [ ] Remove `MANUAL_GPRD_DEPLOY` in https://ops.gitlab.net/gitlab-com/gl-infra/k8s-workloads/gitlab-com/-/settings/ci_cd
- [ ] Do not execute post-deploy migrations for the rest of the day (EMEA and AMER) beyond this point.

Unpause auto deploys

- [ ] Unpause auto deploys: /chatops run auto_deploy unpause
- [ ] Set label `~change::complete`: /label ~change::complete
Rollback
Last Ruby 3.1 package that was successfully deployed to production: `17.5.202410021000-6d7a87723b7.afc2f121a22`
Rollback steps - steps to be taken in the event of a need to rollback this change
Estimated Time to Complete (mins) - 60 minutes
Rollback production-canary only
If you have not promoted to production and need to roll back production-canary, follow these steps:

- [ ] Notify @sre-on-call and @release-managers in #production that Production Canary will be drained.
- [ ] /chatops run canary --disable --production
- [ ] Follow the steps in "Make sure that the next auto deploy package will be built with Ruby 3.1"
Rollback production and staging
If we need to roll back production and staging, follow the steps in https://gitlab.com/gitlab-org/release/docs/-/blob/master/runbooks/rollback-a-deployment.md to roll back to a Ruby 3.1 package. The steps are reproduced here as well:

- [ ] /chatops run rollback check gprd
- [ ] Notify @sre-on-call and @release-managers in #production that a rollback is about to be started. Make sure they know that Canary will also be drained.
- [ ] /chatops run canary --disable --production
- [ ] /chatops run deploy <PACKAGE NAME> gprd --rollback
- [ ] /chatops run rollback check gstg
- [ ] Notify @sre-on-call and @release-managers in #staging that a rollback is about to be started. Make sure they know that Canary will also be drained.
- [ ] /chatops run canary --disable --staging
- [ ] /chatops run deploy <PACKAGE NAME> gstg --rollback
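Before running the `deploy ... --rollback` commands, it is cheap to sanity-check that the package name you are about to paste looks like an auto-deploy package (such as the Ruby 3.1 package recorded above). A hedged sketch; the version-format regex is an assumption inferred from that one example, not an official specification:

```shell
# Guard sketch: verify a package name matches the auto-deploy naming
# pattern <major>.<minor>.<12-digit timestamp>-<sha>.<sha> before use.
package="17.5.202410021000-6d7a87723b7.afc2f121a22"

if printf '%s\n' "$package" | grep -Eq '^[0-9]+\.[0-9]+\.[0-9]{12}-[0-9a-f]+\.[0-9a-f]+$'; then
  echo "OK: $package looks like an auto-deploy package"
else
  echo "FAIL: $package does not match the expected package format" >&2
  exit 1
fi
```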
Make sure that the next auto deploy package will be built with Ruby 3.1
- [ ] Set `USE_NEXT_RUBY_VERSION_IN_AUTODEPLOY` to `false` in https://dev.gitlab.org/gitlab/omnibus-gitlab/-/settings/ci_cd.
- [ ] Set `USE_NEXT_RUBY_VERSION_IN_AUTODEPLOY` to `false` in https://dev.gitlab.org/gitlab/charts/components/images/-/settings/ci_cd.
- [ ] If you had already unpaused auto-deploys, cancel any auto-deploy pipelines whose packages were built before you changed the `USE_NEXT_RUBY_VERSION_IN_AUTODEPLOY` variable to `false`.
- [ ] Revert the MR that updated the README: gitlab-org/gitlab!167919 (merged)
- [ ] Set label `~change::aborted`: /label ~change::aborted
Monitoring
Key metrics to observe
- Dashboards/metrics:
- Monitor the following dashboards for an unhealthy dip in service health in the environment/cluster that is being rolled out.
- Deployment health, configurable with environment, stage, and type/service
- Kubernetes compute resource/cluster health, configurable with clusters
- Kubernetes compute resource/pods health, configurable with clusters and namespace
- Kubernetes networking, configurable with clusters
- Per-service dashboards (change `env` and `stage` to toggle between `gstg`/`gprd` and `main`/`cny`):
  - api (overview, containers)
  - web (overview, containers)
  - websockets (overview, containers)
  - git (overview, containers)
  - sidekiq (overview, containers)
- Kibana - Puma (edit `json.type` to filter by service, `json.stage` for `cny` vs `main`)
- Kibana - Sidekiq (edit `json.shard` to switch between job types)
- Sentry
- QA runs can be observed via Slack:
  - #announcements - Besides QA messages, multiple messages are sent to this channel to account for the different deployments.
  - QA slack channels - There is a channel per environment; for example, a failure on gstg and gstg-cny will be posted in #qa-staging, a failure on gprd-cny and gprd will be posted in #qa-production, etc.
- Dealing with deploy failures: https://gitlab.com/gitlab-org/release/docs/-/blob/master/general/deploy/failures.md
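Outside Kibana, the same structured-log fields can be used for quick ad-hoc filtering of exported log lines. A rough sketch; the sample log lines are illustrative, and Kibana exposes the raw `type`/`stage` fields under the `json.` prefix mentioned above:

```shell
# Quick local filter mirroring the Kibana queries: pick out cny-stage
# Puma log lines by matching the type and stage fields.
logs='{"type":"puma","stage":"cny","status":500}
{"type":"puma","stage":"main","status":200}
{"type":"sidekiq","shard":"catchall","stage":"main"}'

cny_puma=$(printf '%s\n' "$logs" | grep '"type":"puma"' | grep '"stage":"cny"')
echo "$cny_puma"
```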
Change Reviewer checklist
- [ ] Check if the following applies:
  - The scheduled day and time of execution of the change is appropriate.
  - The change plan is technically accurate.
  - The change plan includes estimated timing values based on previous testing.
  - The change plan includes a viable rollback plan.
  - The specified metrics/monitoring dashboards provide sufficient visibility for the change.
- [ ] Check if the following applies:
  - The complexity of the plan is appropriate for the corresponding risk of the change (i.e. the plan contains clear details).
  - The change plan includes success measures for all steps/milestones during the execution.
  - The change adequately minimizes risk within the environment/service.
  - The performance implications of executing the change are well-understood and documented.
  - The specified metrics/monitoring dashboards provide sufficient visibility for the change.
    - If not, is it possible (or necessary) to make changes to observability platforms for added visibility?
  - The change has a primary and secondary SRE with knowledge of the details available during the change window.
  - The change window has been agreed with Release Managers in advance of the change. If the change is planned for APAC hours, this issue has an agreed pre-change approval.
  - The labels blocks deployments and/or blocks feature-flags are applied as necessary.
Change Technician checklist
- [ ] Check if all items below are complete:
  - The change plan is technically accurate.
  - This Change Issue is linked to the appropriate Issue and/or Epic.
  - Change has been tested in staging and results noted in a comment on this issue.
  - A dry-run has been conducted and results noted in a comment on this issue.
  - The change execution window respects the Production Change Lock periods.
  - For C1 and C2 change issues, the change event is added to the GitLab Production calendar.
  - For C1 and C2 change issues, the SRE on-call has been informed prior to the change being rolled out. (In the #production channel, mention @sre-oncall and this issue and await their acknowledgement.)
  - For C1 and C2 change issues, the SRE on-call provided approval with the eoc_approved label on the issue.
  - For C1 and C2 change issues, the Infrastructure Manager provided approval with the manager_approved label on the issue.
  - Release managers have been informed prior to any C1, C2, or blocks deployments change being rolled out. (In the #production channel, mention @release-managers and this issue and await their acknowledgment.)
  - There are currently no active incidents that are severity1 or severity2.
  - If the change involves doing maintenance on a database host, an appropriate silence targeting the host(s) should be added for the duration of the change.