2024-10-02: Rollout Ruby 3.2 to Production

Production Change

Change Summary

Following up on the test rollout in gstg-cny and gprd-cny (#18470, closed), the plan is to roll out Ruby 3.2 all the way to production (gprd-main).

During the change duration, auto-deploys will need to be paused.

Change Details

  1. Services Impacted - GitLab Rails, Sidekiq, and any other service that uses Ruby
  2. Change Technician - @jennykim-gitlab
  3. Change Reviewer - @rpereira2
  4. Time tracking - 480 minutes (8 hours)
  5. Downtime Component - none
  6. Start Time - 14:00 UTC

Set Maintenance Mode in GitLab

If your change involves scheduled maintenance, add a step to set and unset maintenance mode per our runbooks. This will make sure SLA calculations adjust for the maintenance period.

No need to set maintenance mode for this change.

Detailed steps for the change

Change Steps - steps to take to execute the change

  • No PDM should be run on the day of the CR
  • Note the last deployment package that was successfully deployed to production in the Rollback section.

Estimated Time to Complete (mins) - 480 minutes (8 hours)

Pause auto deploys

  • Pause auto deploys: /chatops run auto_deploy pause

Prepare a Ruby 3.2 package

Start a deployment pipeline

  • Trigger a deployment pipeline by manually running the inactive "MANUAL auto-deploy pick&tag" scheduled pipeline: https://ops.gitlab.net/gitlab-org/release/tools/-/pipeline_schedules/.
  • Make sure a new auto deploy pipeline has started.
  • Verify that the Omnibus package contains Ruby 3.2.5.
    • Find out the image reference of the Docker image built by the Docker job in the Omnibus packager pipeline. Search the job log for the string Pushed dev.gitlab.org:5005/gitlab/omnibus-gitlab (it is towards the end, under the collapsible section docker-push-staging).
    • Run the following command locally to check the bundled Ruby version: docker run --rm dev.gitlab.org:5005/gitlab/charts/components/images/gitlab-webservice-ee:<IMAGE_REFERENCE> ruby --version
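As a quick sanity check before promoting the package, the version string printed by the container can be matched against the expected 3.2 series. A minimal sketch, assuming the standard `ruby --version` output format; the check_ruby_version helper and the sample string below are illustrative, not part of the runbook:

```shell
#!/bin/sh
# Hypothetical helper (not part of the runbook): check that a `ruby --version`
# string is in the 3.2 series before promoting the package.
check_ruby_version() {
  version_output="$1"   # e.g. the output of: docker run --rm <image> ruby --version
  case "$version_output" in
    "ruby 3.2."*) echo "OK: $version_output" ;;
    *)            echo "MISMATCH: $version_output"; return 1 ;;
  esac
}

# Sample string for illustration only; in practice, capture the docker output instead.
check_ruby_version "ruby 3.2.5 (2024-01-01 revision 0000000000) [x86_64-linux]"
```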

Deploy the Ruby 3.2 Package

  • Ping the monitoring engineers (@stanhu) and @release-managers in the #f_ruby3 channel on Slack when the package is deployed to gstg-cny.
  • Make sure the Quality smoke and reliable pipelines on gstg-cny have passed. If there are failures, ask the Quality on-call to have a look and determine whether the failures are related to the Ruby 3.2 rollout.
  • Let the deployment continue up to gprd-cny.
  • Ping the monitoring engineers (@stanhu) and @release-managers in the #f_ruby3 channel on Slack when the package is deployed to gprd-cny.
  • Make sure the Quality smoke and reliable pipelines on gprd-cny have passed. If there are failures, ask the Quality on-call to have a look and determine whether the failures are related to the Ruby 3.2 rollout.
  • Promote the package to gstg once the monitoring engineers (@stanhu) give the green light.
  • Keep an eye out for any gprd deployment jobs and cancel them.
  • Ping the monitoring engineers (@stanhu) and @release-managers in the #f_ruby3 channel on Slack when the package is deployed to gstg.

Deploy to Production

We will manually deploy to the zonal clusters first, then to the regional cluster, baking in between each cluster to allow time for monitoring.

Zonal cluster b

  • Manually run gprd-us-east1-b:auto-deploy
  • Ping the monitoring engineers (@stanhu) and @release-managers in the #f_ruby3 channel on Slack when the package is deployed to the zonal cluster gprd-us-east1-b.
  • Bake for 30 minutes, or until the engineers give the green light.

Zonal cluster c

  • Manually run gprd-us-east1-c:auto-deploy
  • Ping the monitoring engineers (@stanhu) and @release-managers in the #f_ruby3 channel on Slack when the package is deployed to the zonal cluster gprd-us-east1-c.
  • Bake for 30 minutes, or until the engineers give the green light.

Zonal cluster d

  • Manually run gprd-us-east1-d:auto-deploy
  • Ping the monitoring engineers (@stanhu) in the #f_ruby3 channel on Slack when the package is deployed to the zonal cluster gprd-us-east1-d.
  • Bake for 15 minutes, or until the engineers give the green light.

Regional cluster

  • Manually run gprd:auto-deploy
  • Inform the @sre-on-call and @release-managers in #production on Slack, and the monitoring engineers in the #f_ruby3 channel, when the package is deployed to gprd.

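The zonal-then-regional sequence above can be sketched as a small loop. This is illustrative only: the real jobs are triggered manually in the deployment pipeline, and the script just prints the rollout order and bake times taken from the steps above.

```shell
#!/bin/sh
# Illustrative sketch of the rollout order described above; it only prints the
# sequence, since the actual auto-deploy jobs are triggered manually.
rollout_plan() {
  # cluster:bake-minutes pairs, in rollout order (regional gprd last, no bake)
  for entry in gprd-us-east1-b:30 gprd-us-east1-c:30 gprd-us-east1-d:15 gprd:0; do
    cluster="${entry%%:*}"
    bake="${entry##*:}"
    echo "run ${cluster}:auto-deploy"
    if [ "$bake" -gt 0 ]; then
      echo "bake for ${bake} minutes (or until engineers give the green light)"
    fi
  done
}

rollout_plan
```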
Post-deploy

Unpause auto deploys

  • /chatops run auto_deploy unpause
  • Set the change::complete label: /label ~change::complete

Rollback

Last Ruby 3.1 package that was successfully deployed to production

17.5.202410021000-6d7a87723b7.afc2f121a22
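When double-checking that the rollback target is the intended Ruby 3.1 package, the package name above can be split into its parts with plain shell parameter expansion. A sketch, assuming the <version>.<timestamp>-<sha>.<sha> naming convention shown above; what each trailing short SHA refers to is not stated here, so the labels below are illustrative:

```shell
#!/bin/sh
# Split an auto-deploy package name into its parts using parameter expansion.
# Assumed format: <major>.<minor>.<timestamp>-<sha1>.<sha2>
split_package() {
  pkg="$1"
  prefix="${pkg%%-*}"   # version + timestamp, e.g. 17.5.202410021000
  shas="${pkg#*-}"      # e.g. 6d7a87723b7.afc2f121a22
  echo "version+timestamp: ${prefix}"
  echo "first sha: ${shas%%.*}"
  echo "second sha: ${shas#*.}"
}

split_package "17.5.202410021000-6d7a87723b7.afc2f121a22"
```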

Rollback steps - steps to be taken in the event of a need to rollback this change

Estimated Time to Complete (mins) - 60 minutes

Rollback production-canary only

If you have not promoted to production and need to roll back production-canary, follow these steps:

Rollback production and staging

If we need to roll back production and staging, follow the steps in https://gitlab.com/gitlab-org/release/docs/-/blob/master/runbooks/rollback-a-deployment.md to roll back to a Ruby 3.1 package. The steps are reproduced here as well:

  • /chatops run rollback check gprd
  • Notify @sre-on-call and @release-managers in #production that a rollback is about to be started. Make sure they know that Canary will also be drained.
  • /chatops run canary --disable --production
  • /chatops run deploy <PACKAGE NAME> gprd --rollback
  • /chatops run rollback check gstg
  • Notify @sre-on-call and @release-managers in #staging that a rollback is about to be started. Make sure they know that Canary will also be drained.
  • /chatops run canary --disable --staging
  • /chatops run deploy <PACKAGE NAME> gstg --rollback

Make sure that the next auto deploy package will be built with Ruby 3.1.

Monitoring

Key metrics to observe

Change Reviewer checklist

C4 C3 C2 C1:

  • Check if the following applies:
    • The scheduled day and time of execution of the change is appropriate.
    • The change plan is technically accurate.
    • The change plan includes estimated timing values based on previous testing.
    • The change plan includes a viable rollback plan.
    • The specified metrics/monitoring dashboards provide sufficient visibility for the change.

C2 C1:

  • Check if the following applies:
    • The complexity of the plan is appropriate for the corresponding risk of the change. (i.e. the plan contains clear details).
    • The change plan includes success measures for all steps/milestones during the execution.
    • The change adequately minimizes risk within the environment/service.
    • The performance implications of executing the change are well-understood and documented.
    • The specified metrics/monitoring dashboards provide sufficient visibility for the change.
      • If not, is it possible (or necessary) to make changes to observability platforms for added visibility?
    • The change has a primary and secondary SRE with knowledge of the details available during the change window.
    • The change window has been agreed with Release Managers in advance of the change. If the change is planned for APAC hours, this issue has an agreed pre-change approval.
    • The labels blocks deployments and/or blocks feature-flags are applied as necessary.

Change Technician checklist

  • Check if all items below are complete:
    • The change plan is technically accurate.
    • This Change Issue is linked to the appropriate Issue and/or Epic
    • Change has been tested in staging and results noted in a comment on this issue.
    • A dry-run has been conducted and results noted in a comment on this issue.
    • The change execution window respects the Production Change Lock periods.
    • For C1 and C2 change issues, the change event is added to the GitLab Production calendar.
    • For C1 and C2 change issues, the SRE on-call has been informed prior to change being rolled out. (In #production channel, mention @sre-oncall and this issue and await their acknowledgement.)
    • For C1 and C2 change issues, the SRE on-call provided approval with the eoc_approved label on the issue.
    • For C1 and C2 change issues, the Infrastructure Manager provided approval with the manager_approved label on the issue.
    • Release managers have been informed prior to any C1, C2, or blocks deployments change being rolled out. (In #production channel, mention @release-managers and this issue and await their acknowledgment.)
    • There are currently no active incidents that are severity1 or severity2
    • If the change involves doing maintenance on a database host, an appropriate silence targeting the host(s) should be added for the duration of the change.
Edited by Jenny Kim