[GPRD] Deploy redis gem upgrade
Production Change
Change Summary
This change merges gitlab-org/gitlab!145203 (merged) and deploys it while tracking it in a change request. This is the third attempt at rolling out the gem upgrade. The first attempt was reverted due to a NOAUTH bug caused by GitLab's Peek instrumentation, resolved in scalability#2826 (closed). The second attempt was reverted due to a persistent Sentinel connection bug in redis-client, resolved in scalability#2867 (closed).
Part of &941 (closed)
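As a quick sanity check before and after the rollout, the loaded gem versions can be confirmed directly on a Rails node. A minimal sketch, assuming shell access to an Omnibus-style node where the gitlab-rails wrapper is available (the access path is an assumption, not part of the change plan):

```bash
# Sketch: confirm which redis / redis-client gem versions are actually loaded.
# Assumes shell access to a Rails node with the Omnibus gitlab-rails wrapper.
sudo gitlab-rails runner "
  puts 'redis:        ' + Gem.loaded_specs['redis'].version.to_s
  puts 'redis-client: ' + Gem.loaded_specs['redis-client'].version.to_s
"
```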
Change Details
- Services Impacted - all Redis services + Service::Web, Service::API, Service::Sidekiq, Service::Websockets
- Change Technician - @schin1
- Change Reviewer - @mayra-cabrera
- Time tracking - 5h
- Downtime Component - 4h
Set Maintenance Mode in GitLab
If your change involves scheduled maintenance, add a step to set and unset maintenance mode per our runbooks. This will make sure SLA calculations adjust for the maintenance period.
Detailed steps for the change
Change Steps - steps to take to execute the change
Estimated Time to Complete (300 mins)
- [DRI: Release manager] Release manager to merge gitlab-org/gitlab!145203 (merged)
- [DRI: @schin1] Create a revert MR using the revert button and link it to this CR -- gitlab-org/gitlab!145911 (closed)
- Set label change::in-progress (/label ~change::in-progress)
- [DRI: @schin1] Monitor gprd-cny behaviour and initial changes to the Redis servers. Refer to #17640 (closed). A node-local spot check for the Sentinel connection issue is sketched after this list.
  - As the deployment will reach staging-canary and staging-ref before gprd-cny, we can monitor Sentry and the GitLab Redis client exception rates for early warning signs.
  - Some issues may not surface due to the lack of scale (e.g. the Sentinel connection problem only surfaced in gprd).
- [DRI: Release manager] Release manager to promote the release to gstg and then (automatically) gprd
- [DRI: @schin1] Monitor gprd behaviour and changes to the Redis servers. Refer to #17640 (closed).
- [DRI: EOC] Wait ~1h to observe metrics across the Redis servers and the web/api/sidekiq services.
- Set label change::complete (/label ~change::complete)
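Since the second rollout was reverted for persistent Sentinel connections (scalability#2867), a lightweight way to spot-check for a recurrence while monitoring gprd-cny is to watch the count of established connections on the Sentinel port. A minimal sketch, assuming shell access to an affected Redis node and the default Sentinel port 26379 (adjust for the actual topology); the authoritative signal remains the incoming-TCP-connections chart listed under Monitoring:

```bash
# Sketch: watch established connections on the Sentinel port; a steadily
# growing count would suggest the connection leak from scalability#2867 is back.
# Assumes the default Sentinel port 26379.
while true; do
  count=$(ss -tn state established '( sport = :26379 or dport = :26379 )' | tail -n +2 | wc -l)
  echo "$(date -u '+%H:%M:%S') established sentinel connections: ${count}"
  sleep 30
done
```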
Rollback
Rollback steps - steps to be taken in the event of a need to roll back this change
Estimated Time to Complete (300 mins)
- Open an incident issue and block deployments
- [DRI: RM] Merge the revert MR gitlab-org/gitlab!145911 (closed)
- [DRI: EOC] If the deployment is still in gprd-cny:
  - Drain canary
  - Add the label blocks deployments to this change request
- [DRI: EOC] If the deployment reached production:
  - Drain canary
  - Request the release managers to roll back production
- [DRI: RM] After the revert MR has reached gprd-cny:
  - Notify the EOC that canary will be re-enabled
  - Enable canary
  - Notify the EOC that the deployment of the revert to the main stages will start
  - Deploy to staging
  - Deploy to production
- Set label change::aborted (/label ~change::aborted)
Monitoring
Key metrics to observe
- Metric: Incoming TCP connections to Redis nodes
  - Location: chart
  - What changes to this metric should prompt a rollback: Any sustained increase in incoming TCP connections would indicate a possible resurgence of scalability#2867 (closed). (A node-local spot-check sketch follows this list.)
- Metric: Redis server primary CPU
  - Location: chart
  - What changes to this metric should prompt a rollback: Any sharp and sustained increase should be further investigated. While this did not happen on the first or second attempt at the gem upgrade, we should have this dashboard ready.
- Metric: Error rates for the Redis client in gitlab-rails
  - Location: chart
  - What changes to this metric should prompt a rollback: Any sustained increase in exceptions should be further investigated using Sentry and logs.
- Metric: Apdex and error rates for web
  - Location: apdex and error rates
  - What changes to this metric should prompt a rollback: Apdex or error rates hitting the 1h outage threshold or sustaining past the 6h degradation threshold.
- Metric: Apdex and error rates for api
  - Location: apdex and error rates
  - What changes to this metric should prompt a rollback: Apdex or error rates hitting the 1h outage threshold or sustaining past the 6h degradation threshold.
- Metric: Apdex and error rates for sidekiq
  - Location: apdex and error rates
  - What changes to this metric should prompt a rollback: Apdex or error rates hitting the 1h outage threshold or sustaining past the 6h degradation threshold.
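For a node-local cross-check of the first two metrics above (connection count and CPU), redis-cli INFO exposes both. A minimal sketch, assuming redis-cli is available on the Redis node and AUTH is supplied via the standard REDISCLI_AUTH environment variable (the value below is a placeholder, not a real credential); the linked dashboards remain the canonical source:

```bash
# Sketch: periodically sample the client connection count and cumulative CPU
# from INFO. REDISCLI_AUTH is read by redis-cli for AUTH; placeholder value here.
export REDISCLI_AUTH='<redis-password>'
while true; do
  clients=$(redis-cli INFO clients | awk -F: '/^connected_clients:/ {print $2}' | tr -d '\r')
  cpu=$(redis-cli INFO cpu | awk -F: '/^used_cpu_sys:/ {print $2}' | tr -d '\r')
  # used_cpu_sys is cumulative seconds, so watch its rate of change, not its value.
  echo "$(date -u '+%H:%M:%S') connected_clients=${clients} used_cpu_sys=${cpu}"
  sleep 60
done
```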
Sentry: track the release's Sentry reports (https://new-sentry.gitlab.net/organizations/gitlab/releases/) for any abnormalities.
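The same release list can also be pulled from Sentry's REST API, which can be handier during an incident. A minimal sketch, assuming an auth token with org-level read scope (SENTRY_TOKEN is a placeholder):

```bash
# Sketch: list recent releases for the gitlab org via Sentry's REST API.
# SENTRY_TOKEN is a placeholder for an auth token with org-level read scope.
curl -sf -H "Authorization: Bearer ${SENTRY_TOKEN}" \
  "https://new-sentry.gitlab.net/api/0/organizations/gitlab/releases/"
```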
Change Reviewer checklist
- Check if the following applies:
  - The scheduled day and time of execution of the change is appropriate.
  - The change plan is technically accurate.
  - The change plan includes estimated timing values based on previous testing.
  - The change plan includes a viable rollback plan.
  - The specified metrics/monitoring dashboards provide sufficient visibility for the change.
- Check if the following applies:
  - The complexity of the plan is appropriate for the corresponding risk of the change (i.e. the plan contains clear details).
  - The change plan includes success measures for all steps/milestones during the execution.
  - The change adequately minimizes risk within the environment/service.
  - The performance implications of executing the change are well-understood and documented.
  - The specified metrics/monitoring dashboards provide sufficient visibility for the change.
    - If not, is it possible (or necessary) to make changes to observability platforms for added visibility?
  - The change has a primary and secondary SRE with knowledge of the details available during the change window.
  - The change window has been agreed with Release Managers in advance of the change. If the change is planned for APAC hours, this issue has an agreed pre-change approval.
  - The labels blocks deployments and/or blocks feature-flags are applied as necessary.
Change Technician checklist
- Check if all items below are complete:
  - The change plan is technically accurate.
  - This Change Issue is linked to the appropriate Issue and/or Epic.
  - Change has been tested in staging and results noted in a comment on this issue.
  - A dry-run has been conducted and results noted in a comment on this issue.
  - The change execution window respects the Production Change Lock periods.
  - For C1 and C2 change issues, the change event is added to the GitLab Production calendar.
  - For C1 and C2 change issues, the SRE on-call has been informed prior to the change being rolled out. (In the #production channel, mention @sre-oncall and this issue and await their acknowledgement.)
  - For C1 and C2 change issues, the SRE on-call provided approval with the eoc_approved label on the issue.
  - For C1 and C2 change issues, the Infrastructure Manager provided approval with the manager_approved label on the issue.
  - Release managers have been informed prior to any C1, C2, or blocks deployments change being rolled out. (In the #production channel, mention @release-managers and this issue and await their acknowledgment.)
  - There are currently no active incidents that are severity1 or severity2.
  - If the change involves doing maintenance on a database host, an appropriate silence targeting the host(s) should be added for the duration of the change.