Enable `workhorse_use_sidechannel` globally on staging
Production Change
Change Summary
For scalability#1193
In gitlab-org/gitlab!71047 (merged), we introduced a new RPC called PostUploadPackWithSidechannel to replace PostUploadPack, together with a feature flag named workhorse_use_sidechannel to control the rollout. The change in that MR has since been deployed to production smoothly. It is now time to turn on the flag on staging to switch to the new RPC call.
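For context, the flag only changes which RPC Workhorse invokes for HTTP upload-pack (clone/fetch) traffic. The sketch below is a hypothetical Go illustration of that flag-gated selection, not the actual Workhorse code: the function names are placeholders, and the flag plumbing (Rails evaluates workhorse_use_sidechannel and passes the result to Workhorse per request) is simplified.

```go
package main

import (
	"context"
	"fmt"
)

// Hypothetical stand-ins for the two call paths; the real Workhorse code calls
// the Gitaly SmartHTTP service instead.
func postUploadPack(ctx context.Context) error {
	fmt.Println("old path: PostUploadPack (bidirectional gRPC stream)")
	return nil
}

func postUploadPackWithSidechannel(ctx context.Context) error {
	fmt.Println("new path: PostUploadPackWithSidechannel (pack data over a sidechannel)")
	return nil
}

// handleUploadPack picks the RPC based on the state of the
// workhorse_use_sidechannel feature flag for this request.
func handleUploadPack(ctx context.Context, useSidechannel bool) error {
	if useSidechannel {
		return postUploadPackWithSidechannel(ctx)
	}
	return postUploadPack(ctx)
}

func main() {
	// With workhorse_use_sidechannel enabled (as in this change), the new RPC is used.
	_ = handleUploadPack(context.Background(), true)
}
```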
Change Details
- Services Impacted: Service::Gitaly, Service::Workhorse
- Change Technician: @qmnguyen0711
- Change Reviewer: @qmnguyen0711
- Time tracking: A few minutes
- Downtime Component: None
Detailed steps for the change
Pre-Change Steps - steps to be completed before execution of the change
Estimated Time to Complete (mins) - 1 min
- Set label change::in-progress on this issue
Change Steps - steps to take to execute the change
Estimated Time to Complete (mins) - 10 mins
- /chatops run feature set workhorse_use_sidechannel true --staging
Post-Change Steps - steps to take to verify the change
Estimated Time to Complete (mins) - 10 mins
- Randomly clone some repositories on staging. All of the clones should work.
- All the calls should go through PostUploadPackWithSidechannel (see the Workhorse client metric in the Monitoring section below).
- Ensure that the metrics listed in the Monitoring section look healthy.
Rollback
Rollback steps - steps to be taken in the event of a need to roll back this change
Estimated Time to Complete (mins) - 30 mins
- /chatops run feature set workhorse_use_sidechannel false --staging
Monitoring
Key metrics to observe
- Metric: Workhorse client
  - Location: https://thanos.gitlab.net/graph?g0.expr=sum(rate(grpc_client_handled_total%7Benvironment%3D%22gstg%22%2C%20job%3D%22gitlab-workhorse%22%2C%20grpc_method%3D~%22PostUploadPack.*%22%7D%5B30m%5D))%20by%20(grpc_method%2C%20grpc_code)&g0.tab=1&g0.stacked=0&g0.range_input=6h&g0.max_source_resolution=0s&g0.deduplicate=1&g0.partial_response=0&g0.store_matches=%5B%5D&g0.step_input=600
  - What changes to this metric should prompt a rollback: after the change, the gRPC call rate should shift 100% to PostUploadPackWithSidechannel and the status codes should all be "OK"; roll back if they do not (a query sketch follows this list).
- Metric: Logs
  - Location: https://nonprod-log.gitlab.net/goto/3cff9e5c83209187dfb6c98e7e9c9a30
  - What changes to this metric should prompt a rollback: if errors related to upload-pack requests start appearing in the logs after the flag is enabled
- Metric: Gitaly server
  - Location: https://dashboards.gitlab.net/d/gitaly-main/gitaly-overview?orgId=1&var-PROMETHEUS_DS=Global&var-environment=gstg&var-stage=main
  - What changes to this metric should prompt a rollback: if the Apdex or SLO metrics of the Gitaly service drop
- Metric: Gitaly CPU saturation
  - Location: https://thanos.gitlab.net/graph?g0.expr=clamp_max(1%20-%20avg%20by%20(environment%2C%20tier%2C%20type%2C%20stage%2C%20fqdn)%20(%20rate(node_cpu_seconds_total%7Bmode%3D%22idle%22%2C%20env%3D%22gstg%22%2Cenvironment%3D%22gstg%22%2Ctype%3D%22gitaly%22%7D%5B5m%5D))%2C%201)&g0.tab=0&g0.stacked=0&g0.range_input=1h&g0.max_source_resolution=0s&g0.deduplicate=1&g0.partial_response=0&g0.store_matches=%5B%5D
  - What changes to this metric should prompt a rollback: if CPU saturation on the Gitaly servers increases
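The Workhorse client check can also be run from a terminal. Below is a hedged sketch (not part of the official runbook) that sends the PromQL decoded from the Thanos link above to the Thanos query endpoint via the Prometheus Go client; the endpoint address comes from that link, and reachability/authentication requirements are assumptions.

```go
package main

import (
	"context"
	"fmt"
	"time"

	"github.com/prometheus/client_golang/api"
	v1 "github.com/prometheus/client_golang/api/prometheus/v1"
)

func main() {
	// Assumes the Thanos query frontend exposes the standard Prometheus HTTP API
	// and is reachable from where this runs (it may require VPN/authentication).
	client, err := api.NewClient(api.Config{Address: "https://thanos.gitlab.net"})
	if err != nil {
		panic(err)
	}
	promAPI := v1.NewAPI(client)

	// Decoded from the "Workhorse client" dashboard link: per-method, per-code
	// call rate of the Workhorse -> Gitaly PostUploadPack* RPCs on staging.
	query := `sum(rate(grpc_client_handled_total{environment="gstg", job="gitlab-workhorse", grpc_method=~"PostUploadPack.*"}[30m])) by (grpc_method, grpc_code)`

	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	result, warnings, err := promAPI.Query(ctx, query, time.Now())
	if err != nil {
		panic(err)
	}
	if len(warnings) > 0 {
		fmt.Println("warnings:", warnings)
	}
	// After the flag flip, virtually all of the rate should sit under
	// grpc_method="PostUploadPackWithSidechannel" with grpc_code="OK".
	fmt.Println(result)
}
```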
Summary of infrastructure changes
- Does this change introduce new compute instances?
- Does this change re-size any existing compute instances?
- Does this change introduce any additional usage of tooling like Elastic Search, CDNs, Cloudflare, etc?
Summary of the above
Changes checklist
- This issue has a criticality label (e.g. C1, C2, C3, C4) and a change-type label (e.g. change::unscheduled, change::scheduled) based on the Change Management Criticalities.
- This issue has the change technician as the assignee.
- Pre-Change, Change, Post-Change, and Rollback steps have been filled out and reviewed.
- This Change Issue is linked to the appropriate Issue and/or Epic.
- Necessary approvals have been completed based on the Change Management Workflow.
- Change has been tested in staging and results noted in a comment on this issue.
- A dry-run has been conducted and results noted in a comment on this issue.
- SRE on-call has been informed prior to the change being rolled out. (In the #production channel, mention @sre-oncall and this issue and await their acknowledgement.)
- Release managers have been informed (if needed; cases include DB changes) prior to the change being rolled out. (In the #production channel, mention @release-managers and this issue and await their acknowledgment.)
- There are currently no active incidents.