# Enable `workhorse_use_sidechannel` for GitLab's internal repositories
Production Change
## Change Summary
For scalability#1193
In gitlab-org/gitlab!71047 (merged), we introduced a new RPC called `PostUploadPackWithSidechannel` to replace `PostUploadPack`, and added a feature flag named `workhorse_use_sidechannel` to control the rollout. The change in that MR has since been deployed to production smoothly, so it's time to turn on the flag and switch to the new RPC.
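For context, the switch itself is just a flag-gated choice between the two RPCs. The sketch below is illustrative only: `SmartHTTPClient`, `Request`, and the function names are hypothetical stand-ins for the real Workhorse/Gitaly client wiring, and only the RPC names and the flag come from this change.

```go
// Illustrative sketch of a flag-gated RPC switch. How Workhorse actually
// learns the flag value per project is out of scope here.
package main

import (
	"context"
	"fmt"
)

// Request stands in for the real PostUploadPack request type (hypothetical).
type Request struct {
	Repo string
}

// SmartHTTPClient stands in for the generated Gitaly gRPC client (hypothetical).
type SmartHTTPClient interface {
	PostUploadPack(ctx context.Context, req *Request) error
	PostUploadPackWithSidechannel(ctx context.Context, req *Request) error
}

// postUploadPack picks the RPC based on the workhorse_use_sidechannel flag.
func postUploadPack(ctx context.Context, c SmartHTTPClient, req *Request, useSidechannel bool) error {
	if useSidechannel {
		return c.PostUploadPackWithSidechannel(ctx, req)
	}
	return c.PostUploadPack(ctx, req)
}

type fakeClient struct{}

func (fakeClient) PostUploadPack(_ context.Context, req *Request) error {
	fmt.Println("PostUploadPack:", req.Repo)
	return nil
}

func (fakeClient) PostUploadPackWithSidechannel(_ context.Context, req *Request) error {
	fmt.Println("PostUploadPackWithSidechannel:", req.Repo)
	return nil
}

func main() {
	// With the flag on for a project, its traffic shifts to the sidechannel RPC.
	_ = postUploadPack(context.Background(), fakeClient{}, &Request{Repo: "gitlab-org/gitaly"}, true)
}
```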
The flag was turned on in Staging in #5700 (closed). This change issue is to continue rolling out the flag to some of GitLab's major internal repositories:
- gitlab-org/gitaly
- gitlab-org/gitlab
- gitlab-com/www-gitlab-com
## Change Details
- **Services Impacted** - ~"Service::Gitaly" ~"Service::Workhorse"
- **Change Technician** - @qmnguyen0711
- **Change Reviewer** - @jacobvosmaer-gitlab
- **Time tracking** - 1 hour
- **Downtime Component** - None
## Detailed steps for the change
### Pre-Change Steps - steps to be completed before execution of the change
Estimated Time to Complete (mins) - 1 min
- [ ] Set label ~"change::in-progress" on this issue
### Change Steps - steps to take to execute the change
Estimated Time to Complete (mins) - 10 mins
- [ ] `/chatops run feature set workhorse_use_sidechannel true --project=gitlab-org/gitaly`
- [ ] `/chatops run feature set workhorse_use_sidechannel true --project=gitlab-org/gitlab`
- [ ] `/chatops run feature set workhorse_use_sidechannel true --project=gitlab-com/www-gitlab-com`
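To double-check the flag state after running the commands above, something like the following could work. It is a hedged sketch: it uses GitLab's admin Features API (`GET /api/v4/features`), and the `GITLAB_TOKEN` environment variable plus the exact response fields used here are assumptions to verify against the API docs.

```go
// Lists feature flags via the admin Features API and prints the state of
// workhorse_use_sidechannel. Requires an admin-scoped token.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
	"os"
)

type feature struct {
	Name  string `json:"name"`
	State string `json:"state"` // "on", "off", or "conditional"
}

func main() {
	req, err := http.NewRequest("GET", "https://gitlab.com/api/v4/features", nil)
	if err != nil {
		log.Fatal(err)
	}
	req.Header.Set("PRIVATE-TOKEN", os.Getenv("GITLAB_TOKEN"))

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	var features []feature
	if err := json.NewDecoder(resp.Body).Decode(&features); err != nil {
		log.Fatal(err)
	}
	for _, f := range features {
		if f.Name == "workhorse_use_sidechannel" {
			// "conditional" is the expected state here: on for the three
			// projects above, off by default everywhere else.
			fmt.Printf("%s: %s\n", f.Name, f.State)
		}
	}
}
```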
### Post-Change Steps - steps to take to verify the change
Estimated Time to Complete (mins) - 10 mins
- [ ] Clone the aforementioned repositories in a local environment:
  - [ ] `rm -rf /tmp/test.git; git clone --bare --depth=1 https://gitlab.com/gitlab-org/gitaly.git /tmp/test.git`
  - [ ] `rm -rf /tmp/test.git; git clone --bare --depth=1 https://gitlab.com/gitlab-org/gitlab.git /tmp/test.git`
  - [ ] `rm -rf /tmp/test.git; git clone --bare --depth=1 https://gitlab.com/gitlab-com/www-gitlab-com.git /tmp/test.git`
- [ ] Confirm that all the calls use `PostUploadPackWithSidechannel`. Flag propagation may take a while; this can be observed via the gRPC client metrics below, or in the Gitaly server logs.
- [ ] Ensure that the metrics look healthy. A sketch of the metrics check follows.
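As a rough aid for that last item, the sketch below runs the same PromQL as the "gRPC client handled" Thanos link in the Monitoring section (decoded here into plain text) against the Prometheus-compatible query API. Whether thanos.gitlab.net is reachable without extra auth from your environment is an assumption.

```go
// Queries Thanos for the per-method, per-code gRPC client rates seen by
// Workhorse. After the rollout, PostUploadPackWithSidechannel should carry
// essentially all of the rate, with grpc_code="OK".
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
	"net/url"
)

const query = `sum(rate(grpc_client_handled_total{environment="gprd", job="gitlab-workhorse", grpc_method=~"PostUploadPack.*"}[30m])) by (grpc_method, grpc_code)`

type queryResponse struct {
	Data struct {
		Result []struct {
			Metric map[string]string `json:"metric"`
			Value  [2]interface{}    `json:"value"` // [timestamp, value]
		} `json:"result"`
	} `json:"data"`
}

func main() {
	resp, err := http.Get("https://thanos.gitlab.net/api/v1/query?query=" + url.QueryEscape(query))
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	var qr queryResponse
	if err := json.NewDecoder(resp.Body).Decode(&qr); err != nil {
		log.Fatal(err)
	}
	for _, r := range qr.Data.Result {
		fmt.Printf("%s code=%s rate=%v\n",
			r.Metric["grpc_method"], r.Metric["grpc_code"], r.Value[1])
	}
}
```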
## Rollback

### Rollback steps - steps to be taken in the event of a need to rollback this change
Estimated Time to Complete (mins) - 30 mins
- [ ] `/chatops run feature set workhorse_use_sidechannel false`
## Monitoring

### Key metrics to observe
- Metric: gRPC client handled
  - Location: https://thanos.gitlab.net/graph?g0.expr=sum(rate(grpc_client_handled_total%7Benvironment%3D%22gprd%22%2C%20job%3D%22gitlab-workhorse%22%2C%20grpc_method%3D~%22PostUploadPack.*%22%7D%5B30m%5D))%20by%20(grpc_method%2C%20grpc_code)&g0.tab=1&g0.stacked=0&g0.range_input=6h&g0.max_source_resolution=0s&g0.deduplicate=1&g0.partial_response=0&g0.store_matches=%5B%5D&g0.step_input=600
  - What changes to this metric should prompt a rollback: after the change, the gRPC call rate should shift 100% to `PostUploadPackWithSidechannel`, and the status codes should all be "OK".
- Metric: Gitaly Server Logs
  - Location: https://log.gprd.gitlab.net/goto/8fedbc1e0477cb8e6801bacd1966f04f
  - Visualization: https://log.gprd.gitlab.net/goto/6f3c8676f64682090b96bd80d37b153d
  - What changes to this metric should prompt a rollback: a spike in error-level entries, or unexpected errors related to `PostUploadPackWithSidechannel`, after the flag is enabled.
- Metric: Gitaly server
  - Location: https://dashboards.gitlab.net/d/gitaly-main/gitaly-overview?orgId=1&var-PROMETHEUS_DS=Global&var-environment=gprd&var-stage=main
  - What changes to this metric should prompt a rollback: a drop in Gitaly's Apdex or SLO error ratio.
- Metric: Gitaly CPU saturation
  - Location: https://thanos.gitlab.net/graph?g0.expr=clamp_max(1%20-%20avg%20by%20(environment%2C%20tier%2C%20type%2C%20stage%2C%20fqdn)%20(%20rate(node_cpu_seconds_total%7Bmode%3D%22idle%22%2C%20env%3D%22gprd%22%2Cenvironment%3D%22gprd%22%2Ctype%3D%22gitaly%22%7D%5B5m%5D))%2C%201)&g0.tab=0&g0.stacked=0&g0.range_input=1h&g0.max_source_resolution=0s&g0.deduplicate=1&g0.partial_response=0&g0.store_matches=%5B%5D
  - What changes to this metric should prompt a rollback: an increase in CPU saturation on the Gitaly servers.
- Extra metrics: goroutines, memory, and file descriptors for Gitaly, Praefect, and Workhorse; example queries are sketched below.
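If concrete queries help for those extra metrics, these are plausible starting points built on the standard Prometheus process/Go collector metrics (`go_goroutines`, `process_resident_memory_bytes`, `process_open_fds`). The `job` label values below are assumptions; adjust them against what actually exists in Thanos.

```go
// Prints example PromQL for the extra metrics; paste any of them into the
// Thanos graph UI linked above. Job label values are assumed, not verified.
package main

import "fmt"

var extraMetrics = map[string]string{
	"goroutines":      `sum by (job) (go_goroutines{environment="gprd", job=~"gitaly|praefect|gitlab-workhorse"})`,
	"resident memory": `sum by (job) (process_resident_memory_bytes{environment="gprd", job=~"gitaly|praefect|gitlab-workhorse"})`,
	"open fds":        `sum by (job) (process_open_fds{environment="gprd", job=~"gitaly|praefect|gitlab-workhorse"})`,
}

func main() {
	for name, q := range extraMetrics {
		fmt.Printf("%s:\n  %s\n", name, q)
	}
}
```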
## Summary of infrastructure changes
- [ ] Does this change introduce new compute instances?
- [ ] Does this change re-size any existing compute instances?
- [ ] Does this change introduce any additional usage of tooling like Elastic Search, CDNs, Cloudflare, etc?

Summary of the above
## Changes checklist
- [ ] This issue has a criticality label (e.g. ~C1, ~C2, ~C3, ~C4) and a change-type label (e.g. ~"change::unscheduled", ~"change::scheduled") based on the Change Management Criticalities.
- [ ] This issue has the change technician as the assignee.
- [ ] Pre-Change, Change, Post-Change, and Rollback steps have been filled out and reviewed.
- [ ] This Change Issue is linked to the appropriate Issue and/or Epic.
- [ ] Necessary approvals have been completed based on the Change Management Workflow.
- [ ] Change has been tested in staging and results noted in a comment on this issue.
- [ ] A dry-run has been conducted and results noted in a comment on this issue.
- [ ] SRE on-call has been informed prior to the change being rolled out. (In the #production channel, mention `@sre-oncall` and this issue and await their acknowledgement.)
- [ ] Release managers have been informed (if needed! cases include DB changes) prior to the change being rolled out. (In the #production channel, mention `@release-managers` and this issue and await their acknowledgment.)
- [ ] There are currently no active incidents.