Enable Gitaly cgroups in production
Production Change
Change Summary
This change enables the feature flag gitaly_run_cmds_in_cgroup. When enabled, Gitaly will start using cgroups to provide per-repo limits on CPU and memory usage.
Background
Prior to executing this change, Gitaly will already have been configured to set up cgroups during startup. These cgroups constrain the CPU and memory resources collectively used by the processes assigned to them.
When the feature flag gitaly_run_cmds_in_cgroup is enabled, Gitaly will start assigning each git command that it spawns to one of these cgroups.
For each git repository, all of the git commands associated with that repo will run inside the same cgroup. Each repo will tend to use a different cgroup for its commands, providing approximate per-repo resource limits.
This isolation model aims to prevent any one project from starving all other colocated projects of CPU or memory. This provides better availability and responsiveness for our users, protecting projects from most of the effects of a noisy neighbor.
More details about the design of cgroups can be found here.
We've enabled this in staging in #7680 (closed)
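For orientation, the sketch below shows one way to inspect the per-repo cgroups on a Gitaly node. The mountpoint and the gitaly hierarchy-root directory name are assumptions about this deployment's cgroup v1 layout rather than details confirmed in this issue; adjust the paths to match the node.

```shell
# Minimal inspection sketch; assumes a cgroup v1 mount at /sys/fs/cgroup and a
# Gitaly hierarchy root named "gitaly"; adjust both to the node's actual layout.

# List the cgroup directories Gitaly created under the memory controller.
find /sys/fs/cgroup/memory/gitaly -type d | head

# Show the distinct memory limits applied to those cgroups.
find /sys/fs/cgroup/memory/gitaly -name memory.limit_in_bytes -exec cat {} + | sort -u

# Show the distinct CPU shares applied under the cpu controller.
find /sys/fs/cgroup/cpu/gitaly -name cpu.shares -exec cat {} + | sort -u
```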
Change Details
- Services Impacted - Service::Gitaly
- Change Technician - @msmiley
- Change Reviewer - @steveazz
- Time tracking - 96 hours (mostly pause time between steps for observation)
- Downtime Component - None
Detailed steps for the change
Change Steps - steps to take to execute the change
- Check that https://gitlab.com/gitlab-com/gl-infra/chef-repo/-/merge_requests/2320 is merged
- Set label change::in-progress: /label ~change::in-progress
2022-09-19 11:00: Percentage enabled: 1%
Estimated Time to Complete (mins) - 30 minutes
Change:
- /chatops run feature set gitaly_run_cmds_in_cgroup --random 1
Validation:
- Validate that we see pids inside of the new CPU/Memory cgroup (a commented breakdown of this pipeline follows this step): ps -o pid= --ppid $( pidof gitaly ) | xargs -i cat /proc/{}/cgroup 2> /dev/null | awk -F: '$2 ~ /cpu,cpuacct|memory/ { print $2, $3 }' | sort -V | uniq -c
- Check that no OOM kills are happening: thanos
- Look at cgroup CPU usage: thanos
- Look at cgroup memory usage: thanos
- Wait long enough for the CPU and memory usage trends to clearly settle before increasing the percentage.
- Set label change::scheduled: /label ~"change::scheduled"
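For readability, here is the same validation pipeline broken out stage by stage (the only change is quoting the pidof substitution so that multiple gitaly PIDs are handled):

```shell
# Same validation pipeline as in the step above, with per-stage comments.
ps -o pid= --ppid "$(pidof gitaly)" |                      # PIDs of gitaly's direct children (the spawned git commands)
  xargs -i cat /proc/{}/cgroup 2> /dev/null |              # cgroup membership of each child process
  awk -F: '$2 ~ /cpu,cpuacct|memory/ { print $2, $3 }' |   # keep only the cpu and memory controller entries
  sort -V | uniq -c                                        # count how many processes share each cgroup path
# With the flag enabled, cgroup paths under the Gitaly-managed /gitaly hierarchy
# should start appearing here alongside (and at 100%, instead of) the original
# systemd-managed cgroup.
```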
2022-09-19 12:00: Percentage enabled: 10%
Estimated Time to Complete (mins) - 30 minutes
Change:
- Set label change::in-progress: /label ~change::in-progress
- /chatops run feature set gitaly_run_cmds_in_cgroup --random 10
Validation:
- Validate that we see pids inside of the new CPU/Memory cgroup: ps -o pid= --ppid $( pidof gitaly ) | xargs -i cat /proc/{}/cgroup 2> /dev/null | awk -F: '$2 ~ /cpu,cpuacct|memory/ { print $2, $3 }' | sort -V | uniq -c
- Check that no OOM kills are happening: thanos
- Look at cgroup CPU usage: thanos
- Look at cgroup memory usage: thanos
- Wait long enough for the CPU and memory usage trends to clearly settle before increasing the percentage.
- Set label change::scheduled: /label ~"change::scheduled"
2022-09-20 07:00: Percentage enabled: 50%
Estimated Time to Complete (mins) - 60 minutes
Change:
- Set label change::in-progress: /label ~change::in-progress
- /chatops run feature set gitaly_run_cmds_in_cgroup --random 50
Validation:
- Validate that we see pids inside of the new CPU/Memory cgroup: ps -o pid= --ppid $( pidof gitaly ) | xargs -i cat /proc/{}/cgroup 2> /dev/null | awk -F: '$2 ~ /cpu,cpuacct|memory/ { print $2, $3 }' | sort -V | uniq -c
- Check that no OOM kills are happening: thanos
- Look at cgroup CPU usage: thanos
- Look at cgroup memory usage: thanos
- Wait long enough for the CPU and memory usage trends to clearly settle before increasing the percentage.
- Set label change::scheduled: /label ~"change::scheduled"
2022-09-22 07:00: Percentage enabled: 100%
Estimated Time to Complete (mins) - 60 minutes
Change:
- Set label change::in-progress: /label ~change::in-progress
- /chatops run feature set gitaly_run_cmds_in_cgroup true
Validation:
- Validate that we see pids inside of the new CPU/Memory cgroup: ps -o pid= --ppid $( pidof gitaly ) | xargs -i cat /proc/{}/cgroup 2> /dev/null | awk -F: '$2 ~ /cpu,cpuacct|memory/ { print $2, $3 }' | sort -V | uniq -c
- Check that no OOM kills are happening: thanos
- Look at cgroup CPU usage: thanos
- Look at cgroup memory usage: thanos
- Wait long enough for the CPU and memory usage trends to clearly settle before marking the change as complete.
- Set label change::scheduled: /label ~"change::scheduled"
Completion:
- Set label change::complete: /label ~change::complete
We will review the metrics in the days following this change.
Rollback
Note: It is safe to disable this feature flag at any time. The flag gets checked each time Gitaly spawns a new git process, so within seconds of changing the flag, all new processes will revert to running in the original systemd-managed cgroup instead of in a Gitaly-managed per-repo cgroup. We are not shrinking the original cgroup, to ensure that rollback is quick and simple.
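If a quick confirmation of the flag's current state is wanted before or after rolling back, chatops can report it. This assumes the feature get subcommand is available in this instance's chatops tooling; it is not part of the rollback steps below.
/chatops run feature get gitaly_run_cmds_in_cgroup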
Rollback steps - steps to be taken in the event of a need to rollback this change
Estimated Time to Complete (mins) - 1 minute
- /chatops run feature set gitaly_run_cmds_in_cgroup false
- Validate that new git processes go back to running in the old systemd-managed cgroup rather than under the /gitaly cgroup hierarchy: ps -o pid= --ppid $( pidof gitaly ) | xargs -i cat /proc/{}/cgroup 2> /dev/null | awk -F: '$2 ~ /cpu,cpuacct|memory/ { print $2, $3 }' | sort -V | uniq -c
- Set label change::aborted: /label ~change::aborted
Monitoring
Key metrics to observe
- Metric: gprd node apdex
- Location:
- canary stage: https://dashboards.gitlab.net/d/gitaly-main/gitaly-overview?orgId=1&var-PROMETHEUS_DS=Global&var-environment=gprd&var-stage=cny&viewPanel=57
- main stage: https://dashboards.gitlab.net/d/gitaly-main/gitaly-overview?orgId=1&var-PROMETHEUS_DS=Global&var-environment=gprd&var-stage=main&viewPanel=57
- What changes to this metric should prompt a rollback: Any drop in apdex
- Metric: OOM kills
- Location: https://thanos.gitlab.net/graph?g0.expr=increase(node_vmstat_oom_kill%7Benv%3D%22gprd%22%2C%20type%3D%22gitaly%22%7D%5B5m%5D)&g0.tab=0&g0.stacked=0&g0.range_input=1h&g0.max_source_resolution=0s&g0.deduplicate=1&g0.partial_response=0&g0.store_matches=%5B%5D
- What changes to this metric should prompt a rollback: Increase in OOM kills (the query behind this Thanos link is spelled out after this metrics list)
- Metric: Cgroups rows on the Gitaly Host Details dashboard
- Metric: p95 request latency for gRPCs affecting apdex
- Location: thanos query
- What changes to this metric should prompt a rollback: Increase in latency
- Metric: p99 request latency for gRPCs affecting apdex
- Location: thanos query
- What changes to this metric should prompt a rollback: Increase in latency
- Metric: Host stats for Prometheus Servers
- Location: prometheus-01, prometheus-02
- What changes to this should prompt a rollback: A large increase in CPU or anonymous memory usage on the Prometheus servers may be due to the high cardinality of the cgroup metrics now being scraped. If this were going to be a problem, it would likely have presented symptoms when the cgroups started being created (prior to this change issue, where we start to use them), but double-check the Prometheus servers for resource saturation just in case.
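For reference, the OOM-kill query behind the Thanos link above, URL-decoded for readability:

```
increase(node_vmstat_oom_kill{env="gprd", type="gitaly"}[5m])
```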
Change Reviewer checklist
- Check if the following applies:
  - The scheduled day and time of execution of the change is appropriate.
  - The change plan is technically accurate.
  - The change plan includes estimated timing values based on previous testing.
  - The change plan includes a viable rollback plan.
  - The specified metrics/monitoring dashboards provide sufficient visibility for the change.
- Check if the following applies:
  - The complexity of the plan is appropriate for the corresponding risk of the change (i.e. the plan contains clear details).
  - The change plan includes success measures for all steps/milestones during the execution.
  - The change adequately minimizes risk within the environment/service.
  - The performance implications of executing the change are well-understood and documented.
  - The specified metrics/monitoring dashboards provide sufficient visibility for the change.
    - If not, is it possible (or necessary) to make changes to observability platforms for added visibility?
  - The change has a primary and secondary SRE with knowledge of the details available during the change window.
  - The labels ~"blocks deployments" and/or ~"blocks feature-flags" are applied as necessary.
Change Technician checklist
- Check if all items below are complete:
  - The change plan is technically accurate.
  - This Change Issue is linked to the appropriate Issue and/or Epic.
  - Change has been tested in staging and results noted in a comment on this issue. 👉 #7680 (closed)
  - A dry-run has been conducted and results noted in a comment on this issue.
  - The change execution window respects the Production Change Lock periods.
  - For C1 and C2 change issues, the change event is added to the GitLab Production calendar.
  - For C1 and C2 change issues, the SRE on-call has been informed prior to the change being rolled out. (In the #production channel, mention @sre-oncall and this issue and await their acknowledgement.)
  - For C1 and C2 change issues, the SRE on-call provided approval with the eoc_approved label on the issue.
  - For C1 and C2 change issues, the Infrastructure Manager provided approval with the manager_approved label on the issue.
  - Release managers have been informed (if needed; cases include DB changes) prior to the change being rolled out. (In the #production channel, mention @release-managers and this issue and await their acknowledgment.)
  - There are currently no active incidents that are severity::1 or severity::2.
  - If the change involves doing maintenance on a database host, an appropriate silence targeting the host(s) should be added for the duration of the change.