Commit d78aa8e8 authored by Evan Read

Merge branch 'jc-remove-cgroups-from-docs' into 'master'

gitaly docs: Remove cgroups

See merge request !87910
parents 3dc26090 6d739e98
@@ -823,70 +823,6 @@ repository. In the example above:
You can observe the behavior of this queue using the Gitaly logs and Prometheus. For more
information, see the [relevant documentation](monitoring.md#monitor-gitaly-concurrency-limiting).
## Control groups
> - Introduced in GitLab 13.10.
Gitaly shells out to Git for many of its operations. Git can consume a lot of resources for certain operations,
especially for large repositories.
Control groups (cgroups) in Linux allow limits to be imposed on how much memory and CPU can be consumed.
See the [`cgroups` Linux man page](https://man7.org/linux/man-pages/man7/cgroups.7.html) for more information.
cgroups can help protect the system against resource exhaustion caused by overconsumption of memory and CPU.
Gitaly has built-in cgroups control. When configured, Gitaly assigns Git
processes to a cgroup based on the repository the Git command is operating in.
Each cgroup has a memory and CPU limit. When a cgroup reaches its:
- Memory limit, the kernel looks through the processes for a candidate to kill.
- CPU limit, processes are not killed, but the processes are prevented from consuming more CPU than allowed.
The main reason to configure cgroups for your GitLab installation is that it
protects against system resource starvation due to a few large repositories or
bad actors.
Some Git operations are expensive by nature. `git clone`, for instance,
spawns a `git-upload-pack` process on the server that can consume a lot of memory
for large repositories. For example, a client that keeps cloning a
large repository over and over again could use up all of the
memory on a server, causing other operations to fail for other users.
There are many ways someone can create a repository that consumes large amounts of memory when cloned or downloaded.
Using cgroups allows the kernel to kill these operations before they exhaust all system resources.
### Configure cgroups in Gitaly
To configure cgroups in Gitaly, add the `gitaly['cgroups_*']` settings to `/etc/gitlab/gitlab.rb`. For
example:
```ruby
# in /etc/gitlab/gitlab.rb
gitaly['cgroups_count'] = 1000
gitaly['cgroups_mountpoint'] = "/sys/fs/cgroup"
gitaly['cgroups_hierarchy_root'] = "gitaly"
gitaly['cgroups_memory_limit'] = 32212254720
gitaly['cgroups_memory_enabled'] = true
gitaly['cgroups_cpu_shares'] = 1024
gitaly['cgroups_cpu_enabled'] = true
```
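Each setting is explained in the list below. For orientation, the memory limit in the example is expressed in bytes and applies to each cgroup individually; the following conversion is illustrative arithmetic only, not a configuration step:

```ruby
# Illustrative only: convert the example per-cgroup memory limit from bytes to GiB.
memory_limit_bytes = 32_212_254_720
puts memory_limit_bytes / (1024.0**3) # => 30.0, so each cgroup caps its processes at 30 GiB
```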
- `cgroups_count` is the number of cgroups created. Each time a new
command is spawned, Gitaly assigns it to one of these cgroups based
on the command line arguments of the command. A circular hashing algorithm assigns
commands to these cgroups (see the illustrative sketch after this list).
- `cgroups_mountpoint` is where the parent cgroup directory is mounted. Defaults to `/sys/fs/cgroup`.
- `cgroups_hierarchy_root` is the parent cgroup under which Gitaly creates groups, and
is expected to be owned by the user and group Gitaly runs as. Omnibus GitLab
creates the set of directories `mountpoint/<cpu|memory>/hierarchy_root`
when Gitaly starts.
- `cgroups_memory_enabled` enables or disables the memory limit on cgroups.
- `cgroups_memory_limit` is the total memory limit each cgroup imposes on the processes added to it.
- `cgroups_cpu_enabled` enables or disables the CPU limit on cgroups.
- `cgroups_cpu_shares` is the CPU limit each cgroup imposes on the processes added to it. The maximum is 1024 shares,
which represents 100% of CPU.
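The circular hashing assignment mentioned above can be pictured with a small sketch. This is not Gitaly's actual implementation; the `cgroup_for` helper and the hash-modulo scheme are assumptions used only to illustrate how a command's arguments could map deterministically onto a fixed number of cgroups:

```ruby
require 'digest'

# Illustrative sketch only: map a spawned Git command onto one of
# `cgroups_count` cgroups by hashing its command-line arguments.
# Gitaly's real assignment logic may differ.
CGROUPS_COUNT = 1000

def cgroup_for(command_args)
  digest = Digest::SHA256.hexdigest(command_args.join("\0"))
  digest.to_i(16) % CGROUPS_COUNT
end

# The same command always lands in the same cgroup, so repeated expensive
# operations against one repository stay confined to that cgroup's limits.
puts cgroup_for(%w[git upload-pack /path/to/repo.git])
```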
## Background repository optimization
Empty directories and unneeded configuration settings may accumulate in a repository and
......
@@ -44,17 +44,6 @@ the Gitaly logs and Prometheus:
- `gitaly_concurrency_limiting_acquiring_seconds` indicates how long a request has to
wait due to concurrency limits before being processed.
## Monitor Gitaly cgroups
You can observe the status of [control groups (cgroups)](configure_gitaly.md#control-groups) using Prometheus:
- `gitaly_cgroups_memory_failed_total`, a gauge for the total number of times
the memory limit has been hit. This number resets each time a server is
restarted.
- `gitaly_cgroups_cpu_usage`, a gauge that measures CPU usage per cgroup.
- `gitaly_cgroup_procs_total`, a gauge that measures the total number of
processes Gitaly has spawned under the control of cgroups.
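As a quick way to inspect these metrics outside of dashboards, you can query them through the standard Prometheus HTTP query API. The sketch below is illustrative only: the Prometheus address is an assumption, so adjust it to match your monitoring setup.

```ruby
require 'net/http'
require 'json'
require 'uri'

# Illustrative sketch only: fetch the current value of one cgroup metric
# from a Prometheus server. The URL below is an assumption.
PROMETHEUS_QUERY_URL = 'http://localhost:9090/api/v1/query'

uri = URI(PROMETHEUS_QUERY_URL)
uri.query = URI.encode_www_form(query: 'gitaly_cgroups_memory_failed_total')

response = JSON.parse(Net::HTTP.get(uri))
response.dig('data', 'result').each do |series|
  # Each result carries its label set and the latest sampled value.
  puts "#{series['metric']} => #{series.dig('value', 1)}"
end
```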
## `pack-objects` cache
The following [`pack-objects` cache](configure_gitaly.md#pack-objects-cache) metrics are available:
......