Leave some room for other processes in unicorn calculation
Currently our calculation for determining the number of unicorn workers looks like this:
```ruby
default['gitlab']['unicorn']['worker_processes'] = [
  2, # Two is the minimum or HTTP(S) Git pushes will no longer work.
  [
    # Cores + 1 gives good CPU utilization.
    node['cpu']['total'].to_i + 1,
    # See how many 300MB worker processes fit in (total RAM - 1GB). We add
    # 128000 KB in the numerator to get rounding instead of integer truncation.
    (node['memory']['total'].to_i - 1048576 + 128000) / 358400
  ].min # min because we want to exceed neither CPU nor RAM
].max # max because we need at least 2 workers
```
On a 2-core system with 2 GB of RAM, this yields 3 workers, so 4 unicorn processes in total. In that configuration there is generally no headroom left for other processes to run (e.g. gitlab-shell, du, etc.). As a result I see lots of out-of-memory errors in Sidekiq.
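To make the arithmetic concrete, here is the current formula evaluated with hand-filled attribute values for that 2-core, 2 GB machine (the variables stand in for the `node` attributes, which are not available outside Chef):

```ruby
# Hypothetical attribute values for a 2-core, 2 GB box.
cpu_total       = 2          # node['cpu']['total']
memory_total_kb = 2_097_152  # node['memory']['total'] in KB (2 GB)

worker_processes = [
  2, # minimum for HTTP(S) Git pushes
  [
    cpu_total + 1,                                       # => 3
    (memory_total_kb - 1_048_576 + 128_000) / 358_400    # => 3 (1_176_576 / 358_400)
  ].min
].max

puts worker_processes # => 3
```

Both terms come out to 3, so the machine gets 3 workers plus the unicorn master: 4 processes consuming roughly all of the RAM above the reserved 1 GB.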
I have a few suggestions:
- Get rid of the cores + 1 calculation; it should be driven by memory alone
- Leave 200-300 MB of headroom for some other processes to run
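A sketch of what the suggested calculation could look like, written as a plain helper so it runs outside Chef. The helper name and the 256 MB headroom value are illustrative assumptions within the suggested 200-300 MB range, not part of the cookbook:

```ruby
# Hypothetical memory-only formula: drop the cores + 1 term and reserve
# extra headroom (default 262_144 KB, i.e. 256 MB) for other processes.
def suggested_worker_processes(memory_total_kb, headroom_kb: 262_144)
  [
    2, # still need at least two workers for HTTP(S) Git pushes
    (memory_total_kb - 1_048_576 - headroom_kb + 128_000) / 358_400
  ].max
end

puts suggested_worker_processes(2_097_152) # 2 GB box => 2
```

On the same 2 GB machine this yields 2 workers instead of 3, freeing roughly one worker's worth of memory for gitlab-shell, du, and friends.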
Thoughts?
/cc: @dblessing, @jacobvosmaer-gitlab, @marin