Extend available ephemeral storage per node

Currently, in sylva-growparts, the ephemeral storage is set to lv_kubelet = 20%VG (with VG = 90% of the node disk minus 5 GB):

lvextend /dev/vg/lv_var -l 5%VG
lvextend /dev/vg/lv_home -l 5%VG
lvextend /dev/vg/lv_tmp -l 5%VG
lvextend /dev/vg/lv_vartmp -l 5%VG
lvextend /dev/vg/lv_varlog -l 5%VG
lvextend /dev/vg/lv_varlogaudit -l 5%VG
lvextend /dev/vg/lv_etcd -l 10%VG
lvextend /dev/vg/lv_containerd -l 30%VG
lvextend /dev/vg/lv_kubelet -l 20%VG
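
For reference, the resulting split can be checked on a node with the LVM reporting tools (read-only; this assumes the volume group is simply named vg, as in the mounts shown further down):

lvs --units g -o lv_name,lv_size vg
vgs --units g -o vg_name,vg_size,vg_free vg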

This setting is applied to every node, including the workload-cluster machine-deployments, meaning that on an md VM with a 60 GiB root disk you get less than 10 GiB of available ephemeral storage.
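
Illustrative arithmetic for that 60 GiB case, assuming the VG = 90% of disk - 5 GB rule above:

awk 'BEGIN { vg = 60*0.90 - 5; printf "VG = %.1f GiB, lv_kubelet (20%%VG) = %.1f GiB\n", vg, vg*0.20 }'
# -> VG = 49.0 GiB, lv_kubelet (20%VG) = 9.8 GiB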

Looking at the effective consumption on some nodes, it appears that we could reduce /tmp, /var/tmp, /var/log/audit and /home to let /var/lib/kubelet (and thus the ephemeral storage) grow to 30% of the VG. With the following settings, the ephemeral storage capacity increases by 50%:

lvextend /dev/vg/lv_var -l 5%VG
lvextend /dev/vg/lv_home -l 4%VG
lvextend /dev/vg/lv_tmp -l 2%VG
lvextend /dev/vg/lv_vartmp -l 2%VG
lvextend /dev/vg/lv_varlog -l 5%VG
lvextend /dev/vg/lv_varlogaudit -l 2%VG
lvextend /dev/vg/lv_etcd -l 10%VG
lvextend /dev/vg/lv_containerd -l 30%VG
lvextend /dev/vg/lv_kubelet -l 30%VG
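
Both layouts allocate the same total share of the VG (90%); only the split changes. On the same illustrative 60 GiB disk, lv_kubelet would grow from roughly 9.8 GiB to roughly 14.7 GiB:

awk 'BEGIN { vg = 60*0.90 - 5; printf "before = %.1f GiB, after = %.1f GiB (+%.0f%%)\n", vg*0.20, vg*0.30, (0.30/0.20 - 1)*100 }'
# -> before = 9.8 GiB, after = 14.7 GiB (+50%)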

Another approach (probably much more complex) would be to use a different setting depending on the node role, so that capacity is not wasted on etcd for machine-deployment nodes.
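
A rough sketch of what that could look like, expressed in the same form as the layouts above (the NODE_ROLE variable and the exact percentages are hypothetical, not an existing sylva-growparts setting):

case "${NODE_ROLE:?cp or md}" in
  cp)
    # control-plane nodes keep a real etcd volume
    lvextend /dev/vg/lv_etcd -l 10%VG
    lvextend /dev/vg/lv_kubelet -l 30%VG
    ;;
  md)
    # machine-deployment nodes never run etcd, so give that share to kubelet
    # (1%VG only expresses the target share; lvextend cannot shrink an LV,
    # so this split would have to be applied at provisioning time)
    lvextend /dev/vg/lv_etcd -l 1%VG
    lvextend /dev/vg/lv_kubelet -l 39%VG
    ;;
esac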

Current effective storage consumption examples:

md node example on a workload cluster:

Filesystem                     Size  Used Avail Use% Mounted on
/dev/mapper/vg-lv_root         4.6G  2.0G  2.3G  47% /
devtmpfs                       4.0M  8.0K  4.0M   1% /dev
tmpfs                          3.9G     0  3.9G   0% /dev/shm
tmpfs                          1.6G   22M  1.6G   2% /run
/dev/mapper/vg-lv_tmp          3.0G   40K  2.8G   1% /tmp
/dev/mapper/vg-lv_home         3.0G   80K  2.8G   1% /home
/dev/mapper/vg-lv_opt          1.8G  1.1G  629M  64% /opt
/dev/mapper/vg-lv_var          4.6G  1.1G  3.2G  26% /var
/dev/mapper/vg-lv_kubelet      9.6G   61M  9.1G   1% /var/lib/kubelet
/dev/mapper/vg-lv_containerd    18G  9.8G  7.2G  58% /var/lib/rancher/rke2/agent/containerd
/dev/mapper/vg-lv_etcd         5.9G   24K  5.7G   1% /var/lib/rancher/rke2/server/db
/dev/mapper/vg-lv_varlog       3.0G   35M  2.8G   2% /var/log
/dev/mapper/vg-lv_vartmp       3.0G   40K  2.8G   1% /var/tmp
/dev/mapper/vg-lv_varlogaudit  3.0G  664K  2.8G   1% /var/log/audit
/dev/vda2                      250M  4.0M  246M   2% /boot/efi

cp node example on a management cluster:

Filesystem                     Size  Used Avail Use% Mounted on
/dev/mapper/vg-lv_root         4.6G  2.0G  2.3G  47% /
devtmpfs                       4.0M     0  4.0M   0% /dev
tmpfs                           16G     0   16G   0% /dev/shm
tmpfs                          6.3G   12M  6.3G   1% /run
/dev/mapper/vg-lv_home         4.9G   72K  4.7G   1% /home
/dev/mapper/vg-lv_opt          1.8G  967M  760M  56% /opt
/dev/mapper/vg-lv_tmp          4.9G   40K  4.7G   1% /tmp
/dev/mapper/vg-lv_var          4.9G  1.1G  3.5G  24% /var
/dev/mapper/vg-lv_kubelet       20G   14G  5.4G  72% /var/lib/kubelet
/dev/mapper/vg-lv_containerd    30G   23G  5.8G  80% /var/lib/rancher/rke2/agent/containerd
/dev/mapper/vg-lv_etcd         9.9G  2.4G  7.1G  25% /var/lib/rancher/rke2/server/db
/dev/mapper/vg-lv_varlog       4.9G  636M  4.1G  14% /var/log
/dev/mapper/vg-lv_vartmp       4.9G   40K  4.7G   1% /var/tmp
/dev/mapper/vg-lv_varlogaudit  4.9G  2.7M  4.7G   1% /var/log/audit
/dev/vda2                      250M  4.0M  246M   2% /boot/efi