Increase limit for pods per node (kubelet maxPods)

The question has been raised of whether we could adopt a value higher than the default of 110 for the kubelet `maxPods` setting.
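For reference, this setting lives in the kubelet configuration file. A minimal sketch of raising it (220 here is just an example value, not a recommendation):

```yaml
# KubeletConfiguration fragment -- maxPods defaults to 110
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
maxPods: 220
```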

This seems legitimate for baremetal clusters in particular:

  • in terms of processing power, a typical baremetal server is less limited than a typical VM, and we frequently build baremetal clusters with fewer nodes

  • during Cluster API node rolling updates, we delete a node before recreating it, so all the workloads must fit on the remaining nodes; on a 3-CP-node cluster we hence have to fit everything on 2 nodes; even on a 3-CP+1-MD baremetal cluster we may need to fit everything on 2 nodes (because one CP node can be rebuilt at the same time as the MD node):

    • our typical deployments sometimes have around 200 pods
    • this last case, 3-CP+1-MD, is one that we have, and we hit scheduling failures due to the pod limit:
```
FailedScheduling 0/2 nodes are available: 1 Too many pods, 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }. preemption: 0/2 nodes are available: 1 No preemption victims found for incoming pod, 1 Preemption is not helpful for scheduling..
```
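For a Cluster API managed cluster, one way to raise the limit is through `kubeletExtraArgs` in the bootstrap configuration. A sketch against the `v1beta1` API (the resource name is hypothetical, 220 is an example value, and note the `--max-pods` kubelet flag is deprecated in favor of the config file but still honored):

```yaml
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
kind: KubeadmControlPlane
metadata:
  name: my-cluster-control-plane   # hypothetical name
spec:
  kubeadmConfigSpec:
    initConfiguration:
      nodeRegistration:
        kubeletExtraArgs:
          max-pods: "220"
    joinConfiguration:
      nodeRegistration:
        kubeletExtraArgs:
          max-pods: "220"
```

Whatever value we pick, the node pod CIDR allocation would also need to be large enough to hold that many pod IPs per node.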

/cc @rletrocquer

Edited by Thomas Morin