The new anti-affinity for MinIO doesn't work on environments with fewer than 6 nodes
Summary
After the merge of !2723 (merged), 2 of the 4 monitoring-pool-monitoring-X pods can no longer be scheduled because there are not enough nodes. This happens because the monitoring-pool-0-X and monitoring-pool-monitoring-X pods carry the same pod anti-affinity rule (matching every MinIO pod on the kubernetes.io/hostname topology), so all 6 pods would need to land on distinct nodes.
I can see this situation on an environment with 3 cp nodes and 1 md node.
# kubectl get pods -n minio-monitoring-tenant -o wide
NAME                           READY   STATUS    RESTARTS   AGE   IP               NODE                                     NOMINATED NODE   READINESS GATES
monitoring-pool-0-0            2/2     Running   0          18h   100.72.112.31    management-cluster-cp-313965b3b7-k9gdt   <none>           <none>
monitoring-pool-0-1            2/2     Running   0          18h   100.72.116.29    management-cluster-cp-313965b3b7-bm8zx   <none>           <none>
monitoring-pool-monitoring-0   2/2     Running   0          18h   100.72.122.247   management-cluster-md0-5m7qr-hcs5p       <none>           <none>
monitoring-pool-monitoring-1   0/2     Pending   0          18h   <none>           <none>                                   <none>           <none>
monitoring-pool-monitoring-2   2/2     Running   0          18h   100.72.169.85    management-cluster-cp-313965b3b7-sdjv8   <none>           <none>
monitoring-pool-monitoring-3   0/2     Pending   0          18h   <none>           <none>                                   <none>           <none>
# kubectl get pods monitoring-pool-monitoring-1 -n minio-monitoring-tenant -o yaml
...
spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - minio
        topologyKey: kubernetes.io/hostname
...
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2024-09-03T11:51:41Z"
    message: '0/4 nodes are available: 4 node(s) didn''t match pod anti-affinity rules.
      preemption: 0/4 nodes are available: 4 No preemption victims found for incoming
      pod..'
    reason: Unschedulable
    status: "False"
    type: PodScheduled
  phase: Pending
  qosClass: BestEffort
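The selector above matches every pod labelled app=minio, so the two monitoring-pool-0-X pods and the four monitoring-pool-monitoring-X pods all compete for distinct hostnames, hence the 6-node requirement. For illustration only, a per-pool anti-affinity would only need as many nodes as the largest pool (4 here). This is a sketch, not a validated fix: the v1.min.io/pool label key is an assumption about the labels the MinIO operator sets on tenant pods, and <pool-name> is a placeholder to be replaced with the actual pool name.

# Sketch: restrict the anti-affinity to pods of the same pool, so pods from
# different pools of the tenant may share a node.
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchExpressions:
        - key: app
          operator: In
          values:
          - minio
        - key: v1.min.io/pool        # assumed pool label, to be verified on the tenant pods
          operator: In
          values:
          - <pool-name>              # placeholder: pool backing monitoring-pool-monitoring-X
      topologyKey: kubernetes.io/hostname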