The multus-cleanup pods are in CrashLoopBackOff due to OOM

Error seen on a real bare-metal cluster:

kube-system   multus-cleanup-7bwjx   1/1   Running            26 (5m13s ago)   110m   100.72.57.116   mgmt-44723244-rke2-capm3-xxx-xxx-38   <none>   <none>
kube-system   multus-cleanup-h6sjh   0/1   CrashLoopBackOff   25 (4m28s ago)   110m   100.72.29.26    mgmt-44723244-rke2-capm3-xxx-xxx-33   <none>   <none>
kube-system   multus-cleanup-w5q58   0/1   CrashLoopBackOff   15 (107s ago)    110m   100.72.75.244   mgmt-44723244-rke2-capm3-xxx-xxx-34   <none>   <none>
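
The listing above is trimmed to the multus-cleanup pods; the column layout (NAMESPACE, IP, NODE, NOMINATED NODE, READINESS GATES) matches a wide, all-namespaces query, so it can be reproduced with something like (the grep filter is just illustrative):

```
kubectl get pods -A -o wide | grep multus-cleanup
```

The lastState of a crashing container confirms the OOM kill: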
terminated:
  containerID: containerd://44ee80467075a51b09921eb44acea56fdfae6c4def6be35da4324b1fcfd076f9
  exitCode: 137
  finishedAt: "2025-08-10T22:46:10Z"
  reason: OOMKilled
  startedAt: "2025-08-10T22:46:03Z"
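
That state can be pulled straight from the pod status; for example (pod name taken from the listing above):

```
kubectl -n kube-system get pod multus-cleanup-h6sjh \
  -o jsonpath='{.status.containerStatuses[0].lastState.terminated}'
```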

If the error persists after increasing the memory limit, further investigation is warranted, since these pods don't perform enough work to plausibly explain the limit being exceeded.
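
A minimal sketch of raising the limit, assuming the pods are owned by a DaemonSet named multus-cleanup whose container is also named multus-cleanup (both names, and the values, are assumptions):

```
# Strategic merge patch: containers are merged by name, so only the
# named container's resources block is touched.
kubectl -n kube-system patch daemonset multus-cleanup -p '
spec:
  template:
    spec:
      containers:
      - name: multus-cleanup      # assumed container name
        resources:
          requests:
            memory: 64Mi          # illustrative values; tune per cluster
          limits:
            memory: 256Mi
'
```

If the pods are still OOM-killed with a generous limit, comparing actual usage against the limit over time (e.g. via kubectl top pod, which requires metrics-server) would be a reasonable next step.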
