multus-cleanup pods scheduled in a loop when the node is under disk pressure

Summary

When a node is under disk pressure, the pods of the multus-cleanup daemonset are scheduled and evicted ad nauseam.

It looks like the pods from the multus-cleanup daemonset are being chain-evicted for the following reasons:

- They have a limit/request for ephemeral storage.
- The DaemonSet controller gives them the node.kubernetes.io/disk-pressure:NoSchedule toleration.

This causes the pods to be evicted because they request ephemeral storage, and then to be immediately re-scheduled because they tolerate the disk-pressure taint, and so on.
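
For illustration, a minimal sketch of the kind of pod spec involved (the image, names, and resource values are assumptions, not copied from the actual daemonset): the ephemeral-storage request makes the pod a candidate for kubelet node-pressure eviction, while the toleration that the DaemonSet controller adds automatically lets the replacement pod schedule straight back onto the node that is still reporting disk pressure.

```yaml
# Hypothetical excerpt of a pod created by the multus-cleanup daemonset;
# field values are illustrative only.
apiVersion: v1
kind: Pod
metadata:
  name: multus-cleanup-xxxxx
spec:
  containers:
    - name: multus-cleanup
      image: example.invalid/multus-cleanup:latest   # placeholder image
      resources:
        requests:
          ephemeral-storage: 100Mi   # makes the pod a target for node-pressure eviction
        limits:
          ephemeral-storage: 100Mi
  tolerations:
    # Added automatically by the DaemonSet controller, so the replacement pod
    # is scheduled back onto the node even while it is tainted with disk pressure.
    - key: node.kubernetes.io/disk-pressure
      operator: Exists
      effect: NoSchedule
```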

Related to #3232 (closed)
