kyverno policies: split the 'avoid-delete-mgmt-resources' policy into its own unit
We frequently see the following error in the logs:

```
kyverno-policies 60m False ClusterPolicy/avoid-delete-mgmt-resources dry-run failed:
admission webhook "validate-policy.kyverno.svc" denied the request:
path: spec.rules[1].match.any[0].kinds:
the kind defined in the all match resource is invalid:
unable to convert GVK to GVR for kinds cluster.x-k8s.io/*/Cluster, err:
failed to find resource (cluster.x-k8s.io/*/Cluster/)...
```
This is transient: the kyverno-policies unit indeed cannot become ready until CAPI is installed, because until then the Cluster CRD does not exist.
Until now I assumed this was benign... but from a timing standpoint it may not be.
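For context, the rule triggering this error presumably matches CAPI Cluster resources along the following lines. This is a minimal sketch, not the actual policy: the rule name, validation action, and deny condition are illustrative assumptions; only the `cluster.x-k8s.io/*/Cluster` kind and the `spec.rules[1].match.any[0]` path come from the error above.

```yaml
# Hypothetical sketch of the relevant rule in avoid-delete-mgmt-resources.
# The wildcard-versioned kind ("cluster.x-k8s.io/*/Cluster") is what Kyverno
# tries to resolve from GVK to GVR at admission time; that lookup fails as
# long as the Cluster CRD shipped by CAPI is not yet installed.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: avoid-delete-mgmt-resources
spec:
  validationFailureAction: Enforce
  rules:
    - name: block-cluster-deletion   # assumed rule name
      match:
        any:
          - resources:
              kinds:
                - cluster.x-k8s.io/*/Cluster
      validate:
        message: "Deleting management cluster resources is not allowed."
        deny:
          conditions:
            any:
              - key: "{{ request.operation }}"
                operator: Equals
                value: DELETE
```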
Consider the following:
- now that we have more than one CAPI controller, we need two nodes before the capi unit can become ready
- this delays how quickly the kyverno-policies unit becomes ready
- ... and many units wait on kyverno-policies
This MR splits this Kyverno policy out of the kyverno-policies unit, into its own unit, to avoid the "dependency bottleneck" described above.
The kyverno policies are kept under a common sub-directory of kustomize-units:
```
kustomize-units/kyverno-policies
├── generic <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<< contains most of what we had under
│ │                                                  kustomize-units/kyverno-policies before this MR
│ ├── components
│ │ └── management-cluster-only
│ │     ├── default-namespace.yaml
│ │     └── kustomization.yaml
│ ├── disable-automount-sa.yaml
│ ├── ensure-force-cluster-policy.yaml
│ ├── kustomization.yaml
│ ├── patch-bond-policy.yaml
│ └── tag.yaml
└── protect-mgmt-cluster <<<<<<<<<<<<<<<<<<<<<<<<< the policy separated from the others by this MR
    ├── kustomization.yaml
    └── prevent-deletion.yaml
```
Edited by Thomas Morin