capm3 1.8 / delete-workload-cluster failing because Metal3DataTemplate resources remain
(I've seen this issue occur multiple times.)
The delete-workload-cluster job failed in https://gitlab.com/sylva-projects/sylva-core/-/jobs/7810457500 because Metal3DataTemplate resources remain in the workload cluster namespace:
```
$ kubectl --kubeconfig management-cluster-kubeconfig delete ns ${ENV_NAME} --timeout 2m
namespace "rke2-capm3-virt" deleted
error: timed out waiting for the condition on namespaces/rke2-capm3-virt
```
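For anyone hitting this, a quick way to confirm which resources are blocking the namespace and which finalizers they carry (a sketch using the namespace name from this run; adjust to your environment):

```
# List the Metal3DataTemplate resources still present in the stuck namespace
kubectl --kubeconfig management-cluster-kubeconfig \
  get metal3datatemplates -n rke2-capm3-virt

# Show each resource's finalizers, which are what keep the namespace
# from finishing termination
kubectl --kubeconfig management-cluster-kubeconfig \
  get metal3datatemplates -n rke2-capm3-virt \
  -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.metadata.finalizers}{"\n"}{end}'
```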
The status of the rke2-capm3-virt Namespace resource reports the following conditions:
```yaml
- lastTransitionTime: "2024-09-12T16:31:51Z"
  message: 'Some resources are remaining: metal3datatemplates.infrastructure.cluster.x-k8s.io has 2 resource instances'
  reason: SomeResourcesRemain
  status: "True"
  type: NamespaceContentRemaining
- lastTransitionTime: "2024-09-12T16:31:51Z"
  message: 'Some content in the namespace has finalizers remaining: metal3datatemplate.infrastructure.cluster.x-k8s.io in 2 resource instances'
  reason: SomeFinalizersRemain
  status: "True"
  type: NamespaceFinalizersRemaining
```
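As a manual unblock (not a fix for the root cause), clearing the finalizers on the leftover Metal3DataTemplates lets the namespace deletion complete. A sketch, assuming the owning Cluster is already gone and the capm3 controller will not reconcile these objects again; the namespace name is the one from this run:

```
# WORKAROUND ONLY: remove the finalizers from the stuck Metal3DataTemplates
# so the terminating namespace can be garbage-collected. Only safe if
# nothing will ever need to act on these resources again.
for m3dt in $(kubectl --kubeconfig management-cluster-kubeconfig \
    get metal3datatemplates -n rke2-capm3-virt -o name); do
  kubectl --kubeconfig management-cluster-kubeconfig \
    patch "$m3dt" -n rke2-capm3-virt --type merge \
    -p '{"metadata":{"finalizers":null}}'
done
```

This unblocks CI cleanup, but the real question remains why the capm3 controller leaves its finalizers on these Metal3DataTemplates during workload cluster deletion.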