Issue on trident-csi units: TridentBackendConfig CRD not found
Example from https://gitlab.com/sylva-projects/sylva-core/-/jobs/13054748953:
2026/02/10 13:38:56.659605 Unit timeout exceeded: unit Kustomization/trident-csi-storage-class-ontap-san did not became ready after 5m0s
Details on Kustomization/trident-csi-storage-class-ontap-san and related resources:
IDENTIFIER STATUS REASON MESSAGE
Kustomization/sylva-system/trident-csi-storage-class-ontap-san InProgress Kustomization generation is 1, but latest observed generation is -1
╰┄╴┬┄┄[Conditions]
├┄╴Reconciling True ProgressingWithRetry Detecting drift for revision 0.0.0-git-9852a464@sha256:3e917aed43c2320c9938cef29fb42bdf0dcafa9788df3d98963c6b745452d5fa with a timeout of 30s
╰┄╴Ready False ReconciliationFailed TridentBackendConfig/trident/backend-tbc-ontap-san dry-run failed: no matches for kind "TridentBackendConfig" in version "trident.netapp.io/v1"
This happens despite the fact that the trident unit, which is responsible for installing the Trident operator, reconciled successfully long before the timeout.
The CRD is indeed absent.
Looking more closely, the TridentBackendConfig CRD is not installed directly by the trident-operator Helm chart; it is installed by the operator itself, which in this case is failing.
It is failing because it cannot create its transient-trident-version-pod helper pod: the image pull for that pod is backing off:
2026-02-10T13:33:49Z 2026-02-10T13:35:56Z kubelet-mgmt-2316602081-rke2-capm3-virt-management-md-0 Pod transient-trident-version-pod 9 BackOff "Back-off pulling image ""docker.io/netapp/trident:25.10.0"""
The Helm release does not wait for this auxiliary pod, so nothing catches the failure.
To better catch this kind of problem, we should add a trident-ready unit, as a dummy Kustomization unit, with a healthCheck that waits for the TridentBackendConfig CRD to be present.
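A minimal sketch of what such a trident-ready unit could look like, as a Flux Kustomization. All names, paths, and the sourceRef below are illustrative assumptions, not taken from the actual sylva-core unit definitions; the relevant part is the healthChecks entry, which makes Flux wait until the TridentBackendConfig CRD exists and reports Established:

```yaml
# Hypothetical trident-ready unit (illustrative names and paths).
# The Kustomization applies a dummy/empty manifest; its real purpose
# is the healthChecks entry, which blocks readiness until the
# TridentBackendConfig CRD has been created by the Trident operator.
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: trident-ready
  namespace: sylva-system
spec:
  interval: 5m
  timeout: 5m
  dependsOn:
    - name: trident            # the unit installing the Trident operator
  sourceRef:
    kind: GitRepository
    name: sylva-core           # assumed source name
  path: ./kustomize-units/trident-ready   # assumed path to a dummy manifest
  prune: true
  wait: false                  # use explicit healthChecks, not blanket waiting
  healthChecks:
    - apiVersion: apiextensions.k8s.io/v1
      kind: CustomResourceDefinition
      name: tridentbackendconfigs.trident.netapp.io
```

Units that create TridentBackendConfig resources (such as trident-csi-storage-class-ontap-san) could then declare a dependsOn on trident-ready, so their dry-run is only attempted once the CRD is actually available.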