mgmt-compliance-scan job failing in CI on upgrade from previous versions
Job #12362776612 failed for e6995bfc:
if kubectl --kubeconfig management-cluster-kubeconfig get namespace compliance-operator-system &>/dev/null; then # collapsed multi-line command
No report found
!6224 (merged) was an improvement and fixed fresh installs. However, this job still appears to be broken on upgrades from previous versions, because no ClusterScans are found.
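For reference, something along these lines can be run against the management cluster to confirm whether any ClusterScan objects exist after the upgrade (a minimal sketch: the kubeconfig path and namespace are taken from the job output above, and the compliance.cattle.io API group from the operator logs below):

# Discover which kinds the operator registers under its API group (scan/report CRDs)
kubectl --kubeconfig management-cluster-kubeconfig \
  api-resources --api-group=compliance.cattle.io

# List the ClusterScans themselves; -A is simply ignored if the CRD is cluster-scoped
kubectl --kubeconfig management-cluster-kubeconfig \
  get clusterscans.compliance.cattle.io -A -o wide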
The strange thing is that the operator logs show the opposite:
=== Logs from pod/compliance-operator-55bcf7b7bc-wgncn ===
time="2025-12-08T22:42:32Z" level=info msg="Starting compliance-operator"
time="2025-12-08T22:42:33Z" level=info msg="ClusterProvider detected rke2"
time="2025-12-08T22:42:33Z" level=info msg="KubernetesVersion detected v1.33.5+rke2r1"
time="2025-12-08T22:42:33Z" level=info msg="Starting compliance.cattle.io/v1, Kind=ClusterScan controller"
time="2025-12-08T22:42:33Z" level=info msg="Starting /v1, Kind=ConfigMap controller"
time="2025-12-08T22:42:33Z" level=info msg="Starting /v1, Kind=Service controller"
time="2025-12-08T22:42:33Z" level=info msg="Starting /v1, Kind=Pod controller"
time="2025-12-08T22:42:33Z" level=info msg="Starting batch/v1, Kind=Job controller"
time="2025-12-08T22:42:46Z" level=info msg="Launching a new on demand Job for scan rke2-compliance to run compliance using profile rke2-cis-1.11-profile"
time="2025-12-08T22:48:13Z" level=info msg="Marking ClusterScanConditionRunCompleted for scan: rke2-compliance"
time="2025-12-08T22:48:13Z" level=info msg="Marking ClusterScanConditionComplete for scan: rke2-compliance"
time="2025-12-08T22:48:13Z" level=info msg="Marking ClusterScanConditionAlerted for scan: rke2-compliance"
So the scan completes, but it is deleted afterwards. We need to understand why this happens.
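One way to narrow it down (a sketch of possible next steps, not a confirmed procedure) is to watch the ClusterScan across the upgrade and look for whatever deletes it; the Helm release name "compliance-operator" below is an assumption:

# Watch ClusterScans during the upgrade and record ADDED/MODIFIED/DELETED watch events,
# to pin down exactly when the rke2-compliance scan disappears
kubectl --kubeconfig management-cluster-kubeconfig \
  get clusterscans.compliance.cattle.io -w --output-watch-events

# Recent events in the operator namespace around the time of the deletion
kubectl --kubeconfig management-cluster-kubeconfig \
  get events -n compliance-operator-system --sort-by=.lastTimestamp

# If the chart is upgraded via Helm, hooks could remove the scan on upgrade;
# the release name here is an assumption
helm --kubeconfig management-cluster-kubeconfig \
  get hooks compliance-operator -n compliance-operator-system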