Draft: Add CI jobs for testing rke2 node annotations
## What does this MR do and why?
- Introduces CI jobs, intended to run only for RKE2 CI pipelines, that test that all CP and MD nodes of the workload cluster carry an expected test annotation. These jobs were added initially because `RKE2ConfigTemplate.spec.template.spec.agentConfig.nodeAnnotations` does not natively result in any annotations, and this is worked around within sylva-capi-cluster via `RKE2ConfigTemplate.spec.template.spec.postRKE2Commands`; that workaround should change at some point, and these jobs would allow testing it. Please let me know if you don't think we want them, and I can drop the idea. A minimal sketch of the kind of check such a job could run is shown after this list.
  - Note 1: These node annotation jobs are possible thanks to the sylva-capi-cluster upstream work in sylva-projects/sylva-elements/helm-charts/sylva-capi-cluster!208 (merged).
  - Note 2: A general rework of labels and annotations (today there are differences in what CABPR does between CP and MD nodes, while CABPK supports declarative configuration of neither node labels nor annotations) is to be done under sylva-projects/sylva-elements/helm-charts/sylva-capi-cluster#103 (closed), after which we could extend these jobs to Kubeadm CI pipelines as well.
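  For illustration only, here is a minimal shell sketch of such a check (not the actual job definition): the annotation key/value, the kubeconfig path and the use of `yq` are placeholders/assumptions, not necessarily what the jobs in this MR do.

  ```shell
  # Hedged sketch: annotation key/value and kubeconfig path are placeholders.
  ANNOTATION_KEY="sylva-ci/test-annotation"   # assumed key
  ANNOTATION_VALUE="present"                  # assumed value

  # List every node of the workload cluster whose annotation value differs from
  # the expected one (mikefarah/yq v4 syntax), then fail the job if any is found.
  missing_nodes=$(kubectl --kubeconfig workload-cluster.kubeconfig get nodes -o yaml \
    | yq '.items[] | select(.metadata.annotations["sylva-ci/test-annotation"] != "present") | .metadata.name')

  if [ -n "$missing_nodes" ]; then
    echo "Nodes missing ${ANNOTATION_KEY}=${ANNOTATION_VALUE}: ${missing_nodes}"
    exit 1
  fi
  echo "All CP and MD nodes carry ${ANNOTATION_KEY}=${ANNOTATION_VALUE}"
  ```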
- Changes the API commands used to fetch the Rancher kubeconfig in `test_no_sso` (in 4a017a1e) by moving from `jq` to `yq`. Even though the default `CI_IMAGE: registry.gitlab.com/sylva-projects/sylva-elements/container-images/ci-image:v1.0.15` now has the `jq` binary, in capm3-rke2-virt CIs inside Equinix the jobs are based on a shell executor whose runner environment inside the Equinix server is defined in https://gitlab.com/sylva-projects/sylva-elements/ci-tooling/runner-aas/-/blob/main/user-data.sh.tpl?ref_type=heads; that environment already provides `mikefarah/yq`, so using `yq` avoids introducing `jq` there.
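  As a hedged before/after sketch of that extraction step (the exact commands in `test_no_sso` may differ): this assumes the usual Rancher `?action=generateKubeconfig` action, which returns the kubeconfig under the `.config` field of a JSON response, and the URL, cluster ID, token and output path below are placeholders.

  ```shell
  # Placeholders, not real values.
  RANCHER_URL="https://rancher.example.org"
  CLUSTER_ID="c-m-xxxxxxxx"
  RANCHER_TOKEN="token-xxxxx:secret"

  # Before (jq), which is absent from the Equinix shell-executor environment:
  #   curl -ks -u "${RANCHER_TOKEN}" -X POST \
  #     "${RANCHER_URL}/v3/clusters/${CLUSTER_ID}?action=generateKubeconfig" \
  #     | jq -r '.config' > workload-cluster.kubeconfig

  # After (mikefarah/yq, already provisioned by runner-aas): yq also parses the
  # JSON response and prints the scalar .config value raw by default.
  curl -ks -u "${RANCHER_TOKEN}" -X POST \
    "${RANCHER_URL}/v3/clusters/${CLUSTER_ID}?action=generateKubeconfig" \
    | yq '.config' > workload-cluster.kubeconfig
  ```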
Works on top of the change proposed in !1288 (merged).
## Related reference(s)
This will probably need an evolution for the issue that !1488 (closed) tries to solve when connecting to the workload cluster inside libvirt-metal environments, unless an insecure flag for kubectl can be used.
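If the insecure route turns out to be acceptable, this would presumably boil down to something like the sketch below; note that kubectl rejects the flag if the kubeconfig also pins a CA certificate.

```shell
# Sketch only: skips TLS certificate verification, so at most suitable for
# throwaway CI environments such as libvirt-metal.
kubectl --insecure-skip-tls-verify get nodes
```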