Upgrade to 1.29.9 broke CAPD

We have been seeing the following on CAPD jobs since the upgrade to 1.29.9:

Kustomization/sylva-system/cluster                                                         InProgress                         Kustomization generation is 1, but latest observed generation is -1
╰┄╴HelmRelease/sylva-system/cluster                                                        Ready                              Resource is Ready
   ├┄╴Cluster/sylva-system/mgmt-1495182728-kubeadm-capd                                    InProgress                         Scaling up control plane to 1 replicas (actual 0)
   ┆  ╰┄╴KubeadmControlPlane/sylva-system/mgmt-1495182728-kubeadm-capd-control-plane       InProgress                         Scaling up control plane to 1 replicas (actual 0)
   ┆     ╰┄╴Machine/sylva-system/mgmt-1495182728-kubeadm-capd-control-plane-8svqq          InProgress                         1 of 2 completed
   ┆        ╰┄╴DockerMachine/sylva-system/mgmt-1495182728-kubeadm-capd-control-plane-8svqq InProgress                         0 of 2 completed
   ┆           ╰┄╴┬┄┄[Conditions]
   ┆              ├┄╴Ready                                                                 False      WaitingForBootstrapData 0 of 2 completed
   ┆              ╰┄╴ContainerProvisioned                                                  False      WaitingForBootstrapData
   ╰┄╴KubeadmControlPlane/sylva-system/mgmt-1495182728-kubeadm-capd-control-plane          InProgress                         Scaling up control plane to 1 replicas (actual 0)
      ╰┄╴Machine/sylva-system/mgmt-1495182728-kubeadm-capd-control-plane-8svqq             InProgress                         1 of 2 completed
         ╰┄╴DockerMachine/sylva-system/mgmt-1495182728-kubeadm-capd-control-plane-8svqq    InProgress                         0 of 2 completed
            ╰┄╴┬┄┄[Conditions]
               ├┄╴Ready                                                                    False      WaitingForBootstrapData 0 of 2 completed
               ╰┄╴ContainerProvisioned                                                     False      WaitingForBootstrapData

For kubeadm, the CAPD controller reports:

E1014 14:29:14.850309       1 controller.go:329] "Reconciler error" err="failed to create worker DockerMachine: 
error pulling container image kindest/node:v1.29.9: 
failure pulling container image: 
Error response from daemon:
 manifest for kindest/node:v1.29.9 not found: 
manifest unknown: manifest unknown" 
controller="dockermachine" controllerGroup="infrastructure.cluster.x-k8s.io" controllerKind="DockerMachine" DockerMachine="sylva-system/mgmt-1495182621-kubeadm-capd-control-plane-q2cdb" namespace="sylva-system" name="mgmt-1495182621-kubeadm-capd-control-plane-q2cdb" reconcileID="92d4f114-b030-48d1-98d6-223ca65cc2ac"

The same failure occurs for rke2:

E1014 14:30:35.917441       1 controller.go:329] "Reconciler error" err="failed to create worker DockerMachine: 
error pulling container image registry.gitlab.com/sylva-projects/sylva-elements/container-images/rke2-in-docker:v1-29-9-rke2r1: 
failure pulling container image: 
Error response from daemon: 
manifest for registry.gitlab.com/sylva-projects/sylva-elements/container-images/rke2-in-docker:v1-29-9-rke2r1 not found: manifest unknown: 
manifest unknown" 
controller="dockermachine" controllerGroup="infrastructure.cluster.x-k8s.io" controllerKind="DockerMachine" DockerMachine="sylva-system/mgmt-1495182655-rke2-capd-cp-6e4d3e02ff-2v6hh" namespace="sylva-system" name="mgmt-1495182655-rke2-capd-cp-6e4d3e02ff-2v6hh" reconcileID="3ddc8c81-6aab-442b-962f-c3b20f130ca8"