☁ capo 🚀 kubeadm 🐧 suse 🛠 ha 🎬 nightly fails on tigera-operator
Our :cloud:capo :rocket:kubeadm :penguin:suse :tools:ha :clapper:nightly has been broken for a few days: the following occurrences all stopped at the same stage, during the deployment of the management cluster.
Note that the capo kubeadm ubuntu ha nightly is not affected by this issue.
Occurrences:
- https://gitlab.com/sylva-projects/sylva-core/-/jobs/8135310971
- https://gitlab.com/sylva-projects/sylva-core/-/jobs/8131711111
- https://gitlab.com/sylva-projects/sylva-core/-/jobs/8128221256
- https://gitlab.com/sylva-projects/sylva-core/-/jobs/8117693512
```
2024/10/21 11:02:47.023538 Kustomization/calico state changed: Progressing - Reconciliation in progress
2024/10/21 11:02:47.183202 Kustomization/calico state changed: HealthCheckFailed - health check failed after 21.798458ms: failed early due to stalled resources: [HelmRelease/sylva-system/calico status: 'Failed']
2024/10/21 11:03:00.564191 Command timeout exceeded
Timed-out waiting for the following resources to be ready:
IDENTIFIER STATUS REASON MESSAGE
Kustomization/sylva-system/calico InProgress Kustomization generation is 1, but latest observed generation is -1
╰┄╴HelmRelease/sylva-system/calico Failed Failed to install after 1 attempt(s)
╰┄╴Deployment/tigera-operator/tigera-operator Failed Progress deadline exceeded
╰┄╴ReplicaSet/tigera-operator/tigera-operator-86d587c767 InProgress Available: 0/1
╰┄╴Pod/tigera-operator/tigera-operator-86d587c767-49tzz InProgress Pod is in the Pending phase
├┄╴┬┄┄[Conditions]
┆ ├┄╴PodReadyToStartContainers False
┆ ├┄╴Initialized True
┆ ├┄╴Ready False ContainersNotReady containers with unready status: [tigera-operator]
┆ ├┄╴ContainersReady False ContainersNotReady containers with unready status: [tigera-operator]
┆ ╰┄╴PodScheduled True
╰┄╴┬┄┄[Events]
├┄╴2024-10-21 10:24:39 Normal Scheduled Successfully assigned tigera-operator/tigera-operator-86d587c767-49tzz to mgmt-1505091883-kubeadm-capo-md0-rldkc-7jwrf
├┄╴2024-10-21 10:55:12 (x24 over 27m13s) Warning FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = DeadlineExceeded desc = failed to get sandbox image "registry.k8s.io/pause:3.8": failed to pull image "registry.k8s.io/pause:3.8": failed to pull and unpack image "registry.k8s.io/pause:3.8": failed to resolve reference "registry.k8s.io/pause:3.8": failed to do request: Head "https://registry.k8s.io/v2/pause/manifests/3.8": dial tcp 34.96.108.209:443: i/o timeout
╰┄╴2024-10-21 11:00:11 (x20 over 35m2s) Warning FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to get sandbox image "registry.k8s.io/pause:3.8": failed to pull image "registry.k8s.io/pause:3.8": failed to pull and unpack image "registry.k8s.io/pause:3.8": failed to resolve reference "registry.k8s.io/pause:3.8": failed to do request: Head "https://registry.k8s.io/v2/pause/manifests/3.8": dial tcp 34.96.108.209:443: i/o timeout
Kustomization/sylva-system/cluster-ready InProgress Kustomization generation is 1, but latest observed generation is -1
╰┄╴┬┄┄[Conditions]
├┄╴Reconciling True Progressing Running health checks for revision sha1:3dc65bed1b2b2b69fe14a30f0d353ea0245b4f62 with a timeout of 30s
├┄╴Ready Unknown Progressing Reconciliation in progress
╰┄╴Healthy Unknown Progressing Running health checks for revision sha1:3dc65bed1b2b2b69fe14a30f0d353ea0245b4f62 with a timeout of 30s
```
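The events above show the node failing to pull the sandbox image `registry.k8s.io/pause:3.8` (i/o timeout reaching `registry.k8s.io`). As a quick way to narrow this down, the same pull can be attempted by hand from the affected node; a minimal sketch, assuming SSH access to the md0 node and that `crictl` is available there (as is usual on kubeadm nodes):

```shell
# From the affected suse md0 node: check raw connectivity to registry.k8s.io,
# then retry the pull that containerd performs when creating the pod sandbox.
curl -v --max-time 10 https://registry.k8s.io/v2/pause/manifests/3.8
sudo crictl pull registry.k8s.io/pause:3.8

# Check which sandbox image containerd is configured to use on this node;
# if an internal mirror is expected, it should appear here instead of registry.k8s.io.
grep sandbox_image /etc/containerd/config.toml
```

Since the ubuntu variant of the same nightly passes, comparing the containerd registry/mirror configuration between the suse and ubuntu images may help isolate whether this is an image-configuration difference or a network-path issue.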