two-replicas-storageclass should be disabled on single-node clusters

Summary

The two-replicas-storageclass unit introduced in !2928 (merged) is enabled whenever Longhorn is enabled, but it does not check whether the cluster has enough nodes to satisfy two replicas. On single-node baremetal deployments this leads to failures, because the minio and monitoring units use this storageClass as soon as the unit is enabled, and Longhorn can never place the second replica. We can solve this by enabling the unit only when there are at least 2 Longhorn nodes.
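
As a minimal sketch of the proposed check (assuming Longhorn is installed in the longhorn-system namespace and exposes its Node custom resources there), the condition could be evaluated against the Longhorn Node objects rather than the Kubernetes nodes:

# Hypothetical pre-check: count Longhorn Node CRs and keep the unit disabled below 2
LONGHORN_NODES=$(kubectl --kubeconfig management-cluster-kubeconfig \
  -n longhorn-system get nodes.longhorn.io --no-headers | wc -l)
if [ "$LONGHORN_NODES" -lt 2 ]; then
  echo "only $LONGHORN_NODES Longhorn node(s); two-replicas-storageclass should stay disabled"
fi

The actual implementation would live in the unit's enablement logic in sylva-core; the snippet above only illustrates the node-count criterion.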

Steps to reproduce

Deploy a single-node management cluster with Longhorn configuration and monitoring enabled.

What is the current bug behavior?

[git:main]root@vbmh:sylva-core# kubectl --kubeconfig management-cluster-kubeconfig get pvc -A
NAMESPACE                  NAME                                                                                             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS                  VOLUMEATTRIBUTESCLASS   AGE
...
cattle-monitoring-system   alertmanager-rancher-monitoring-alertmanager-db-alertmanager-rancher-monitoring-alertmanager-0   Bound    pvc-d9c130db-0eca-494a-9cda-9b835831045f   2Gi        RWO            two-replicas-storageclass     <unset>                 27m
cattle-monitoring-system   prometheus-rancher-monitoring-prometheus-db-prometheus-rancher-monitoring-prometheus-0           Bound    pvc-2820080e-1429-441b-8e4b-a4bc5f87f136   50Gi       RWO            two-replicas-storageclass     <unset>                 27m
...
minio-monitoring           data0-monitoring-pool-0-0                                                                        Bound    pvc-b59ca0c9-2b33-478c-8d12-5af4eacb7909   10Gi       RWO            two-replicas-storageclass     <unset>                 43m
minio-monitoring           data1-monitoring-pool-0-0                                                                        Bound    pvc-97e49a38-2c56-4b63-8f27-8da03f5a37a7   10Gi       RWO            two-replicas-storageclass     <unset>                 43m
minio-monitoring           data2-monitoring-pool-0-0                                                                        Bound    pvc-71108830-13b9-428b-b347-a900094f44eb   10Gi       RWO            two-replicas-storageclass     <unset>                 43m
minio-monitoring           data3-monitoring-pool-0-0                                                                        Bound    pvc-8eb5b5d4-788e-4b09-90f2-e990447a8170   10Gi       RWO            two-replicas-storageclass     <unset>                 43m
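
The StorageClass itself is what forces the second replica. Inspecting it (assuming it is defined with the usual Longhorn provisioner parameters, as its name suggests) should show the replica count that a single node cannot satisfy:

# Check the replica requirement carried by the StorageClass
kubectl --kubeconfig management-cluster-kubeconfig get storageclass two-replicas-storageclass -o yaml
# expected to show provisioner: driver.longhorn.io and parameters.numberOfReplicas: "2"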

[git:main]root@vbmh:sylva-core# kubectl --kubeconfig management-cluster-kubeconfig -n cattle-monitoring-system get events
...
17m         Warning   FailedScheduling        pod/prometheus-rancher-monitoring-prometheus-0                                                                         0/1 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
17m         Normal    Scheduled               pod/prometheus-rancher-monitoring-prometheus-0                                                                         Successfully assigned cattle-monitoring-system/prometheus-rancher-monitoring-prometheus-0 to management-cluster-server1
45s         Warning   FailedAttachVolume      pod/prometheus-rancher-monitoring-prometheus-0                                                                         AttachVolume.Attach failed for volume "pvc-2820080e-1429-441b-8e4b-a4bc5f87f136" : rpc error: code = Aborted desc = volume pvc-2820080e-1429-441b-8e4b-a4bc5f87f136 is not ready for workloads
17m         Normal    ExternalProvisioning    persistentvolumeclaim/prometheus-rancher-monitoring-prometheus-db-prometheus-rancher-monitoring-prometheus-0           Waiting for a volume to be created either by the external provisioner 'driver.longhorn.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered.
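
To confirm that the problem is the Longhorn volumes themselves rather than the CSI attach path, the Longhorn Volume custom resources can be listed (assuming the default longhorn-system namespace):

kubectl --kubeconfig management-cluster-kubeconfig -n longhorn-system get volumes.longhorn.io
# on a single node the volumes backing these PVCs are expected to stay degraded / not ready,
# since Longhorn cannot place the second replica on another node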