Proxy vars are causing a rolling update on CI runs

While working on !4512 (merged), I noticed that the update-management-cluster scenario now triggers a rolling update of the cluster nodes. The same behavior was also observed on the latest nightly CI runs.

This happens because, when first instantiating the management cluster, we include the kind pod and service addresses in no_proxy, but we stop doing so in update-management-cluster.

This can also be seen in the RKE2ControlPlane resource:

From deploy-management-cluster:

    - content: "HTTP_PROXY=http://172.20.136.219:3128\nHTTPS_PROXY=http://172.20.136.219:3128\nNO_PROXY=100.72.0.0/16,100.73.0.0/16,repos.tech.orange,docker,.cluster.local,.cluster.local.,.svc,.sylva,10.0.0.0/8,100.100.0.0/16,100.96.0.0/16,127.0.0.1,172.16.0.0/12,192.168.0.0/16,localhost

From update-management-cluster:

    - content: "HTTP_PROXY=http://172.20.136.219:3128\nHTTPS_PROXY=http://172.20.136.219:3128\nNO_PROXY=100.72.0.0/16,100.73.0.0/16,repos.tech.orange,docker,.cluster.local,.cluster.local.,.svc,.sylva,10.0.0.0/8,127.0.0.1,172.16.0.0/12,192.168.0.0/16,localhost
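
Diffing the two NO_PROXY values makes the delta explicit: the update job drops exactly the two kind cluster CIDRs, which is enough to change the rendered file content and trigger the rolling update.

    # NO_PROXY, deploy-management-cluster vs update-management-cluster
    -NO_PROXY=...,10.0.0.0/8,100.100.0.0/16,100.96.0.0/16,127.0.0.1,...
    +NO_PROXY=...,10.0.0.0/8,127.0.0.1,...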

To prevent this, we should also include the kind pod and service addresses (the 100.100.0.0/16 and 100.96.0.0/16 ranges seen above) in no_proxy for the update-management-cluster job. A sketch of what that could look like follows.
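
As a rough sketch of the fix (the `proxies.no_proxy` key name is an assumption here and may differ from the actual values schema), the update job would carry the same kind CIDRs as the deploy job:

    # Hypothetical proxy values for the update-management-cluster job.
    # Key names are assumed; adjust to the actual values schema.
    proxies:
      http_proxy: http://172.20.136.219:3128
      https_proxy: http://172.20.136.219:3128
      # Keep the kind pod and service CIDRs that deploy-management-cluster
      # already includes, so the rendered NO_PROXY stays identical.
      no_proxy: 100.100.0.0/16,100.96.0.0/16,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16,127.0.0.1,localhost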
