canonical-k8s: management cluster upgrade fails

Summary

The nightly pipeline for Canonical Kubernetes fails at the update management cluster step.

More investigation is required, but the apparent cause is a recent change in sylva-core / sylva-capi-cluster that introduced a regression for Canonical Kubernetes in the move of BareMetalHost resources from the bootstrap cluster to the management cluster.

In the libvirt-metal case, Ironic on the management cluster still uses the bootstrap IP ranges, which are no longer available at this stage.

This is the error message I could see locally during the provisioning stage of the replaced machines:

2025-09-12 13:39:29 ::  [  162.018428] ironic-python-agent[779]: 2025-09-12 13:39:27.766 779 WARNING ironic_python_agent.ironic_api_client [-] Error detected while attempting to perform lookup with https://192.168.100.2:6385, retrying. Error: HTTPSConnectionPool(host='192.168.100.2', port=6385): Max retries exceeded with url: /v1/lookup?addresses=52%3A54%3A00%3A44%3A44%3A00%2C52%3A54%3A00%3A55%3A55%3A00 (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f1d8e77a250>: Failed to establish a new connection: [Errno 113] EHOSTUNREACH')): requests.exceptions.ConnectionError: HTTPSConnectionPool(host='192.168.100.2', port=6385): Max retries exceeded with url: /v1/lookup?addresses=52%3A54%3A00%3A44%3A44%3A00%2C52%3A54%3A00%3A55%3A55%3A00 (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f1d8e77a250>: Failed to establish a new connection: [Errno 113] EHOSTUNREACH'))
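For reference, the `addresses` query parameter in the failed `/v1/lookup` call is a URL-encoded, comma-separated list of the node's MAC addresses. A quick decode (sketch below, using only the standard library) confirms which interfaces IPA is trying to register; the `52:54:00` prefix is the OUI used by QEMU/KVM, consistent with libvirt-metal VMs:

```python
from urllib.parse import unquote

# The raw "addresses" value taken from the error log above.
encoded = "52%3A54%3A00%3A44%3A44%3A00%2C52%3A54%3A00%3A55%3A55%3A00"

# %3A decodes to ':' and %2C to ',', yielding two MAC addresses.
macs = unquote(encoded).split(",")
print(macs)  # ['52:54:00:44:44:00', '52:54:00:55:55:00']
```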

Note that the IPA running on the libvirt-metal VM attached to the bootstrap cluster cannot reach 192.168.100.2, which exists only on the management cluster. I think this worked correctly at some point, so this is a regression.

Also note that this error happens right after apply.sh is executed to trigger the rolling upgrade. The cluster initially has 4 machines: 3 control-plane machines and 1 from a MachineDeployment. When the rolling upgrade is triggered, 2 machines are removed from the cluster (1 control plane and 1 MachineDeployment machine), and the error then appears at the provisioning step of the 2 replacement machines.

See: https://gitlab.com/sylva-projects/sylva-core/-/pipelines/2039059950

Follow-up of: #2695 (closed)