Firewall rules for OpenStack clusters using an external network

What does this MR do and why?

When a cluster is deployed on OpenStack behind a virtual router attached to an exposed network (external_network), packets sent from the cluster IP use the floating IP associated with the cluster IP port. Similarly, packets sent from a pod or node to an external target use the router's IP on the external network when SNAT is enabled (the typical case).

This MR takes the router configuration into account, for clusters deployed with an external_network, in the definition of globalnetworksets and globalnetworkpolicies.

It adds a new value, external_network_router_snat_ip, which provides the router's IP address on the external_network when the router is configured with snat: true.
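For illustration, the new value might be set as follows in the cluster values. The field name comes from this MR, but the surrounding structure and the address are assumptions, not the exact layout used by the chart:

```yaml
# Sketch of cluster values (surrounding structure is an assumption;
# only external_network_router_snat_ip is introduced by this MR)
cluster:
  capo:
    # IP address of the virtual router on the external network; this is
    # the source address seen by peers when the router performs SNAT
    external_network_router_snat_ip: 203.0.113.10
```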

external-network-router-snat-ip-to-vip.yaml creates a policy accepting packets from the router's external address to the cluster's IP. These packets are received when a component of the local cluster tries to reach a local ingress: the ingress name resolves to the cluster's external floating IP, so the traffic goes through the router, which performs SNAT.
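A minimal sketch of what such a policy could look like. The resource name matches the file, but the selector, addresses, and spec details are illustrative assumptions, not the exact template output:

```yaml
# Illustrative Calico GlobalNetworkPolicy: accept traffic whose source is
# the router's SNAT address and whose destination is the cluster's IP.
apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: external-network-router-snat-ip-to-vip
spec:
  selector: all()            # assumption: actual selector may be narrower
  applyOnForward: true
  preDNAT: true
  ingress:
    - action: Allow
      source:
        nets:
          - 203.0.113.10/32  # router's external (SNAT) IP, example value
      destination:
        nets:
          - 198.51.100.20/32 # cluster's external floating IP, example value
```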

configmap-for-capo-management-cluster-with-fip.yaml stores in the management-cluster-addresses configmap the management cluster's floating IP plus the router's external IP, since workload clusters receive packets from these addresses instead of from the nodes' private IP addresses. This template is only rendered when the management cluster has a FIP.
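For illustration, the resulting configmap could look like the sketch below. The configmap name is from this MR; the data key and addresses are assumptions:

```yaml
# Illustrative management-cluster-addresses ConfigMap when the
# management cluster has a floating IP (key name and IPs are examples).
apiVersion: v1
kind: ConfigMap
metadata:
  name: management-cluster-addresses
data:
  # Source addresses from which workload clusters will see management
  # cluster traffic: the cluster FIP plus the router's external IP.
  addresses: |
    198.51.100.20
    203.0.113.10
```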

configmap-for-capo-management-cluster-without-fip.yaml is a renamed version of the former template, which stores the nodes' IP addresses in the configmap; it is only rendered when no cluster FIP is present (meaning the nodes are directly reachable from external networks and no SNAT is applied).

Similarly, globalnetworkset-for-capo-workload-clusters-with-fip.yaml creates, for each new workload cluster that has a FIP, a globalnetworkset containing the cluster's floating IP plus its router's external IP.
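A sketch of such a per-workload-cluster globalnetworkset, assuming illustrative names, labels, and addresses (none of these are the exact template output):

```yaml
# Illustrative GlobalNetworkSet created for a workload cluster with a FIP.
apiVersion: projectcalico.org/v3
kind: GlobalNetworkSet
metadata:
  name: workload-cluster-example     # assumed naming scheme
  labels:
    cluster: example                 # assumed label for policy selectors
spec:
  nets:
    - 198.51.100.30/32  # the workload cluster's floating IP
    - 203.0.113.10/32   # its router's external IP
```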

globalnetworkset-for-capo-workload-clusters-without-fip.yaml is a renamed version of the previous template and is only rendered if the workload cluster has no FIP.

Closes #2530

Test coverage

Manual deployment of a management and a workload cluster with a FIP, checking in the nodes' log files that no packets would have been dropped.

CI configuration

Below you can choose test deployment variants to run in this MR's CI.

Click to open the CI configuration

Legend:

| Icon | Meaning | Available values |
|------|---------|------------------|
| ☁️ | Infra Provider | capd, capo, capm3 |
| 🚀 | Bootstrap Provider | kubeadm (alias kadm), rke2 |
| 🐧 | Node OS | ubuntu, suse |
| 🛠️ | Deployment Options | light-deploy, dev-sources, ha, misc, maxsurge-0, logging, no-logging |
| 🎬 | Pipeline Scenarios | Available scenario list and description |
  • 🎬 preview ☁️ capd 🚀 kadm 🐧 ubuntu

  • 🎬 preview ☁️ capo 🚀 rke2 🐧 suse

  • 🎬 preview ☁️ capm3 🚀 rke2 🐧 ubuntu

  • ☁️ capd 🚀 kadm 🛠️ light-deploy 🐧 ubuntu

  • ☁️ capd 🚀 rke2 🛠️ light-deploy 🐧 suse

  • ☁️ capo 🚀 rke2 🐧 suse

  • ☁️ capo 🚀 kadm 🐧 ubuntu

  • ☁️ capo 🚀 rke2 🎬 rolling-update 🛠️ ha 🐧 ubuntu

  • ☁️ capo 🚀 kadm 🎬 wkld-k8s-upgrade 🐧 ubuntu

  • ☁️ capo 🚀 rke2 🎬 rolling-update-no-wkld 🛠️ ha 🐧 suse

  • ☁️ capo 🚀 rke2 🎬 sylva-upgrade-from-1.4.x 🛠️ ha 🐧 ubuntu

  • ☁️ capo 🚀 rke2 🎬 sylva-upgrade-from-1.4.x 🛠️ ha,misc 🐧 ubuntu

  • ☁️ capo 🚀 rke2 🛠️ ha,misc 🐧 ubuntu

  • ☁️ capm3 🚀 rke2 🐧 suse

  • ☁️ capm3 🚀 kadm 🐧 ubuntu

  • ☁️ capm3 🚀 kadm 🎬 rolling-update-no-wkld 🛠️ ha,misc 🐧 ubuntu

  • ☁️ capm3 🚀 rke2 🎬 wkld-k8s-upgrade 🛠️ ha 🐧 suse

  • ☁️ capm3 🚀 kadm 🎬 rolling-update 🛠️ ha 🐧 ubuntu

  • ☁️ capm3 🚀 rke2 🎬 sylva-upgrade-from-1.4.x 🛠️ ha 🐧 suse

  • ☁️ capm3 🚀 rke2 🛠️ misc,ha 🐧 suse

  • ☁️ capm3 🚀 rke2 🎬 sylva-upgrade-from-1.4.x 🛠️ ha,misc 🐧 suse

  • ☁️ capm3 🚀 kadm 🎬 rolling-update 🛠️ ha 🐧 suse

  • ☁️ capm3 🚀 ck8s 🎬 no-wkld 🛠️ light-deploy 🐧 ubuntu

Global config for deployment pipelines

  • autorun pipelines
  • allow failure on pipelines
  • record sylvactl events

Notes:

  • Enabling autorun makes deployment pipelines run automatically without human interaction.
  • Disabling allow failure makes deployment pipelines mandatory for pipeline success.
  • If both autorun and allow failure are disabled, deployment pipelines need manual triggering but will block the pipeline.

Be aware: after a configuration change, the pipeline is not triggered automatically. Please run it manually (by clicking the "Run pipeline" button in the Pipelines tab) or push new code.

Edited by Thomas Morin
