It's part of libreswan's IPsec test framework. There are hundreds of these interfaces and I'd like them to not have a firewall:
they're isolated
arguably, the firewall could get in the way of some of the more vindictive tests
creating/starting all these interfaces is noticeably slow while all the firewall rules are being configured
but I can't see a way to do this.
They are created using net-create (they were previously created using define/start, which dug an even deeper hole after a reboot).
The only potential workaround I know is to remove firewalld but that strikes me as wrong as I really like the firewall rules on the non-isolated networks.
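For context, the network definitions in question are not shown in this report; presumably they look roughly like the following hypothetical reconstruction (name, bridge, and addresses are illustrative, not taken from the actual test framework):

```xml
<network>
  <!-- no <forward> element: a libvirt "isolated" network -->
  <name>c.192_0_1</name>
  <bridge name='virbr10'/>
  <ip address='192.0.1.254' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.0.1.100' end='192.0.1.200'/>
    </dhcp>
  </ip>
</network>
```

With an `<ip>` element present, libvirt serves DHCP/DNS on the bridge, which is what drags the firewall into the picture, as discussed below.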
As with mode='route', guest network traffic will be forwarded to the physical network via the host's IP routing stack, but there will be no firewall rules added to either enable or prevent any of this traffic. When forward='open' is set, the dev attribute cannot be set (because the forward dev is enforced with firewall rules, and the purpose of forward='open' is to have a forwarding mode where libvirt doesn't add any firewall rules). This mode presumes that the local LAN router has suitable routing table entries to return traffic to this host, and that some other management system has been used to put in place any necessary firewall rules. Although no firewall rules will be added for the network, it is of course still possible to add restrictions for specific guests using nwfilter rules on the guests' interfaces. Since 2.2.0
open is under forward, where guest network traffic will be forwarded to the physical network via the host's IP routing stack.
My understanding was that it should only be added when the network, in some way, should be connected to the host. Here the networks (they all look the same) need to be isolated.
This network isn't defining any <forward> element, so I was a little surprised that it resulted in any firewall interaction at all. Looking at what libvirt does, it appears we're merely punching holes for DNS and DHCP in the host firewall.
In a firewalld nftables world, this hole punching is redundant, since (a) it won't actually successfully punch any holes, and (b) we add the interface to the firewalld 'libvirt' zone, which punches the holes for everything in the zone.
Thus libvirt should not be creating these rules at all, and indeed, with our newly proposed native 'nftables' impl, we'll no longer do this.
IOW, once the above patch set merges, I believe that your example bridge network XML here will not involve any firewall changes from libvirt's side. Possibly this might be enough to satisfy your needs, unless the act of attaching the bridge interface to a firewalld zone gets in your way, but you can control that with the 'zone' attribute on the bridge element.
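Where the zone placement matters, it can be pinned explicitly on the bridge element; a hypothetical fragment (the bridge name is illustrative, and 'trusted' is one of firewalld's stock zones):

```xml
<bridge name='virbr10' zone='trusted'/>
```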
Even when the tftp/dns/dhcp rules for each network are eliminated, an "isolated" network will still have rules added that 1) permit traffic between guests on the same network, and 2) reject traffic between guests on that network and any other network on the same host.
So simply switching to the upcoming nftables backend isn't going to eliminate all firewall rules added for each virtual network.
If you really want the networks to have 0 associated firewall rules, then the proper solution is to use forward mode='open' - the only difference between an isolated network and a forward mode='open' network is that the latter has no firewall rules. That is not going to change with the nftables backend.
What makes an "isolated" network isolated is the REJECT rules that are added, and nothing else. But in the case of forward mode='open', as long as nobody on the network beyond the virtualization host has a route entry pointing back to the subnet defined for the network, then there can be no communication between the guests on that network and the outside anyway (well, the guests could send out SYN packets, but the SYN-ACK would never get back to them because the recipient wouldn't know that the response packet was supposed to be sent to the virtualization host, and so it would just go to the default route's next hop instead, then "disappear") - this is exactly equivalent to an "isolated" network (no forward element) with all the firewall rules removed - it is the firewall rules themselves, and nothing else, that differentiate between the alternate forward modes.
So I think that forward mode='open' is exactly what you want, and that isn't going to change with the nftables backend.
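A minimal forward mode='open' network definition might look like the following sketch (name, bridge, and address are illustrative):

```xml
<network>
  <name>c.192_0_1</name>
  <forward mode='open'/>
  <bridge name='virbr10'/>
  <ip address='192.0.1.254' netmask='255.255.255.0'/>
</network>
```

As described above, libvirt adds no enabling or restricting firewall rules for such a network; isolation then depends entirely on nothing beyond the host having a route back to that subnet.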
My understanding was that it should only be added when the network, in some way, should be connected to the host. Here the networks (they all look the same) need to be isolated.
I think you may be under the misconception that an isolated virtual network is isolated from the host as well as from the rest of the network. That is not the case. Guests on an isolated network can communicate amongst themselves, and also with the host. This is noted here:
If you really truly need the guests to be incapable of communicating with the host via IP, then you will need to create a virtual network with no IP addresses in its config, and then either setup a guest that runs a DHCP/DNS server, or alternately use static IP config on all the guests.
Ah, and now I've finally read the full original report rather than just clicking on the link from the notification email and scanning through it.
Even what you have there will still add the rules that prevent communication between virtual networks on the same host, since those rules are interface-based rather than IP-based, so they would still be added even if the network didn't have an IP address.
To make sure there are no rules added, and IP communication with the host isn't possible, you'd still need to add
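The fragment that was meant to follow appears to have been lost from this message; presumably it is forward mode='open' on an otherwise IP-less network, along these lines (a hypothetical reconstruction, names illustrative):

```xml
<network>
  <name>c.192_0_1</name>
  <forward mode='open'/>
  <bridge name='virbr10'/>
  <!-- deliberately no <ip> element, so no DHCP/DNS and no IP path to the host -->
</network>
```

As the next message shows, current libvirt rejects exactly this combination.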
[cagney@bernard git-crypto main│+1Δ1]$ sudo virsh --connect=qemu:///system net-create /home/libreswan/pool/c.192_0_1.tmp
error: Failed to create network from /home/libreswan/pool/c.192_0_1.tmp
error: XML error: open forwarding requested, but no IP address provided for network 'c.192_0_1'
sigh :-/ Sorry I didn't actually try it first (I spent enough time looking through the code at what would happen with that config, but didn't think to check whether the config would be allowed in the first place).
So this is a bona fide bug, and needs to be addressed right away. I'll dig out the offending code and send a patch today. (It actually looks like the code doing this too-smart-for-its-own-good validation lives in the <interface> element parser rather than in the separate validation function that's supposed to contain all checks for conflicting config; since that code was written prior to the existence of separate validation functions, I'm not surprised.)
Even a patch upstream today doesn't immediately solve the problem for you though. I'm trying to think of the simplest way for you to work around the issue. I thought I'd come up with a couple ways, but they each ended up missing one or two details, so I won't bore you with them. The one way I can think of that should satisfy all your requirements would be to pre-create all the bridge devices with NetworkManager (using nmcli), then either directly use <interface type='bridge'> in the guest config (instead of type='network') and reference the bridge name (<source bridge='blah01'/>), or alternately create libvirt virtual networks for each bridge that just use the existing bridge that you created with nmcli ( https://libvirt.org/formatnetwork.html#using-an-existing-host-bridge ).
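For the second variant of that workaround, after pre-creating the bridge with NetworkManager (e.g. `nmcli connection add type bridge ifname blah01 ipv4.method disabled ipv6.method disabled` - the name blah01 is hypothetical), the libvirt network that merely wraps the existing bridge would be roughly:

```xml
<network>
  <name>blah01</name>
  <!-- mode='bridge' means libvirt neither creates the bridge
       nor adds any firewall rules; it just hands out the name -->
  <forward mode='bridge'/>
  <bridge name='blah01'/>
</network>
```

Guests then reference it with <source network='blah01'/> exactly as before, so the guest configs don't need to change.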
I'll update here when I send the patch that eliminates the excess validation.