Bug 2055707

Summary: firewalld backend: routed networks don't allow incoming (to libvirt) traffic
Product: Red Hat Enterprise Linux 9
Component: libvirt
Sub Component: Networking
Version: 9.0
Hardware: Unspecified
OS: Unspecified
Status: CLOSED CURRENTRELEASE
Severity: medium
Priority: medium
Keywords: Triaged
Reporter: Eric Garver <egarver>
Assignee: Laine Stump <laine>
QA Contact: yalzhang <yalzhang>
CC: jsuchane, laine, lvivier, virt-maint, xuzhang, yanqzhan
Target Milestone: rc
Last Closed: 2023-07-24 19:33:57 UTC
Type: Bug

Description Eric Garver 2022-02-17 14:56:10 UTC
Firewalld v1.0.0 included breaking changes [1] that affect the libvirt
firewalld backend when using a routed network.

Routed networks worked prior to v1.0.0 only because of firewalld issue 177
[2]. That issue was fixed in v1.0.0, so packets going world -> libvirt are
now blocked.

Assuming we want to address the situation, I think libvirt should use a
firewalld policy to expose the routed VM network.

  # firewall-cmd --permanent --new-policy libvirt-fwd-in
  # firewall-cmd --permanent --policy libvirt-fwd-in --add-ingress-zone ANY
  # firewall-cmd --permanent --policy libvirt-fwd-in --add-egress-zone libvirt
  # firewall-cmd --permanent --policy libvirt-fwd-in --set-target ACCEPT
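
Since those are all permanent changes, a reload is needed for them to take
effect:

  # firewall-cmd --reload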

This policy could be shipped similarly to the libvirt zone.

Of course, you don't want this policy to go active when using a NAT'd
configuration. To control that, the shipped policy could omit the
egress zone (leaving it inactive), and libvirt would add the egress zone
at runtime only for routed networks.
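
As a sketch (the file name and install path are assumptions, following
firewalld's usual policy layout), the shipped file would then define
everything except the egress zone:

  # cat /usr/lib/firewalld/policies/libvirt-fwd-in.xml
  <?xml version="1.0" encoding="utf-8"?>
  <policy target="ACCEPT">
    <ingress-zone name="ANY"/>
  </policy>

and for a routed network libvirt would do the runtime equivalent of:

  # firewall-cmd --policy libvirt-fwd-in --add-egress-zone libvirt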

[1]: https://firewalld.org/2021/06/the-upcoming-1-0-0 "Default target is now similar to reject"
[2]: https://github.com/firewalld/firewalld/issues/177

Comment 2 yalzhang@redhat.com 2023-07-24 03:36:12 UTC
Tested on libvirt-9.5.0-3.el9.x86_64; the issue is fixed.
Refer to bug 2055706#c3 for the test steps.

Comment 3 yalzhang@redhat.com 2023-07-24 04:43:16 UTC
Tested on libvirt-9.5.0-3.el9.x86_64; the issue is fixed.
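
As a sanity check, the policies firewalld has loaded can be listed (exactly
which policy names the fixed libvirt installs is an assumption to verify
against the actual system):

# firewall-cmd --get-policies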

1. Prepare and start a network with forward mode='route':
# virsh net-dumpxml route 
<network>
  <name>route</name>
  <uuid>ad814f81-3ffe-4f3b-b662-1e997a524156</uuid>
  <forward mode='route'/>
  <bridge name='virbr1' stp='on' delay='0'/>
  <mac address='52:54:00:4e:26:38'/>
  <ip address='192.168.100.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.100.2' end='192.168.100.254'/>
    </dhcp>
  </ip>
  <ip family='ipv6' address='2001:db8:ca2:2::1' prefix='64'>
    <dhcp>
      <range start='2001:db8:ca2:2::10' end='2001:db8:ca2:2::ff'/>
    </dhcp>
  </ip>
</network>
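
To define and start the network from this XML (assuming it is saved as
route.xml):

# virsh net-define route.xml
# virsh net-start route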

2. Start a VM with a network type interface connected to this network;
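
The guest's interface element would look something like this (the model
line is illustrative):

<interface type='network'>
  <source network='route'/>
  <model type='virtio'/>
</interface>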

3. On a remote host that can reach this local host, add a route as below:
# ip route add 192.168.100.0/24 dev eno1 via ${ip_of_local_host}
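
For the IPv6 pings in the following steps, a corresponding IPv6 route is
needed as well, unless the remote host already has one (the placeholder is
illustrative):

# ip -6 route add 2001:db8:ca2:2::/64 via ${ipv6_of_local_host} dev eno1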

4. On the remote host, ping the guest (both IPv4 and IPv6); the pings succeed.
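
For example (the guest addresses are illustrative, picked from the DHCP
ranges above):

# ping -c 3 192.168.100.2
# ping -c 3 2001:db8:ca2:2::10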

5. On the guest, ping the remote host (both IPv4 and IPv6); the pings succeed.

Comment 4 yalzhang@redhat.com 2023-07-24 04:46:01 UTC
Hi Laine, given the test results in comment 3 and what you confirmed on bug 2055706#c5, I think we can close this bug as "current release".
What do you think? Thank you!

Comment 5 Laine Stump 2023-07-24 19:33:57 UTC
Agreed, and done. Thanks for following up!