Bug 705238 - Xen guest with routed virtual network could ping public
Summary: Xen guest with routed virtual network could ping public
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat Enterprise Linux 5
Classification: Red Hat
Component: libvirt
Version: 5.7
Hardware: x86_64
OS: Linux
Priority: medium
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Libvirt Maintainers
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2011-05-17 05:39 UTC by zhanghaiyan
Modified: 2011-06-10 08:38 UTC
CC: 10 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2011-06-03 18:02:41 UTC
Target Upstream Version:



Description zhanghaiyan 2011-05-17 05:39:01 UTC
Description of problem:
Start a xenpv/xenfv guest with a routed virtual network; the guest can still
ping public addresses.

Version-Release number of selected component (if applicable):
2.6.18-259.el5xen
xen-3.0.3-130.el5
libvirt-0.8.2-20.el5

How reproducible:
4/4

Steps to Reproduce:
1. Define and start a routed virtual network
# virsh net-list --all
Name                 State      Autostart
-----------------------------------------
br1                  active     no        
default              active     yes       

# virsh net-dumpxml br1
<network>
  <name>br1</name>
  <uuid>8a2e947d-c1d5-4695-2881-5f877ced63e0</uuid>
  <forward mode='route'/>
  <bridge name='br1' stp='on' delay='0' />
  <ip address='192.168.100.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.100.2' end='192.168.100.254' />
    </dhcp>
  </ip>
</network>

2. Define and start a guest using the routed virtual network
# virsh list --all
 Id Name                 State
----------------------------------
  0 Domain-0             running
 48 xenfv-rhel5u6-x86_64 idle

# virsh dumpxml xenfv-rhel5u6-x86_64
.......
    <interface type='bridge'>
      <mac address='00:16:3e:d2:c1:41'/>
      <source bridge='xenbr0'/>
      <script path='vif-bridge'/>
      <target dev='vif48.0'/>
      <model type='rtl8139'/>
    </interface>
......
3. Log in to the guest and ping google.com
4. # iptables -vnL FORWARD
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 ACCEPT     all  --  *      br1     0.0.0.0/0            192.168.100.0/24    
    0     0 ACCEPT     all  --  br1    *       192.168.100.0/24     0.0.0.0/0           
    0     0 ACCEPT     all  --  br1    br1     0.0.0.0/0            0.0.0.0/0           
    0     0 REJECT     all  --  *      br1     0.0.0.0/0            0.0.0.0/0           reject-with icmp-port-unreachable 
    0     0 REJECT     all  --  br1    *       0.0.0.0/0            0.0.0.0/0           reject-with icmp-port-unreachable 
1828K 2633M ACCEPT     all  --  *      virbr0  0.0.0.0/0            192.168.122.0/24    state RELATED,ESTABLISHED 
 465K   26M ACCEPT     all  --  virbr0 *       192.168.122.0/24     0.0.0.0/0           
    0     0 ACCEPT     all  --  virbr0 virbr0  0.0.0.0/0            0.0.0.0/0           
    0     0 REJECT     all  --  *      virbr0  0.0.0.0/0            0.0.0.0/0           reject-with icmp-port-unreachable 
    0     0 REJECT     all  --  virbr0 *       0.0.0.0/0            0.0.0.0/0           reject-with icmp-port-unreachable 

# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
192.168.100.0   0.0.0.0         255.255.255.0   U     0      0        0 br1
192.168.122.0   0.0.0.0         255.255.255.0   U     0      0        0 virbr0
10.66.82.0      0.0.0.0         255.255.254.0   U     0      0        0 eth0
169.254.0.0     0.0.0.0         255.255.0.0     U     0      0        0 eth0
0.0.0.0         10.66.83.254    0.0.0.0         UG    0      0        0 eth0

# iptables -t nat -L
Chain PREROUTING (policy ACCEPT)
target     prot opt source               destination         

Chain POSTROUTING (policy ACCEPT)
target     prot opt source               destination         
MASQUERADE  tcp  --  192.168.122.0/24    !192.168.122.0/24    masq ports: 1024-65535 
MASQUERADE  udp  --  192.168.122.0/24    !192.168.122.0/24    masq ports: 1024-65535 
MASQUERADE  all  --  192.168.122.0/24    !192.168.122.0/24    

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination 
  
Actual results:
3. The guest can ping public addresses
(although the iptables rules shown in step 4 appear correct)

Expected results:
3. The guest cannot ping public addresses

Additional info:
On kvm, the guest cannot ping public addresses, as expected.

Comment 1 Daniel Berrangé 2011-05-17 08:37:38 UTC
Using the 'routed' network setup is *not* zero-conf, unlike the 'nat' setup - it requires manual configuration of routing by the LAN admin.

Have you configured the physical LAN router so that it knows to route traffic for '192.168.122.0/24' via the virtualization host?


e.g., on your LAN router you need something like

# ip route add 192.168.122.0/24 via 10.66.82.XXXX

(where XXXX is your physical host's IP addr)

For more info see

http://berrange.com/posts/2009/12/13/routed-subnets-without-nat-for-libvirt-managed-virtual-machines-in-fedora/
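
For reference, a rough way to sanity-check the routing prerequisites (a sketch, not part of the original comment; it assumes a Linux-based LAN router):

On the virtualization host, confirm IP forwarding is enabled, since routed mode relies on the host forwarding packets:
# cat /proc/sys/net/ipv4/ip_forward
(should print 1)

On the LAN router, check which next hop would currently be used for an address in the routed subnet, e.g. 192.168.100.2 from the report:
# ip route get 192.168.100.2
(until a route like the one above is added, this falls back to the router's default route instead of the virtualization host)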

Comment 2 zhanghaiyan 2011-05-18 08:06:38 UTC
DB, the bug is saying that before adding the route rule on host B, a guest on host A could ping host B. That is not expected.

Comment 3 Daniel Berrangé 2011-05-18 08:36:44 UTC
> DB, the bug is saying that before adding the route rule on host B, a guest on
> host A could ping host B. That is not expected.

Actually, that is expected. Outbound, a guest should be able to reach any remote host that its local host can reach. There is no inbound connectivity to the guest until the routes are added on the LAN router. So a guest on host A can access host B, but it cannot access a guest on host B that uses a routed network.
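
As a rough illustration (not from the original comment), with GUEST_IP standing for an address in the routed 192.168.100.0/24 subnet and HOSTA_IP for host A's LAN address, run on host B:

# ping -c 3 GUEST_IP
(expected to fail while host B has no route to the guest subnet)
# ip route add 192.168.100.0/24 via HOSTA_IP
# ping -c 3 GUEST_IP
(expected to succeed once host B routes the guest subnet via host A)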

Comment 4 zhanghaiyan 2011-05-18 10:31:15 UTC
DB, could you please help explain why the test result differs between kvm and xen?

On kvm
1) before adding route rule on host B
                 ping fail
                ------------>
guest on host A              host B   
                <------------
                 ping fail

2) after adding route rule on host B
                 ping pass
                ------------>
guest on host A              host B   
                <------------
                 ping pass

On xen
1) before adding route rule on host B

                 ping pass
                ------------>
guest on host A              host B   
                <------------
                 ping fail

2) after adding route rule on host B
                 ping pass
                ------------>
guest on host A              host B   
                <------------
                 ping fail

Comment 5 Huming Jiang 2011-05-31 04:32:06 UTC
Could not reproduce this bug on the following components:
kernel-xen-2.6.18-238.el5
libvirt-0.8.2-15.el5
virt-manager-0.6.1-13.el5

Steps to Reproduce:
1. Define and start a routed virtual network
# virsh net-list --all
Name                 State      Autostart
-----------------------------------------
default              active     yes       
route                active     yes       

# virsh net-dumpxml route
<network>
  <name>route</name>
  <uuid>71fe5d83-5361-10c7-b277-797d1fe9350c</uuid>
  <forward mode='route'/>
  <bridge name='virbr1' stp='on' delay='0' />
  <ip address='192.168.100.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.100.128' end='192.168.100.254' />
    </dhcp>
  </ip>
</network>


2. Define and start a guest using the routed virtual network
# virsh list --all
 Id Name                 State
----------------------------------
  0 Domain-0             running
  4 pv6.1                idle
  - a                    shut off


# virsh dumpxml pv6.1
.......
    <interface type='bridge'>
      <mac address='00:16:36:29:60:e6'/>
      <source bridge='virbr1'/>
      <script path='vif-bridge'/>
      <target dev='vif4.0'/>
    </interface>
......
3. Log in to the guest and ping google.com
# ping google.com
PING google.com (74.125.91.99) 56(84) bytes of data.
(no response)
4. # iptables -vnL FORWARD
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 REJECT     all  --  *      *       0.0.0.0/0            0.0.0.0/0           reject-with icmp-host-prohibited 

# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
192.168.100.0   0.0.0.0         255.255.255.0   U     1      0        0 eth0
0.0.0.0         192.168.100.1   0.0.0.0         UG    0      0        0 eth0


# iptables -t nat -L
Chain PREROUTING (policy ACCEPT)
target     prot opt source               destination         

Chain POSTROUTING (policy ACCEPT)
target     prot opt source               destination         

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination

Comment 7 Laine Stump 2011-06-03 18:02:41 UTC
In both test cases the configuration is not correct, and I believe this is leading to improper results.

The configuration in the original report creates a libvirt virtual network named "br1" (which in turn creates a bridge device called "br1"), but the domain's interface XML then tells the guest to connect to a bridge device called *"xenbr0"* - a completely unrelated device!

Beyond that, in both cases, rather than using the standard accepted method of connecting a guest's interface to a libvirt virtual network, eg:

   **CORRECT**
   <interface type='network'>
     <forward mode='routed'/>
     <source network='br1'/>
     <mac address='.....'/>
   </interface>
   **CORRECT**

(see http://wiki.libvirt.org/page/Networking - and, by the way, avoid using common names like "br1"; use something like "isolated" for the name of the network instead, to avoid confusion), you are connecting to the *bridge* that libvirt created for its virtual network as if it were a bridge created by the host's system config files:

   **WRONG**
   <interface type='bridge'>
     <source bridge='br1'/>
     ...
   **WRONG**

Finally, I'm not sure why you've included the

       <script path='vif-bridge'/>
       <target dev='vif32.0'/>

lines. These look like something that could be used to bypass libvirt's own virtual networks and connect to the network via an alternate method provided by Xen.

Because your test scenario seems to be both incorrect and unrelated to the recommended/supported methods for using libvirt's virtual networks, I am closing this bug.

Please redo your testing in the following manner, and file a new bug (or re-open this bug if the results are the same) if you still experience a failure:


1) Create routed virtual network like this:

  <network>
    <name>routed</name>
    <forward mode='route'/>
    <ip address='192.168.201.1'>
      <dhcp>
        <range start='192.168.201.2' end='192.168.201.254'/>
      </dhcp>
    </ip>
  </network>

(NB: libvirt will auto-generate a unique uuid and bridge device name)
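
As a sketch, the network above could be defined and started like this (assuming the XML is saved as routed.xml):

# virsh net-define routed.xml
# virsh net-start routed
# virsh net-autostart routed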

2) configure your guest domain's interface XML like this (make sure there is only *1* <interface> element in the domain's XML!):

   <interface type='network'>
     <source network='routed'/>
   </interface>

(NB: libvirt will auto-generate a unique mac address for the interface)
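
One possible way to apply this, sketched with the guest name from the original report (substitute your own domain name):

# virsh edit xenfv-rhel5u6-x86_64
(replace the existing <interface> element with the one above)
# virsh destroy xenfv-rhel5u6-x86_64
# virsh start xenfv-rhel5u6-x86_64
# virsh dumpxml xenfv-rhel5u6-x86_64 | grep -A4 '<interface'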

Now re-run the test.

Comment 9 zhanghaiyan 2011-06-10 08:29:06 UTC
Following the guide in comment 7, I have re-tested this bug; the routed virtual network works well in the xen guest. But there is a little information to update:

After adding the following network interface XML to the guest config XML
     <interface type='network'>
          <source network='routed'/>
     </interface>
it is automatically changed to bridge type in the guest config XML, as below:
    <interface type='bridge'>
      <mac address='00:16:3e:2e:b9:ef'/>
      <source bridge='virbr1'/>
      <script path='vif-bridge'/>
      <target dev='vif5.0'/>
    </interface>
I think this is correct, because bridge 'virbr1' is the same bridge that belongs to the network 'routed':
# virsh net-dumpxml routed
<network>
  <name>routed</name>
  <uuid>3f462fe5-077c-9846-eee2-61f9652dfedb</uuid>
  <forward mode='route'/>
  <bridge name='virbr1' stp='on' delay='0' />
  <ip address='10.0.0.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='10.0.0.2' end='10.0.0.254' />
    </dhcp>
  </ip>
</network>
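
As a cross-check (not in the original comment), the bridge membership can also be confirmed on the host:
# brctl show virbr1
(should list the guest's vif interface, vif5.0 here, among the bridge's interfaces)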

Comment 10 zhanghaiyan 2011-06-10 08:38:54 UTC
"The routed virtual network works well" means:
1) before adding the route rule on host B, guest A could not ping host B
2) after adding the route rule on host B, guest A could ping host B

