From Bugzilla Helper:
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.8.1.14) Gecko/20080404 Firefox/2.0.0.14

Description of problem:
I have two VMs: A (attached to vnet1) and B (attached to vnet2). A can ssh to B, but B can't ssh to (or ping) A. That's the problem. B can ping the gateway for vnet1, but B can't ping A, which is attached to vnet1. Both A and B can ssh to dom0, and both can access other nodes on the Internet.

If I shut down the VMs and rewire A to vnet2 and B to vnet1, the problem stays with the vnets: now B can ssh to A, but A can't ssh to B. Both A and B can still ssh to dom0, and both can access other nodes on the Internet.

I have the following set in /etc/sysctl.conf:

net.ipv4.ip_forward = 1

This worked in RHEL 5.1, but doesn't appear to work in RHEL 5.2.

Version-Release number of selected component (if applicable):
libvirt-0.3.3-7.el5

How reproducible:
Always

Steps to Reproduce:
1. Create two virtual networks, say vnet1 and vnet2. Here are mine:

~~~
virsh # net-dumpxml 172_30_2_x
<network>
  <name>172_30_2_x</name>
  <uuid>4893f67b-c3f5-42f0-a020-dab6b2815a66</uuid>
  <forward/>
  <bridge name='vnet1' stp='on' forwardDelay='0'/>
  <ip address='172.30.2.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='172.30.2.100' end='172.30.2.254'/>
    </dhcp>
  </ip>
</network>

virsh # net-dumpxml 172_30_3_x
<network>
  <name>172_30_3_x</name>
  <uuid>d1c55a64-8c15-be77-6546-15233e491912</uuid>
  <forward/>
  <bridge name='vnet2' stp='on' forwardDelay='0'/>
  <ip address='172.30.3.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='172.30.3.100' end='172.30.3.254'/>
    </dhcp>
  </ip>
</network>
~~~

2. Create two virtual machines (say A and B) with one network interface each. Set one VM's network interface to bridge=vnet1 and the other VM's network interface to bridge=vnet2.
3. Start the two virtual machines A and B.

Actual Results:
A will be able to ssh to B. B will not be able to ssh to A. B will be able to ping the gateway for vnet1 but will not be able to access any VMs attached to vnet1, such as A. Both A and B can ssh to dom0, and both can access other nodes on the Internet.

From the VM on vnet1 (can ping the VM on vnet2):

~~~
[root@localhost ~]# ping 172.30.3.109
PING 172.30.3.109 (172.30.3.109) 56(84) bytes of data.
64 bytes from 172.30.3.109: icmp_seq=1 ttl=63 time=0.216 ms
64 bytes from 172.30.3.109: icmp_seq=2 ttl=63 time=0.173 ms
~~~

From the VM on vnet2 (can't ping the VM on vnet1, but can ping the vnet1 gateway):

~~~
[root@localhost ~]# ping 172.30.2.170
PING 172.30.2.170 (172.30.2.170) 56(84) bytes of data.
From 172.30.3.1 icmp_seq=1 Destination Port Unreachable
From 172.30.3.1 icmp_seq=2 Destination Port Unreachable
...

[root@localhost ~]# ping 172.30.2.1
PING 172.30.2.1 (172.30.2.1) 56(84) bytes of data.
64 bytes from 172.30.2.1: icmp_seq=1 ttl=64 time=0.140 ms
64 bytes from 172.30.2.1: icmp_seq=2 ttl=64 time=0.115 ms
~~~

Expected Results:
A will be able to ssh to B, and B will be able to ssh to A. Both A and B can access dom0 and other nodes on the Internet.

Additional info:
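For reference, a network like the ones above can be registered from its XML definition. A minimal sketch (the file path is hypothetical; the XML mirrors the net-dumpxml output in the steps above):

~~~
# Register and start the virtual network from an XML file;
# 'net-autostart' makes it come up on host boot.
virsh net-define /tmp/172_30_2_x.xml
virsh net-start 172_30_2_x
virsh net-autostart 172_30_2_x
~~~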
Please provide the config files for the 2 virtual machines from /etc/xen. Also include the output of 'virsh dumpxml DOMAINNAME' for each virtual machine. Finally, include the output of 'iptables -L -n -v' and 'iptables -t nat -L -n -v'.
VM A is the VM called "test" and is currently connected to vnet1. VM B is the VM called "rhel5" and is currently connected to vnet2.

~~~
root@koloff:/etc/xen:2# cat test
name = "test"
uuid = "0aa0c6a9-d716-6330-e5dd-8bbd7fbf0662"
maxmem = 256
memory = 256
vcpus = 1
bootloader = "/usr/bin/pygrub"
on_poweroff = "destroy"
on_reboot = "restart"
on_crash = "restart"
vfb = [ "type=vnc,vncunused=1,keymap=en-us" ]
disk = [ "tap:aio:/var/lib/xen/images/test.img,xvda,w" ]
vif = [ "mac=00:16:3e:1c:7e:b1,bridge=vnet1" ]

root@koloff:/etc/xen:3# cat rhel5
name = "rhel5"
uuid = "e3b6a84a-8210-5755-dfac-169d7edba959"
maxmem = 512
memory = 512
vcpus = 2
bootloader = "/usr/bin/pygrub"
on_poweroff = "destroy"
on_reboot = "restart"
on_crash = "restart"
vfb = [ "type=vnc,vncunused=1,keymap=en-us" ]
disk = [ "tap:aio:/var/lib/xen/images/rhel5.img,xvda,w" ]
vif = [ "mac=00:16:3e:3c:6d:d2,bridge=vnet2" ]
~~~

~~~
root@koloff:/etc/xen:4# virsh dumpxml test
<domain type='xen'>
  <name>test</name>
  <uuid>0aa0c6a9-d716-6330-e5dd-8bbd7fbf0662</uuid>
  <bootloader>/usr/bin/pygrub</bootloader>
  <currentMemory>262144</currentMemory>
  <memory>262144</memory>
  <vcpu>1</vcpu>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <disk type='file' device='disk'>
      <driver name='tap' type='aio'/>
      <source file='/var/lib/xen/images/test.img'/>
      <target dev='xvda'/>
    </disk>
    <interface type='bridge'>
      <mac address='00:16:3e:1c:7e:b1'/>
      <source bridge='vnet1'/>
    </interface>
    <input type='mouse' bus='xen'/>
    <graphics type='vnc' port='-1' keymap='en-us'/>
    <console/>
  </devices>
</domain>

root@koloff:/etc/xen:5# virsh dumpxml rhel5
<domain type='xen'>
  <name>rhel5</name>
  <uuid>e3b6a84a-8210-5755-dfac-169d7edba959</uuid>
  <bootloader>/usr/bin/pygrub</bootloader>
  <currentMemory>524288</currentMemory>
  <memory>524288</memory>
  <vcpu>2</vcpu>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <disk type='file' device='disk'>
      <driver name='tap' type='aio'/>
      <source file='/var/lib/xen/images/rhel5.img'/>
      <target dev='xvda'/>
    </disk>
    <interface type='bridge'>
      <mac address='00:16:3e:3c:6d:d2'/>
      <source bridge='vnet2'/>
    </interface>
    <input type='mouse' bus='xen'/>
    <graphics type='vnc' port='-1' keymap='en-us'/>
    <console/>
  </devices>
</domain>
~~~

~~~
root@koloff:/etc/xen:6# iptables -L -n -v
Chain INPUT (policy ACCEPT 2322 packets, 2509K bytes)
 pkts bytes target     prot opt in     out    source     destination
    0     0 ACCEPT     udp  --  vnet1  *      0.0.0.0/0  0.0.0.0/0   udp dpt:53
    0     0 ACCEPT     tcp  --  vnet1  *      0.0.0.0/0  0.0.0.0/0   tcp dpt:53
    0     0 ACCEPT     udp  --  vnet1  *      0.0.0.0/0  0.0.0.0/0   udp dpt:67
    0     0 ACCEPT     tcp  --  vnet1  *      0.0.0.0/0  0.0.0.0/0   tcp dpt:67
    0     0 ACCEPT     udp  --  virbr0 *      0.0.0.0/0  0.0.0.0/0   udp dpt:53
    0     0 ACCEPT     tcp  --  virbr0 *      0.0.0.0/0  0.0.0.0/0   tcp dpt:53
    0     0 ACCEPT     udp  --  virbr0 *      0.0.0.0/0  0.0.0.0/0   udp dpt:67
    0     0 ACCEPT     tcp  --  virbr0 *      0.0.0.0/0  0.0.0.0/0   tcp dpt:67
    0     0 ACCEPT     udp  --  vnet0  *      0.0.0.0/0  0.0.0.0/0   udp dpt:53
    0     0 ACCEPT     tcp  --  vnet0  *      0.0.0.0/0  0.0.0.0/0   tcp dpt:53
    0     0 ACCEPT     udp  --  vnet0  *      0.0.0.0/0  0.0.0.0/0   udp dpt:67
    0     0 ACCEPT     tcp  --  vnet0  *      0.0.0.0/0  0.0.0.0/0   tcp dpt:67
    0     0 ACCEPT     udp  --  vnet2  *      0.0.0.0/0  0.0.0.0/0   udp dpt:53
    0     0 ACCEPT     tcp  --  vnet2  *      0.0.0.0/0  0.0.0.0/0   tcp dpt:53
    0     0 ACCEPT     udp  --  vnet2  *      0.0.0.0/0  0.0.0.0/0   udp dpt:67
    0     0 ACCEPT     tcp  --  vnet2  *      0.0.0.0/0  0.0.0.0/0   tcp dpt:67
    0     0 ACCEPT     udp  --  vnet3  *      0.0.0.0/0  0.0.0.0/0   udp dpt:53
    0     0 ACCEPT     tcp  --  vnet3  *      0.0.0.0/0  0.0.0.0/0   tcp dpt:53
    0     0 ACCEPT     udp  --  vnet3  *      0.0.0.0/0  0.0.0.0/0   udp dpt:67
    0     0 ACCEPT     tcp  --  vnet3  *      0.0.0.0/0  0.0.0.0/0   tcp dpt:67

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out    source            destination
    0     0 ACCEPT     all  --  *      vnet1  0.0.0.0/0         172.30.2.0/24     state RELATED,ESTABLISHED
    0     0 ACCEPT     all  --  vnet1  *      172.30.2.0/24     0.0.0.0/0
    0     0 ACCEPT     all  --  vnet1  vnet1  0.0.0.0/0         0.0.0.0/0
    0     0 REJECT     all  --  *      vnet1  0.0.0.0/0         0.0.0.0/0         reject-with icmp-port-unreachable
    0     0 REJECT     all  --  vnet1  *      0.0.0.0/0         0.0.0.0/0         reject-with icmp-port-unreachable
    0     0 ACCEPT     all  --  *      virbr0 0.0.0.0/0         192.168.122.0/24  state RELATED,ESTABLISHED
    0     0 ACCEPT     all  --  virbr0 *      192.168.122.0/24  0.0.0.0/0
    0     0 ACCEPT     all  --  virbr0 virbr0 0.0.0.0/0         0.0.0.0/0
    0     0 REJECT     all  --  *      virbr0 0.0.0.0/0         0.0.0.0/0         reject-with icmp-port-unreachable
    0     0 REJECT     all  --  virbr0 *      0.0.0.0/0         0.0.0.0/0         reject-with icmp-port-unreachable
    0     0 ACCEPT     all  --  *      vnet0  0.0.0.0/0         172.30.1.0/24     state RELATED,ESTABLISHED
    0     0 ACCEPT     all  --  vnet0  *      172.30.1.0/24     0.0.0.0/0
    0     0 ACCEPT     all  --  vnet0  vnet0  0.0.0.0/0         0.0.0.0/0
    0     0 REJECT     all  --  *      vnet0  0.0.0.0/0         0.0.0.0/0         reject-with icmp-port-unreachable
    0     0 REJECT     all  --  vnet0  *      0.0.0.0/0         0.0.0.0/0         reject-with icmp-port-unreachable
    0     0 ACCEPT     all  --  *      vnet2  0.0.0.0/0         172.30.3.0/24     state RELATED,ESTABLISHED
    0     0 ACCEPT     all  --  vnet2  *      172.30.3.0/24     0.0.0.0/0
    0     0 ACCEPT     all  --  vnet2  vnet2  0.0.0.0/0         0.0.0.0/0
    0     0 REJECT     all  --  *      vnet2  0.0.0.0/0         0.0.0.0/0         reject-with icmp-port-unreachable
    0     0 REJECT     all  --  vnet2  *      0.0.0.0/0         0.0.0.0/0         reject-with icmp-port-unreachable
    0     0 ACCEPT     all  --  *      vnet3  0.0.0.0/0         172.30.4.0/24     state RELATED,ESTABLISHED
    0     0 ACCEPT     all  --  vnet3  *      172.30.4.0/24     0.0.0.0/0
    0     0 ACCEPT     all  --  vnet3  vnet3  0.0.0.0/0         0.0.0.0/0
    0     0 REJECT     all  --  *      vnet3  0.0.0.0/0         0.0.0.0/0         reject-with icmp-port-unreachable
    0     0 REJECT     all  --  vnet3  *      0.0.0.0/0         0.0.0.0/0         reject-with icmp-port-unreachable

Chain OUTPUT (policy ACCEPT 1596 packets, 135K bytes)
 pkts bytes target     prot opt in     out    source            destination

root@koloff:/etc/xen:7# iptables -t nat -L -n -v
Chain PREROUTING (policy ACCEPT 36 packets, 8780 bytes)
 pkts bytes target     prot opt in     out    source            destination

Chain POSTROUTING (policy ACCEPT 31 packets, 1975 bytes)
 pkts bytes target     prot opt in     out    source            destination
    1   141 MASQUERADE all  --  *      *      172.30.2.0/24     0.0.0.0/0
    1   141 MASQUERADE all  --  *      *      192.168.122.0/24  0.0.0.0/0
    1   141 MASQUERADE all  --  *      *      172.30.1.0/24     0.0.0.0/0
    1   141 MASQUERADE all  --  *      *      172.30.3.0/24     0.0.0.0/0
    1    40 MASQUERADE all  --  *      *      172.30.4.0/24     0.0.0.0/0

Chain OUTPUT (policy ACCEPT 36 packets, 2579 bytes)
 pkts bytes target     prot opt in     out    source            destination
~~~
That XML dump shows that neither VM is running. Please do it again when the VMs are actually running. Please also include the 'brctl show' and 'xenstore-ls' output while both VMs are running. Don't paste this into BZ - attach files; otherwise BZ mangles line breaks on long lines.
Created attachment 311156
virsh dumpxml test > ~/virsh_dumpxml_test.txt

Created attachment 311157
virsh dumpxml rhel5 > ~/virsh_dumpxml_rhel5.txt

Created attachment 311158
iptables -L -n -v > ~/iptables_-L_-n_-v.txt

Created attachment 311159
iptables -t nat -L -n -v > iptables_-t_nat_-L_-n_-v.txt

Created attachment 311160
brctl show > ~/brctl_show.txt

Created attachment 311161
xenstore-ls > ~/xenstore-ls.txt

Created attachment 311162
/etc/xen/test

Created attachment 311163
/etc/xen/rhel5
Sorry about pasting into BZ. I'll go the attachment route moving forward. The output of the requested commands is now uploaded as attachments. I made sure both VMs were running at the time the commands were run.
OK, reproduced the problem. It is caused by the ordering of the iptables rules. I can't think of any easy way to solve this problem without a major reorganization of the way our iptables rules work.
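To spell out what's happening in the FORWARD chain output above: a new connection from B (vnet2, 172.30.3.x) to A has to leave via vnet1, so it falls through vnet1's ACCEPT rules (no RELATED/ESTABLISHED state, wrong inbound interface) and hits 'REJECT all -- * vnet1' before any of the vnet2 rules are even consulted; that's why B sees "Destination Port Unreachable" from 172.30.3.1. A new connection from A enters on vnet1 and immediately matches 'ACCEPT all -- vnet1 * 172.30.2.0/24', so A-to-B works.

In the meantime, a possible manual workaround for anyone who needs the cross-vnet traffic (unsupported, a sketch only, and not persistent across host reboots or libvirtd restarts; the subnets are the ones from this report) would be to insert pair-specific ACCEPT rules ahead of libvirt's rules:

~~~
# Allow new connections between the two virtual networks in both
# directions by inserting these rules at the top of the FORWARD chain,
# so they match before libvirt's per-bridge REJECT rules do:
iptables -I FORWARD 1 -s 172.30.2.0/24 -d 172.30.3.0/24 -j ACCEPT
iptables -I FORWARD 1 -s 172.30.3.0/24 -d 172.30.2.0/24 -j ACCEPT
~~~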
Thanks for the confirmation! For what it's worth, I do know that it worked as expected in RHEL 5.1. I'm not sure whether the RHEL 5.1 behavior can be leveraged, or whether the code differences between RHEL 5.1 and 5.2 are too great to make that worthwhile. Just a suggestion.
Actually, I wasn't thinking straight when I wrote the previous message. The semantics of the virtual networking are that machines on a virtual network only have outbound connectivity to the LAN. Machines on one virtual network should not be able to connect to another virtual network. So there is a bug here: traffic is supposed to be denied in *both* directions. The only intended inbound connectivity to guests is from the Dom0.

I'm surprised you noticed a difference from 5.1, because AFAIK we didn't change the rules used in 5.2.
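To make the intended policy concrete, a hypothetical sketch (not the actual libvirt fix, which would live in libvirt's rule generation) of rules that would deny new cross-vnet traffic in both directions, shown here for the vnet1/vnet2 pair only:

~~~
# Reject forwarding between the two virtual network bridges in both
# directions, ahead of whatever ACCEPT rules follow in the chain:
iptables -I FORWARD 1 -i vnet1 -o vnet2 -j REJECT
iptables -I FORWARD 1 -i vnet2 -o vnet1 -j REJECT
~~~

Per-pair rules like these would have to be generated for every pair of virtual networks, which is presumably part of why a clean fix requires reorganizing the rule layout.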
If you try this on a RHEL 5.1 host, traffic is allowed in both directions. This is actually desired behavior, at least for me. Here's a potential use case...

What I'd like to do is have two vnets. One vnet is my normal, generic vnet to which all VMs connect. I'd also like to have a second vnet which is a Satellite provisioning network. This second vnet has libvirt's built-in DHCP turned off (a sketch of such a network definition follows below). My Satellite VM would have two virtual NICs and be connected to both vnets. The Satellite VM would run DHCP and tftp servers listening on the second vnet. I'd create a VM to PXE boot off of the second vnet. It would get an IP address for the second vnet via DHCP from the DHCP server running on the Satellite VM. It would pull down the kickstart file from the Satellite, and then do the installation using the installation URL of satellite.example.com, whose IP is associated with the Satellite's first virtual NIC connected to the first vnet.

On RHEL 5.1, this worked great. On RHEL 5.2, the PXE VM on the second vnet was able to PXE boot and start the installer, but the installer failed when it tried to access satellite.example.com on the first vnet.

Is there another, possibly better, way to achieve a similar result? Here are some alternatives that I came up with:

- I could turn off libvirt DHCP on the first vnet and run the DHCP server on dom0. That would work, but it would somewhat defeat the purpose of leveraging the power of libvirt.
- I could add a second virtual NIC to the PXE VM and connect it to the first vnet. Maybe this would be the best option?
- I could move satellite.example.com to the second vnet, but that would prevent the VMs on the first vnet from accessing it.

My intuition says that there may be other use cases for allowing VMs on separate vnets on the same host to speak with each other. Taking a step back and thinking of my host as a "virtual server room", there would be cases where you want the hosts on separate vlans not to communicate, and other cases where you would. Could this at least be an optional setting? For instance, it could be an option similar to the libvirt virtual network option that allows the vnet to forward traffic to physical devices.

Thanks for taking the time to help me!
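For what it's worth, here is a sketch of what the DHCP-less provisioning network mentioned above could look like (the name, bridge, and addresses are hypothetical). Omitting the <dhcp> element is what turns off libvirt's built-in DHCP on that network:

~~~
<network>
  <name>provisioning</name>
  <forward/>
  <bridge name='vnet4' stp='on' forwardDelay='0'/>
  <!-- No <dhcp> block: libvirt serves no addresses here, leaving the
       Satellite VM free to run its own DHCP and tftp servers. -->
  <ip address='172.30.5.1' netmask='255.255.255.0'/>
</network>
~~~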
So I wound up adding a second virtual NIC to the PXE VM and connecting it to the first vnet (see the vif sketch below). That essentially addresses my original requirement for Satellite PXE provisioning as noted in comment #17. In other words, I'm able to do what I need by using multiple NICs in a VM connected to different vnets on the same host.

The original way I addressed this requirement was to have separate VMs connected to separate vnets and to expect them to be able to communicate with each other across vnets. This worked great in RHEL 5.1, but only worked in one direction in RHEL 5.2. As noted in comment #16, this shouldn't be allowed to work at all in either direction.

Anyhow, I'll leave it up to you to disable it in both directions, let it go for now, add an option to allow communication in both directions, or choose something else. I'm still intuitively thinking that allowing this capability as an option would be useful. I'll defer to your guidance, however. Thank you again for all the help!
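For reference, the change amounts to a second entry in the guest's vif list in its /etc/xen config, in the same format as the configs attached earlier (the MAC addresses here are hypothetical):

~~~
# Two NICs: one on the provisioning vnet (vnet2), one on the generic
# vnet (vnet1) so the guest can reach satellite.example.com directly.
vif = [ "mac=00:16:3e:aa:bb:01,bridge=vnet2",
        "mac=00:16:3e:aa:bb:02,bridge=vnet1" ]
~~~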
Laine, can you have a look and see why we're not denying traffic in both directions as per comment #16?
Hugh - I assume you're interested in the behavior of the current code, not the ancient stuff referenced here?
We are rebasing libvirt in 5.6; that may fix this problem, as the rebase contains updates to the iptables support.
This request was evaluated by Red Hat Product Management for inclusion in the current release of Red Hat Enterprise Linux. Because the affected component is not scheduled to be updated in the current release, Red Hat is unfortunately unable to address this request at this time. Red Hat invites you to ask your support representative to propose this request, if appropriate and relevant, in the next release of Red Hat Enterprise Linux.
This request was erroneously denied for the current release of Red Hat Enterprise Linux. The error has been fixed and this request has been re-proposed for the current release.
Development Management has reviewed and declined this request. You may appeal this decision by reopening this request.
*** Bug 1064963 has been marked as a duplicate of this bug. ***