Description of problem:
Needed to run a RHEL4.8 instance on newer, unsupported hardware, so decided to virtualize it. Set up a RHEL5.5 KVM hypervisor host with two bridge devices: br0 connected to eth0, for regular TCP/IP access to the system, and br1 connected to eth3, to allow the guest to do network sniffing. The hardware was an HP ProLiant DL385 G7 with Broadcom network cards, using the bnx2 driver.
The guest was installed and configured to use virtio-net for both bridged connections. Tcpdump on the monitor connection (br1) did not return the expected traffic, only a few multicast packets.
Tried VirtualBox using the same bridge connections and was able to capture packets with tcpdump as expected.
Tried the KVM guest again with the e1000 driver; that didn't work. Tried again with the ne2k_pci driver and was able to get correct tcpdump data.
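As a minimal sketch of how the guest-side capture was compared against the raw interface, assuming the interface names from this report (eth1 in the guest, eth3 on the host): the run() helper only echoes the commands (dry run); remove it to execute them for real.

```shell
#!/bin/sh
# Dry-run sketch: compare what the guest's monitor NIC sees against the
# host's raw interface. Interface names are assumptions from the report.
MON_IF=eth1                    # guest view of the br1/eth3 monitor port
run() { echo "+ $*"; }         # dry run; delete to really execute

# Inside the guest: capture everything, no name resolution, show link headers
run tcpdump -i "$MON_IF" -nn -e

# On the host: a count-limited capture on the raw NIC for comparison
run tcpdump -i eth3 -nn -c 100
```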
Version-Release number of selected component (if applicable):
Steps to Reproduce:
1. Install a RHEL5.5 host with KVM as hypervisor.
2. Set up two bridged connections, one without an IP address.
3. Install a RHEL4.8 guest using two virtio net connections.
4. Try to sniff traffic on the bridge without an IP address assigned; not all packets will be seen.
5. Switching the second connection to ne2k_pci hardware allows tcpdump to collect accurate packets.
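Steps 1-2 above can be sketched as follows, using the device names from the description (br0/eth0 for TCP/IP, br1/eth3 for monitoring). This is a dry run: the run() helper echoes each command instead of executing it, since the real commands need root and the actual hardware.

```shell
#!/bin/sh
# Dry-run sketch of the two-bridge setup described in this report.
run() { echo "+ $*"; }         # dry run; delete to really execute

# br0: normal TCP/IP bridge on eth0 (address via ifcfg-br0 or DHCP)
run brctl addbr br0
run brctl addif br0 eth0
run ifconfig br0 up

# br1: monitor bridge on eth3, no IP address, NIC in promiscuous mode
run brctl addbr br1
run brctl addif br1 eth3
run ifconfig eth3 0.0.0.0 promisc up
run ifconfig br1 up
```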
Would you please provide kvm version via "rpm -qa | grep kvm"?
version is 83-164.el5
Update on this issue: the ne2k_pci driver is better, but we are still seeing only about half the packets that we see on the raw interface, or that are seen in VirtualBox.
I think tcpdump on the bridge is not what you want. Since you say br1, is "tcpdump -i <bridge>" what you do? I think that only dumps packets destined for the bridge device itself; tcpdump should be done on the tap interface instead. If what you do is "tcpdump -i <bridge>", I'm surprised ne2k_pci somehow makes a difference.
Could you please supply:
- the qemu command line with ne2k_pci and with e1000
- "ifconfig -a", "brctl show", "brctl showmacs br0", and "brctl showmacs br1" on the host
- "ifconfig -a" on the guest in both cases
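The requested diagnostics, plus a capture on the tap port rather than the bridge device, can be sketched like this. "vnet0" is an assumed tap name — check "brctl show br1" on the real host for the actual one. Again a dry run via the run() helper; remove it to execute on the host.

```shell
#!/bin/sh
# Dry-run sketch: collect the host-side state requested above, then
# capture on the guest's tap port instead of the bridge itself.
run() { echo "+ $*"; }         # dry run; delete to really execute

run ifconfig -a
run brctl show
run brctl showmacs br0
run brctl showmacs br1

# Assumption: vnet0 is the tap listed under br1 in "brctl show" output
TAP=vnet0
run tcpdump -i "$TAP" -nn
```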
Correction to the previous post: virtio-net, not e1000, as that is what the question is about.
Unfortunately, I don't have access to the system to run further tests, as we went into production with the VirtualBox solution. I'll try to see if I can get some time on the qual system, reconfigured for testing. But some clarifications:
We run tcpdump inside the virtual guest, where the interface shows up as eth1; on the hypervisor it is eth3, bridged as br1.
ne2k_pci and virtio were configured via virt-manager; where should I look for the qemu command line?
The qemu command line can be found in the logs under /var/log/libvirt/qemu/
to Comment 7:
ps -elf will give you the command line
Tcpdump in the *guest* does not give you any data? But networking works, right? It just doesn't show up in tcpdump?
Could you help check whether this bug can be reproduced with the latest kvm on your host? Please try to answer the questions in Comment #10. Thanks
This bug can't be reproduced with e1000 or ne2k_pci NICs.
# uname -r
# rpm -qa|grep kvm
1. Use two physical NICs to create two bridge devices (switch and switch1) on the host, and don't assign an IP address to bridge switch1.
2. Start the guest with e1000 or ne2k_pci NICs:
/usr/libexec/qemu-kvm -M rhel5.5.0 -m 4096 -smp 4,sockets=4,cores=1,threads=1 \
    -name RHEL5u7 -uuid 13bd47ff-7458-a214-9c43-d311ed5ca5a3 -monitor stdio \
    -no-kvm-pit-reinjection -boot c \
    -drive file=test,if=virtio,format=qcow2,cache=none,boot=on \
    -net nic,macaddr=54:52:00:52:ed:62,vlan=0,model=e1000 \
    -net tap,script=/etc/qemu-ifup,downscript=no,vlan=0 \
    -net nic,macaddr=00:1B:21:66:2A:63,vlan=1,model=e1000 \
    -net tap,script=/etc/qemu-ifup1,downscript=no,vlan=1 \
    -vnc :1 -balloon none -notify all -no-hpet -soundhw ac97
3. Execute "ping -I eth0 <gateway-ip>" in the guest and "tcpdump -i switch" on the host.
Bridge switch sees the ICMP packets from the guest:
IP 10.66.11.123 > 10.66.11.254: ICMP echo request, id 25627, seq 19, length 64
4. Execute "ping -I eth1 <gateway-ip>" in the guest and "tcpdump -i switch1" on the host.
Bridge switch1 sees the ICMP packets from the guest too:
IP 10.66.11.234 > 10.66.11.254: ICMP echo request, id 25883, seq 20, length 64
Notes:
1. If the guest is started with a ne2k_pci NIC, the result is the same.
2. Bridge switch and switch1 info:
switch Link encap:Ethernet HWaddr 00:24:21:7F:B7:F9
inet addr:10.66.9.97 Bcast:10.66.11.255 Mask:255.255.252.0
inet6 addr: fe80::224:21ff:fe7f:b7f9/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:778338 errors:0 dropped:0 overruns:0 frame:0
TX packets:984733 errors:0 dropped:0 overruns:0 carrier:0
RX bytes:48241888 (46.0 MiB) TX bytes:700105641 (667.6 MiB)
switch1 Link encap:Ethernet HWaddr 00:1B:21:66:2A:ED
inet6 addr: fe80::21b:21ff:fe66:2aed/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:124814 errors:0 dropped:0 overruns:0 frame:0
TX packets:16 errors:0 dropped:0 overruns:0 carrier:0
RX bytes:6516501 (6.2 MiB) TX bytes:3500 (3.4 KiB)
Confirmed with FuXiangChun: this bz also could not be reproduced with a virtio NIC. I got the same result as comment #12, so closing this bug as WORKSFORME.