Bug 643831 - Guest kernel panic during bonding test with e1000 nic
Summary: Guest kernel panic during bonding test with e1000 nic
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Enterprise Linux 5
Classification: Red Hat
Component: kvm
Version: 5.6
Hardware: All
OS: Linux
Priority: low
Severity: low
Target Milestone: rc
Assignee: Michael S. Tsirkin
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Depends On: 643577
Blocks: Rhel5KvmTier2 640580
 
Reported: 2010-10-18 09:06 UTC by juzhang
Modified: 2013-01-09 23:15 UTC
CC: 10 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of: 643577
Environment:
Last Closed: 2011-08-02 15:18:18 UTC
Target Upstream Version:
Embargoed:


Attachments (Terms of Use)
guest crash screendump (843.76 KB, application/octet-stream)
2010-10-18 09:14 UTC, juzhang
no flags Details
guest crash screendump (843.76 KB, image/x-portable-pixmap)
2010-10-18 09:17 UTC, juzhang
no flags Details

Description juzhang 2010-10-18 09:06:35 UTC
RHEL5.6 host hit this issue too.
#rpm -qa | grep kvm
kvm-qemu-img-83-204.el5
kvm-debuginfo-83-204.el5
etherboot-roms-kvm-5.4.4-13.el5
kvm-tools-83-204.el5
kmod-kvm-83-204.el5
etherboot-zroms-kvm-5.4.4-13.el5
kvm-83-204.el5



+++ This bug was initially created as a clone of Bug #643577 +++

Description of problem:
I boot a RHEL6 guest with 4 e1000 NICs and set up bonding in the guest.
When I repeatedly bring some interfaces down and up, the guest kernel panics.

Version-Release number of selected component (if applicable):
host kernel: 2.6.32-71.3.1.el6_0.x86_64
guest kernel: 2.6.32-70.el6.x86_64
# rpm -qa |grep qemu
qemu-kvm-debuginfo-0.12.1.2-2.113.el6_0.1.x86_64
gpxe-roms-qemu-0.9.7-6.3.el6.noarch
qemu-img-0.12.1.2-2.113.el6_0.1.x86_64
qemu-kvm-tools-0.12.1.2-2.113.el6_0.1.x86_64
qemu-kvm-0.12.1.2-2.113.el6_0.1.x86_64

How reproducible:
always

Steps to Reproduce:
1. Start the guest with four e1000 NICs.
2. Set up bond0 in the guest with the bonding_setup.py script.
3. Run test.sh in the guest to repeatedly bring interfaces down and up.
4. Ping the guest from the host.

Actual results:
guest kernel panic

Expected results:
The guest does not panic and the ping packet loss ratio is 0.

Additional info:

1. scripts' content:

# cat bonding_setup.py
import os, re, commands
eth_nums = 0
for ename in ['eth0', 'eth1', 'eth2', 'eth3']:
    eth_config_file = "/etc/sysconfig/network-scripts/ifcfg-%s" % ename
    eth_config = """DEVICE=%s
USERCTL=no
ONBOOT=yes
MASTER=bond0
SLAVE=yes
BOOTPROTO=none
""" % ename
    file(eth_config_file,'w').write(eth_config)
bonding_config_file = "/etc/sysconfig/network-scripts/ifcfg-bond0"
bond_config = """DEVICE=bond0
BOOTPROTO=dhcp
ONBOOT=yes
USERCTL=no
"""
file(bonding_config_file, "w").write(bond_config)
os.system("modprobe bonding")
os.system("service network restart")
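
A Python 3 equivalent of bonding_setup.py above may be handy on newer guests, since file() and the commands module are Python 2 only. This is a sketch: the sysconfig path and slave names are taken from the script above, and the config directory is parameterized so the writer can be exercised outside a real guest.

```python
# Python 3 sketch of bonding_setup.py: write ifcfg files for the four
# slave NICs and the bond0 master. confdir is parameterized so the
# writer can be tested against a temporary directory.
import os

SLAVE_TEMPLATE = """DEVICE={name}
USERCTL=no
ONBOOT=yes
MASTER=bond0
SLAVE=yes
BOOTPROTO=none
"""

BOND_CONFIG = """DEVICE=bond0
BOOTPROTO=dhcp
ONBOOT=yes
USERCTL=no
"""

def write_bonding_configs(confdir="/etc/sysconfig/network-scripts",
                          slaves=("eth0", "eth1", "eth2", "eth3")):
    # One ifcfg file per slave, then the bond0 master config.
    for name in slaves:
        with open(os.path.join(confdir, "ifcfg-" + name), "w") as f:
            f.write(SLAVE_TEMPLATE.format(name=name))
    with open(os.path.join(confdir, "ifcfg-bond0"), "w") as f:
        f.write(BOND_CONFIG)

# In the guest one would then run, as in the original script:
#   write_bonding_configs()
#   os.system("modprobe bonding")
#   os.system("service network restart")
```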

# cat test.sh  
while true;do
ifconfig bond0 down;ifconfig bond0 up
ifconfig eth2 down;ifconfig eth2 up
ifconfig eth3 down;ifconfig eth3 up
ifconfig eth4 down;ifconfig eth4 up
done
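
For scripted runs, the infinite loop in test.sh above can be expressed as a bounded flap helper. A Python sketch, assuming the same ifconfig calls as the shell script; the injectable `runner` parameter is an addition for dry-run testing:

```python
# Bounded sketch of test.sh: flap each interface down/up for a fixed
# number of rounds instead of looping forever. `runner` defaults to
# os.system; a no-op callable can be injected for dry runs.
import os

def flap_interfaces(ifaces, rounds, runner=os.system):
    issued = []
    for _ in range(rounds):
        for iface in ifaces:
            for state in ("down", "up"):
                cmd = "ifconfig %s %s" % (iface, state)
                issued.append(cmd)
                runner(cmd)
    return issued

# In the guest, matching the script above:
#   flap_interfaces(["bond0", "eth2", "eth3", "eth4"], 1000)
```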

2. qemu-kvm commandline:
# qemu-kvm -name 'vm1' -chardev socket,id=human_monitor_WU5C,path=/tmp/monitor-humanmonitor1-20101011-212704-VvdR,server,nowait -mon chardev=human_monitor_WU5C,mode=readline -chardev socket,id=serial_hThH,path=/tmp/serial-20101011-212704-VvdR,server,nowait -device isa-serial,chardev=serial_hThH -drive file='/home/devel/autotest-devel/client/tests/kvm/images/RHEL-Server-6.0-64-virtio.qcow2',index=0,if=none,id=drive-virtio-disk1,media=disk,cache=none,snapshot=on,boot=on,format=qcow2,aio=native -device virtio-blk-pci,bus=pci.0,addr=0x4,drive=drive-virtio-disk1,id=virtio-disk1 -device e1000,netdev=idOCJBNn,id=ndev00idOCJBNn,mac='02:A9:7C:6C:1a:90',bus=pci.0,addr=0x3 -netdev tap,id=idOCJBNn,ifname='e1000_0_8000',script='/home/devel/autotest-devel/client/tests/kvm/scripts/qemu-ifup-switch',downscript='no' -device e1000,netdev=id0zt7sU,id=ndev00id0zt7sU,mac='02:A9:7C:6C:54:f6',bus=pci.0,addr=0x5 -netdev tap,id=id0zt7sU,ifname='e1000_1_8000',script='/home/devel/autotest-devel/client/tests/kvm/scripts/qemu-ifup-switch',downscript='no' -device e1000,netdev=idjrlnQd,id=ndev00idjrlnQd,mac='02:A9:7C:6C:d6:6b',bus=pci.0,addr=0x6 -netdev tap,id=idjrlnQd,ifname='e1000_2_8000',script='/home/devel/autotest-devel/client/tests/kvm/scripts/qemu-ifup-switch',downscript='no' -device e1000,netdev=idVEUn79,id=ndev00idVEUn79,mac='02:A9:7C:6C:c5:69',bus=pci.0,addr=0x7 -netdev tap,id=idVEUn79,ifname='e1000_3_8000',script='/home/devel/autotest-devel/client/tests/kvm/scripts/qemu-ifup-switch',downscript='no' -m 2048 -smp 2 -cpu cpu64-rhel6,+sse2,+x2apic -vnc :0 -spice port=8000,disable-ticketing -vga qxl -rtc base=utc,clock=host,driftfix=none -M rhel6.0.0 -usbdevice tablet -no-kvm-pit-reinjection -enable-kvm 

3. call trace:
BUG: unable to handle kernel paging request at 00000001000000e4
IP: [<ffffffff8140552e>] consume_skb+0xe/0x40
PGD 7ac9e067 PUD 0 
Oops: 0000 [#1] SMP 
last sysfs file: /sys/devices/virtio-pci/virtio0/block/vda/dev
CPU 1 
Modules linked in: sit tunnel4 bonding virtio_balloon ipv6 dm_mirror dm_region_hash dm_log ppdev parport_pc parport e1000 i2c_piix4 i2c_core sg ext4 mbcache jbd2 virtio_blk sr_mod cdrom virtio_pci virtio_ring virtio pata_acpi ata_generic ata_piix dm_mod [last unloaded: speedstep_lib]

Modules linked in: sit tunnel4 bonding virtio_balloon ipv6 dm_mirror dm_region_hash dm_log ppdev parport_pc parport e1000 i2c_piix4 i2c_core sg ext4 mbcache jbd2 virtio_blk sr_mod cdrom virtio_pci virtio_ring virtio pata_acpi ata_generic ata_piix dm_mod [last unloaded: speedstep_lib]
Pid: 3669, comm: ifconfig Not tainted 2.6.32-70.el6.x86_64 #1 KVM
RIP: 0010:[<ffffffff8140552e>]  [<ffffffff8140552e>] consume_skb+0xe/0x40
RSP: 0018:ffff88007b22fc38  EFLAGS: 00010206
RAX: 0000000000000246 RBX: ffffc900028da028 RCX: 0000000000000000
RDX: 0000000000000001 RSI: 0000000000000000 RDI: 0000000100000000
RBP: ffff88007b22fc38 R08: ffffc90001245400 R09: 0000000000000002
R10: 00000000000000c5 R11: 000000000e000000 R12: ffff880079b406c0
R13: 0000000000000002 R14: 0000000000000001 R15: ffff88007965b600
FS:  00007ffa26c017a0(0000) GS:ffff880001f00000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
CR2: 00000001000000e4 CR3: 000000007a6d3000 CR4: 00000000000006e0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
Process ifconfig (pid: 3669, threadinfo ffff88007b22e000, task ffff88007a2180c0)
Stack:
 ffff88007b22fc48 ffffffff8141082d ffff88007b22fc68 ffffffffa0134a5a
<0> ffff880079b406c0 ffff88007ab462c0 ffff88007b22fc98 ffffffffa0134ac3
<0> ffff88007b22fc98 ffff880079b406c0 0000000000000001 ffff880079b40900
Call Trace:
 [<ffffffff8141082d>] dev_kfree_skb_any+0x3d/0x50
 [<ffffffffa0134a5a>] e1000_unmap_and_free_tx_resource+0x6a/0x90 [e1000]
 [<ffffffffa0134ac3>] e1000_clean_tx_ring+0x43/0xb0 [e1000]
 [<ffffffffa0138cd6>] e1000_down+0x1d6/0x230 [e1000]
 [<ffffffffa01397bb>] e1000_close+0x2b/0xe0 [e1000]
 [<ffffffff814112e1>] dev_close+0x71/0xc0
 [<ffffffff81410cd1>] dev_change_flags+0xa1/0x1d0
 [<ffffffff81470b2b>] devinet_ioctl+0x5eb/0x690
 [<ffffffff81471b78>] inet_ioctl+0x88/0xa0
 [<ffffffff813fcdaa>] sock_ioctl+0x7a/0x280
 [<ffffffff8117f182>] vfs_ioctl+0x22/0xa0
 [<ffffffff8117f324>] do_vfs_ioctl+0x84/0x580
 [<ffffffff8117f8a1>] sys_ioctl+0x81/0xa0
 [<ffffffff81013172>] system_call_fastpath+0x16/0x1b
Code: d0 01 00 00 0f 94 c0 84 c0 74 b9 48 8b 3d cb 0b 4a 00 48 89 de e8 23 22 d5 ff eb a8 90 55 48 89 e5 0f 1f 44 00 00 48 85 ff 74 10 <8b> 87 e4 00 00 00 83 f8 01 75 07 e8 42 ff ff ff c9 c3 f0 ff 8f 
RIP  [<ffffffff8140552e>] consume_skb+0xe/0x40
 RSP <ffff88007b22fc38>
CR2: 00000001000000e4
---[ end trace b08f37fcaed76457 ]---
Kernel panic - not syncing: Fatal exception

Message from syslogd@virtlab-66-85-153 at Oct 16 11:31:20 ...
 kernel:Oops: 0000 [#1] SMP
 kernel:Pid: 3669, comm: ifconfig Tainted: G      D    ----------------  2.6.32-70.el6.x86_64 #1

Message from syslogd@virtlab-66-85-153 at Oct 16 11:31:20 ...
 kernel:last sysfs file: /sys/devices/virtio-pci/virtio0/block/vda/dev

Message from syslogd@virtlab-66-85-153 at Oct 16 11:31:20 ...
 kernel:Stack:

Message from syslogd@virtlab-66-85-153 at Oct 16 11:31:20 ...
 kernel:Call Trace:

Message from syslogd@virtlab-66-85-153 at Oct 16 11:31:20 ...
 kernel:Code: d0 01 00 00 0f 94 c0 84 c0 74 b9 48 8b 3d cb 0b 4a 00 48 89 de e8 23 22 d5 ff eb a8 90 55 48 89 e5 0f 1f 44 00 00 48 85 ff 74 10 <8b> 87 e4 00 00 00 83 f8 01 75 07 e8 42 ff ff ff c9 c3 f0 ff 8f

Message from syslogd@virtlab-66-85-153 at Oct 16 11:31:20 ...
 kernel:CR2: 00000001000000e4

Message from syslogd@virtlab-66-85-153 at Oct 16 11:31:20 ...
 kernel:Kernel panic - not syncing: Fatal exception

Call Trace:
 [<ffffffff814c7b23>] panic+0x78/0x137
 [<ffffffff814cbbf4>] oops_end+0xe4/0x100
 [<ffffffff8104651b>] no_context+0xfb/0x260
 [<ffffffff810467a5>] __bad_area_nosemaphore+0x125/0x1e0
 [<ffffffff814c8286>] ? thread_return+0x4e/0x778
 [<ffffffff810468ce>] bad_area+0x4e/0x60
 [<ffffffff814cd740>] do_page_fault+0x390/0x3a0
 [<ffffffff814caf45>] page_fault+0x25/0x30
 [<ffffffff8140552e>] ? consume_skb+0xe/0x40
 [<ffffffff8141082d>] dev_kfree_skb_any+0x3d/0x50
 [<ffffffffa0134a5a>] e1000_unmap_and_free_tx_resource+0x6a/0x90 [e1000]
 [<ffffffffa0134ac3>] e1000_clean_tx_ring+0x43/0xb0 [e1000]
 [<ffffffffa0138cd6>] e1000_down+0x1d6/0x230 [e1000]
 [<ffffffffa01397bb>] e1000_close+0x2b/0xe0 [e1000]
 [<ffffffff814112e1>] dev_close+0x71/0xc0
 [<ffffffff81410cd1>] dev_change_flags+0xa1/0x1d0
 [<ffffffff81470b2b>] devinet_ioctl+0x5eb/0x690
 [<ffffffff81471b78>] inet_ioctl+0x88/0xa0
 [<ffffffff813fcdaa>] sock_ioctl+0x7a/0x280
 [<ffffffff8117f182>] vfs_ioctl+0x22/0xa0
 [<ffffffff8117f324>] do_vfs_ioctl+0x84/0x580
 [<ffffffff8117f8a1>] sys_ioctl+0x81/0xa0
 [<ffffffff81013172>] system_call_fastpath+0x16/0x1b

Comment 2 juzhang 2010-10-18 09:14:27 UTC
Created attachment 454053 [details]
guest crash screendump

Comment 3 juzhang 2010-10-18 09:17:24 UTC
Created attachment 454054 [details]
guest crash screendump

Comment 4 Michael S. Tsirkin 2010-10-18 09:21:14 UTC
Can you get a kdump instead, please, so we can analyse it with crash?

Comment 8 juzhang 2010-10-18 11:45:53 UTC
(In reply to comment #4)
> Can you get a kdump instead, please, so we can analyse it with crash?

http://fileshare.englab.nay.redhat.com/pub/kvm/akong/vmcore-bz643831

Snippet from the vmcore:

#crash /usr/lib/debug/lib/modules/2.6.32-71.el6.x86_64/vmlinux vmcore


This GDB was configured as "x86_64-unknown-linux-gnu"...

      KERNEL: /usr/lib/debug/lib/modules/2.6.32-71.el6.x86_64/vmlinux
    DUMPFILE: vmcore  [PARTIAL DUMP]
        CPUS: 2
        DATE: Mon Oct 18 08:18:29 2010
      UPTIME: 00:03:59
LOAD AVERAGE: 0.30, 0.41, 0.21
       TASKS: 167
    NODENAME: dhcp-91-78.nay.redhat.com
     RELEASE: 2.6.32-71.el6.x86_64
     VERSION: #1 SMP Wed Sep 1 01:33:01 EDT 2010
     MACHINE: x86_64  (2826 Mhz)
      MEMORY: 4 GB
       PANIC: "Oops: 0000 [#1] SMP " (check log for details)
         PID: 8569
     COMMAND: "ifconfig"
        TASK: ffff88013809f520  [THREAD_INFO: ffff880139ac2000]
         CPU: 0
       STATE: TASK_RUNNING (PANIC)



crash> bt
PID: 8569   TASK: ffff88013809f520  CPU: 0   COMMAND: "ifconfig"
 #0 [ffff880028203a90] machine_kexec at ffffffff8103695b
 #1 [ffff880028203af0] crash_kexec at ffffffff810b8f08
 #2 [ffff880028203bc0] oops_end at ffffffff814cbbd0
 #3 [ffff880028203bf0] no_context at ffffffff8104651b
 #4 [ffff880028203c40] __bad_area_nosemaphore at ffffffff810467a5
 #5 [ffff880028203c90] bad_area_nosemaphore at ffffffff81046873
 #6 [ffff880028203ca0] do_page_fault at ffffffff814cd658
 #7 [ffff880028203cf0] page_fault at ffffffff814caf45
    [exception RIP: e1000_clean+274]
    RIP: ffffffffa0121442  RSP: ffff880028203da0  RFLAGS: 00010246
    RAX: ffff880139b5f000  RBX: 0000000000000000  RCX: ffff880139b5f000
    RDX: 0000000000000000  RSI: ffffc90001eae000  RDI: ffff880133a348f0
    RBP: ffff880028203e60   R8: ffff8800282141c0   R9: 0000000000000000
    R10: 000000000000803b  R11: ffffffff8172fa80  R12: 0000000000000000
    R13: ffff880137709e40  R14: 0000000000000000  R15: 0000000000000001
    ORIG_RAX: ffffffffffffffff  CS: 0010  SS: 0018
 #8 [ffff880028203e68] net_rx_action at ffffffff8140fe53
 #9 [ffff880028203ec8] __do_softirq at ffffffff81073bd7
#10 [ffff880028203f38] call_softirq at ffffffff810142cc
#11 [ffff880028203f50] do_softirq at ffffffff81015f35
#12 [ffff880028203f70] irq_exit at ffffffff810739d5
#13 [ffff880028203f80] do_IRQ at ffffffff814cf915
--- <IRQ stack> ---
#14 [ffff880139ac3c38] ret_from_intr at ffffffff81013ad3
    [exception RIP: e1000_open+239]
    RIP: ffffffffa01253ef  RSP: ffff880139ac3ce8  RFLAGS: 00010246
    RAX: 0000000000000000  RBX: ffff880139ac3d08  RCX: ffffc90001040000
    RDX: 0000000000000004  RSI: 0000000000000246  RDI: 0000000000000246
    RBP: ffffffff81013ace   R8: 0000000000000000   R9: 0000000000000026
    R10: 000000000000803b  R11: ffffffff8172fa80  R12: ffff880100000000
    R13: ffffffff8172fa80  R14: ffffffff810d9f64  R15: ffff880139ac3cb8
    ORIG_RAX: ffffffffffffffc4  CS: 0010  SS: 0018
#15 [ffff880139ac3d10] dev_open at ffffffff814115a1
#16 [ffff880139ac3d30] dev_change_flags at ffffffff81410cd1
#17 [ffff880139ac3d70] devinet_ioctl at ffffffff81470b2b
#18 [ffff880139ac3e20] inet_ioctl at ffffffff81471b78
#19 [ffff880139ac3e30] sock_ioctl at ffffffff813fcdaa
#20 [ffff880139ac3e60] vfs_ioctl at ffffffff8117f182
#21 [ffff880139ac3ea0] do_vfs_ioctl at ffffffff8117f324
#22 [ffff880139ac3f30] sys_ioctl at ffffffff8117f8a1
#23 [ffff880139ac3f80] system_call_fastpath at ffffffff81013172
    RIP: 00007f9af550d5f7  RSP: 00007fffd3e0a258  RFLAGS: 00010206
    RAX: 0000000000000010  RBX: ffffffff81013172  RCX: 0000000000000062
    RDX: 00007fffd3e0ad30  RSI: 0000000000008914  RDI: 0000000000000004
    RBP: 00007fffd3e0ae40   R8: 000000000000000a   R9: 000000000000000a
    R10: 00007fffd3e0aab0  R11: 0000000000000202  R12: 000000000060fc40
    R13: 00007fffd3e0b020  R14: 0000000000000041  R15: 00007fffd3e0ae40
    ORIG_RAX: 0000000000000010  CS: 0033  SS: 002b

Comment 9 Dor Laor 2010-10-20 10:59:28 UTC
I lowered the priority since it is not a common use case.
Are all the host backend taps connected to different hosts? Make sure there are no L2 loops.

Comment 10 juzhang 2010-10-21 02:53:08 UTC
(In reply to comment #9)
> I lowered the priority since it is not a common use case.
> Are all the host backend taps connected to different hosts? Make sure there
> are no L2 loops.
Just one host: boot a guest with 4 virtual e1000 NICs, then run the bonding test. The detailed command line on the RHEL5.6 host follows.

RHEL5.6 CML:
/usr/libexec/qemu-kvm -no-hpet -usbdevice tablet -rtc-td-hack -m 4G -smp 2 -monitor stdio -drive file=/root/zhangjunyi/rhel6.0_64.qcow2,if=virtio,boot=on,werror=stop -drive file=/root/zhangjunyi/boot.iso,media=cdrom -fda /usr/share/virtio-win/virtio-drivers-1.0.0-45801-1.0.0.vfd -net nic,vlan=0,macaddr=22:11:22:45:66:83,model=e1000 -net tap,vlan=0,script=/etc/qemu-ifup -uuid `uuidgen` -cpu qemu64,+sse2 -balloon none -boot c  -vnc :10 -notify all -net nic,macaddr=10:10:20:34:23:13,model=e1000,vlan=1 -net tap,script=/etc/qemu-ifup,vlan=1 -net nic,macaddr=10:10:20:34:23:14,model=e1000,vlan=2 -net tap,script=/etc/qemu-ifup,vlan=2 -net nic,macaddr=10:10:20:34:23:15,model=e1000,vlan=3 -net tap,script=/etc/qemu-ifup,vlan=3

Comment 11 Dor Laor 2010-10-21 09:18:45 UTC
My question is: are all those taps connected to the same bridge on the host?
My hunch is that they are, and that's wrong.
What's the content of /etc/qemu-ifup and the output of brctl show?

Comment 12 juzhang 2010-10-21 10:12:34 UTC
(In reply to comment #11)
> My question is: are all those taps connected to the same bridge on the host?
> My hunch is that they are, and that's wrong.
> What's the content of /etc/qemu-ifup and the output of brctl show?

On host
#brctl show
bridge name	bridge id		STP enabled	interfaces
breth0		8000.0023ae7a6f2e	no		tap3
							tap2
							tap1
							tap0
							eth0
virbr0		8000.000000000000	yes	
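
For reference, output in the brctl show format above can be checked programmatically for which interfaces share a bridge (the L2-loop concern from comment #9). A sketch, assuming the classic bridge-utils column layout:

```python
# Sketch: parse `brctl show` output into {bridge: [interfaces]} so one
# can spot-check which taps share a bridge. Assumes the classic
# bridge-utils layout: a header row, one row per bridge with an optional
# first interface in the fourth column, and indented continuation rows
# holding additional interfaces.
def parse_brctl_show(text):
    bridges = {}
    current = None
    for line in text.splitlines()[1:]:          # skip the header row
        if not line.strip():
            continue
        if not line.startswith((" ", "\t")):    # new bridge row
            fields = line.split()
            current = fields[0]
            bridges[current] = fields[3:]       # first interface, if any
        elif current is not None:               # continuation row
            bridges[current].extend(line.split())
    return bridges
```

Against the output above this reports tap0-tap3 plus eth0 all on breth0, i.e. the single-bridge setup confirmed as fine in comment #13.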

#cat /etc/qemu-ifup
#!/bin/sh
switch=breth0
/sbin/ifconfig $1 0.0.0.0 up
/usr/sbin/brctl addif ${switch} $1


#ifconfig
breth0    Link encap:Ethernet  HWaddr 00:23:AE:7A:6F:2E  
          inet addr:10.66.91.91  Bcast:10.66.91.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:22150182 errors:0 dropped:0 overruns:0 frame:0
          TX packets:4592948 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:30034121520 (27.9 GiB)  TX bytes:2749706908 (2.5 GiB)

eth0      Link encap:Ethernet  HWaddr 00:23:AE:7A:6F:2E  
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:23444522 errors:0 dropped:0 overruns:0 frame:0
          TX packets:6024239 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:31896407961 (29.7 GiB)  TX bytes:2888201680 (2.6 GiB)
          Interrupt:58 Memory:febe0000-fec00000 

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:1046954 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1046954 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:1425116516 (1.3 GiB)  TX bytes:1425116516 (1.3 GiB)

tap0      Link encap:Ethernet  HWaddr 16:79:FD:23:5B:3B  
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:21 errors:0 dropped:0 overruns:0 frame:0
          TX packets:332 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:500 
          RX bytes:3049 (2.9 KiB)  TX bytes:56619 (55.2 KiB)

tap1      Link encap:Ethernet  HWaddr C2:5F:7A:7A:F0:30  
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:14 errors:0 dropped:0 overruns:0 frame:0
          TX packets:339 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:500 
          RX bytes:2925 (2.8 KiB)  TX bytes:56743 (55.4 KiB)

tap2      Link encap:Ethernet  HWaddr 9A:CD:EE:B9:2C:81  
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:15 errors:0 dropped:0 overruns:0 frame:0
          TX packets:338 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:500 
          RX bytes:3141 (3.0 KiB)  TX bytes:56217 (54.8 KiB)

tap3      Link encap:Ethernet  HWaddr 8A:6F:A3:F6:54:37  
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:13 errors:0 dropped:0 overruns:0 frame:0
          TX packets:339 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:500 
          RX bytes:2731 (2.6 KiB)  TX bytes:56565 (55.2 KiB)

virbr0    Link encap:Ethernet  HWaddr 00:00:00:00:00:00  
          inet addr:192.168.122.1  Bcast:192.168.122.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:32529 errors:0 dropped:0 overruns:0 frame:0
          TX packets:294001 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:1824078 (1.7 MiB)  TX bytes:432896202 (412.8 MiB)

Comment 13 Dor Laor 2010-10-21 10:34:29 UTC
Ok, it looks fine, thanks for the info

Comment 15 RHEL Program Management 2011-01-11 20:16:40 UTC
This request was evaluated by Red Hat Product Management for
inclusion in the current release of Red Hat Enterprise Linux.
Because the affected component is not scheduled to be updated in the
current release, Red Hat is unfortunately unable to address this
request at this time. Red Hat invites you to ask your support
representative to propose this request, if appropriate and relevant,
in the next release of Red Hat Enterprise Linux.

Comment 16 RHEL Program Management 2011-01-11 22:53:51 UTC
This request was erroneously denied for the current release of
Red Hat Enterprise Linux.  The error has been fixed and this
request has been re-proposed for the current release.

