Bug 902632 - boot guest with 200 virtserialport (20 virtio-serial-pci), one or more virtserialport absence
Status: CLOSED WORKSFORME
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: qemu-kvm
Version: 7.0
Hardware: All
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: rc
Target Release: ---
Assigned To: Amit Shah
QA Contact: Virtualization Bugs
Depends On:
Blocks:
Reported: 2013-01-22 01:20 EST by yunpingzheng
Modified: 2014-03-03 19:13 EST
10 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2013-11-21 04:59:48 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:


Attachments
call trace file (302.45 KB, text/plain)
2013-11-20 06:59 EST, yunpingzheng
no flags Details
boot_start_script (3.36 KB, application/x-sh)
2013-11-20 07:01 EST, yunpingzheng
no flags Details

Description yunpingzheng 2013-01-22 01:20:24 EST
Description of problem:
When booting a guest with 200 virtserialports (20 virtio-serial-pci devices, 2 virtserialports on each bus), one or more virtserialports are absent (sometimes 1 virtserialport, sometimes more than 4).

During guest boot, the monitor reports errors like:

qemu-kvm: virtio-serial-bus: Unexpected port id 1304861984 for device virt-serial-3.0
qemu-kvm: virtio-serial-bus: Unexpected port id 1304829216 for device virt-serial-8.0
qemu-kvm: virtio-serial-bus: Unexpected port id 1304829216 for device virt-serial-11.0
qemu-kvm: virtio_pci_set_host_notifier_internal: unable to init event notifier: -24
qemu-kvm: virtio_pci_start_ioeventfd: failed. Fallback to a userspace (slower).
qemu-kvm: virtio_pci_set_host_notifier_internal: unable to init event notifier: -24
...
qemu-kvm: virtio-serial-bus: Unexpected port id 1304845600 for device virt-serial-20.0

Version-Release number of selected component (if applicable):
Host rhel7: 
kernel-3.7.0-0.31.el7.x86_64
qemu-img-1.3.0-3.el7.x86_64
guest:
rhel6.4
kernel:  kernel-2.6.32-355.el6.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Boot a guest with 200 virtserialports (20 virtio-serial-pci devices, 2 virtserialports on each bus).

2. During guest boot, qemu reports errors like:
  qemu-kvm: virtio-serial-bus: Unexpected port id 499473696 for device virt-serial-3.0

3. In the guest, some serial ports on the buses that reported errors are absent.


  
Actual results:
Some virtserialports are absent.
  
Expected results:
All virtserialports should exist.


Additional info:

qemu-cmd:
/usr/bin/qemu-kvm \
-name 'vm1' \
-nodefaults \
-m 4096 \
-smp 4,cores=2,threads=1,sockets=2 \
-vnc :22 \
-vga std \
-rtc base=utc,clock=host,driftfix=none \
-drive file=/root/qemu_kvm/RHEL-Server-6.4-64-virtio.qcow2,if=none,cache=none,id=virtio0 \
-device virtio-blk-pci,drive=virtio0 \
-device virtio-net-pci,netdev=id3Ibo2c,mac=9a:5e:5f:60:61:62 \
-netdev tap,id=id3Ibo2c,script=/root/qemu_kvm/qemu-ifup-switch \
-device ich9-usb-uhci1,id=usb1 \
-boot order=cdn,once=c,menu=off \
-enable-kvm \
-monitor stdio \
-device virtio-serial,id=virt-serial-1,max_ports=31,bus=pci.0 \
-chardev socket,id=virtio-serial-1-1,path=/tmp/virtio-serial-1-1,server,nowait \
-device virtserialport,chardev=virtio-serial-1-1,name=virtio.serial.1.1,bus=virt-serial-1.0,id=virtio-serial-port1-1 \
-chardev socket,id=virtio-serial-1-2,path=/tmp/virtio-serial-1-2,server,nowait \
-device virtserialport,chardev=virtio-serial-1-2,name=virtio.serial.1.2,bus=virt-serial-1.0,id=virtio-serial-port1-2 \
-device virtio-serial,id=virt-serial-2,max_ports=31,bus=pci.0 \
-chardev socket,id=virtio-serial-2-1,path=/tmp/virtio-serial-2-1,server,nowait \
-device virtserialport,chardev=virtio-serial-2-1,name=virtio.serial.2.1,bus=virt-serial-2.0,id=virtio-serial-port2-1 \
-chardev socket,id=virtio-serial-2-2,path=/tmp/virtio-serial-2-2,server,nowait \
-device virtserialport,chardev=virtio-serial-2-2,name=virtio.serial.2.2,bus=virt-serial-2.0,id=virtio-serial-port2-2 \

....

-device virtio-serial,id=virt-serial-20,max_ports=31,bus=pci.0 \
-chardev socket,id=virtio-serial-20-1,path=/tmp/virtio-serial-20-1,server,nowait \
-device virtserialport,chardev=virtio-serial-20-1,name=virtio.serial.20.1,bus=virt-serial-20.0,id=virtio-serial-port20-1 \
-chardev socket,id=virtio-serial-20-2,path=/tmp/virtio-serial-20-2,server,nowait \
-device virtserialport,chardev=virtio-serial-20-2,name=virtio.serial.20.2,bus=virt-serial-20.0,id=virtio-serial-port20-2


Host cpu
[root@localhost ~]# lscpu
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                8
On-line CPU(s) list:   0-7
Thread(s) per core:    2
Core(s) per socket:    4
Socket(s):             1
NUMA node(s):          1
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 42
Stepping:              7
CPU MHz:               1600.000
BogoMIPS:              6784.43
Virtualization:        VT-x
L1d cache:             32K
L1i cache:             32K
L2 cache:              256K
L3 cache:              8192K
NUMA node0 CPU(s):     0-7
Comment 1 Amit Shah 2013-11-14 08:02:40 EST
Can you please try with latest qemu and guest kernel?

Also, you mention 200 ports, but you're using 20 devices with 2 ports each, which is only 40 ports.  Unless you're using 10 ports per device, the count won't reach 200 ports.  Please clarify.
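The two interpretations give different totals; a quick check of the arithmetic:

```shell
# Port-count arithmetic for 20 virtio-serial-pci devices:
echo $(( 20 * 2 ))    # 2 ports per bus, as in the posted command line -> 40
echo $(( 20 * 10 ))   # 10 ports per bus, as in the later test loops  -> 200
```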
Comment 2 yunpingzheng 2013-11-20 06:58:31 EST
Hi Amit
When using the newest tree and booting the guest with multiple virtio-serial and virtio-console devices, the guest hits a call trace, and during guest boot the HMP monitor reports errors like:

qemu-kvm: virtio_pci_set_host_notifier_internal: unable to init event notifier: -24
qemu-kvm: virtio_pci_start_ioeventfd: failed. Fallback to a userspace (slower).


host: 
    kernel:  kernel-3.10.0-50.el7.x86_64
    qemu:    qemu-kvm-1.5.3-19.el7.x86_64
guest rhel7
    kernel: kernel-3.10.0-50.el7.x86_64
Comment 3 yunpingzheng 2013-11-20 06:59:48 EST
Created attachment 826587 [details]
call trace file

The call trace info is in the attachment.
Comment 4 yunpingzheng 2013-11-20 07:01:04 EST
Created attachment 826589 [details]
boot_start_script

The boot script is in the attachment.
Comment 5 Amit Shah 2013-11-21 02:29:21 EST
(In reply to yunpingzheng from comment #2)
> Hi Amit
> when i using the newest tree, boot the guest with multiple virtio serial and
> virtio console. the guest will call trace, during guest boot  hmp monitor
> will report error like:

The guest call trace is due to hvc.  Can you check with only virtserialports, and no virtconsole ports?
Comment 6 yunpingzheng 2013-11-21 03:30:46 EST
Hi Amit
Booting the guest with only virtserialports, the guest works OK, but during guest boot it still
throws messages link:

qemu-kvm: virtio_pci_set_host_notifier_internal: unable to init event notifier: -24
qemu-kvm: virtio_pci_start_ioeventfd: failed. Fallback to a userspace (slower).
Comment 7 yunpingzheng 2013-11-21 03:31:56 EST
typo: link --> like
Comment 8 Amit Shah 2013-11-21 03:48:21 EST
(In reply to yunpingzheng from comment #6)
> Hi Amit
> boot guest only with virtserialports guest works ok, but during guest boot
> still
> throw message link:

Do all 200 (or 40?) ports work fine?  Can you check you are really using 200 ports?

> qemu-kvm: virtio_pci_set_host_notifier_internal: unable to init event
> notifier: -24
> qemu-kvm: virtio_pci_start_ioeventfd: failed. Fallback to a userspace
> (slower).

This could be due to the large number of ports.  Does reducing the number help?
Comment 9 yunpingzheng 2013-11-21 04:44:55 EST
(In reply to Amit Shah from comment #8)
> (In reply to yunpingzheng from comment #6)
> > Hi Amit
> > boot guest only with virtserialports guest works ok, but during guest boot
> > still
> > throw message link:
> 
> Do all 200 (or 40?) ports work fine?  Can you check you are really using 200
> ports?
> 
Yes, all 200 serial ports work fine.
On the host, run:
     for i in `seq 20`; do for j in `seq 10`; do echo `echo $j+ $i*10|bc` | nc -U virtio-serial-$i-$j; done;done

In the guest, run:
    while true;  do  for i in `seq 20`; do for j in `seq 10`; do cat virtio.serial.$i.$j;true;   done;  done; done

All ports work.

> > qemu-kvm: virtio_pci_set_host_notifier_internal: unable to init event
> > notifier: -24
> > qemu-kvm: virtio_pci_start_ioeventfd: failed. Fallback to a userspace
> > (slower).
> 
> This could be due to the large number of ports.  Does reducing the number
> help?
When I boot the guest with 10 serial buses, each with 20 ports, it does not throw the message.
When I boot the guest with 20 serial buses, each with only 1 port, it does throw the message.

It seems reducing the number of buses helps.
Comment 10 Amit Shah 2013-11-21 04:59:48 EST
(In reply to yunpingzheng from comment #9)
> (In reply to Amit Shah from comment #8)
> > (In reply to yunpingzheng from comment #6)
> > > Hi Amit
> > > boot guest only with virtserialports guest works ok, but during guest boot
> > > still
> > > throw message link:
> > 
> > Do all 200 (or 40?) ports work fine?  Can you check you are really using 200
> > ports?
> > 
> yes, all 200 serial ports works find. 
> in host run:
>      for i in `seq 20`; do for j in `seq 10`; do echo `echo $j+ $i*10|bc` |
> nc -U virtio-serial-$i-$j; done;done
> 
> in guest run:
>     while true;  do  for i in `seq 20`; do for j in `seq 10`; do cat
> virtio.serial.$i.$j;true;   done;  done; done
> 
> all ports works.

OK, good.  You can open a new bug with the coredump you saw.  Please open it against the kernel package.

> > > qemu-kvm: virtio_pci_set_host_notifier_internal: unable to init event
> > > notifier: -24
> > > qemu-kvm: virtio_pci_start_ioeventfd: failed. Fallback to a userspace
> > > (slower).
> > 
> > This could be due to the large number of ports.  Does reducing the number
> > help?
> when i boot guest with 10 serial bus, each bus have 20 port, will not throw
> the message.
> when i boot guest whit 20 serial bus, each bus have only 1 port, will throw
> the message.
> 
> seems reduce the bus num help

Yes, -24 is EMFILE, which means the process hit its limit on open files.  Reducing the number of buses will help.  Since everything continues to work fine even then (just not as fast), and it's a case that no one will really use (even 3 virtio-serial buses sounds like overkill), we can safely close this bug.
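This can be sketched numerically.  A back-of-the-envelope estimate, assuming virtio-serial allocates 2 virtqueues per port plus 2 control queues and one eventfd per queue when ioeventfd is enabled (the exact accounting may differ):

```shell
# Assumption: queues per virtio-serial device = 2 * max_ports + 2.
# With max_ports=31 (as in the posted command line) that is 64 queues,
# each wanting one eventfd when ioeventfd is in use.
queues_per_dev=$(( 2 * 31 + 2 ))                      # 64
echo "20 buses: $(( 20 * queues_per_dev )) eventfds"  # 1280, above a 1024 fd limit
echo "10 buses: $(( 10 * queues_per_dev )) eventfds"  # 640, comfortably below
ulimit -n   # the shell's soft open-file limit; commonly 1024 by default
```

This lines up with comment 9: 20 buses trip EMFILE and fall back to userspace notification, while 10 buses do not.  Raising the qemu process's open-file limit before launch (e.g. with `ulimit -n`) would be an alternative workaround, assuming the hard limit permits it.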
