Bug 902632
Summary: boot guest with 200 virtserialports (20 virtio-serial-pci); one or more virtserialports absent

| | | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 7 | Reporter: | yunpingzheng <yunzheng> |
| Component: | qemu-kvm | Assignee: | Amit Shah <amit.shah> |
| Status: | CLOSED WORKSFORME | QA Contact: | Virtualization Bugs <virt-bugs> |
| Severity: | medium | Docs Contact: | |
| Priority: | medium | | |
| Version: | 7.0 | CC: | acathrow, flang, hhuang, juzhang, michen, qzhang, rhod, sluo, virt-maint, yunzheng |
| Target Milestone: | rc | | |
| Target Release: | --- | | |
| Hardware: | All | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2013-11-21 09:59:48 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Attachments: | | | |
Description
yunpingzheng
2013-01-22 06:20:24 UTC
Comment 1 (Amit Shah)

Can you please try with the latest qemu and guest kernel? Also, you mention 200 ports, but you're using 20 devices with 2 ports each, which is only 40 ports. Unless you're using 10 ports per device, the count won't reach 200. Please clarify.

Comment 2 (yunpingzheng)

Hi Amit,
When I use the newest tree and boot the guest with multiple virtio-serial and virtio-console devices, the guest emits a call trace, and during guest boot the HMP monitor reports errors like:

    qemu-kvm: virtio_pci_set_host_notifier_internal: unable to init event notifier: -24
    qemu-kvm: virtio_pci_start_ioeventfd: failed. Fallback to a userspace (slower).

host kernel: kernel-3.10.0-50.el7.x86_64
qemu: qemu-kvm-1.5.3-19.el7.x86_64
guest (rhel7) kernel: kernel-3.10.0-50.el7.x86_64

Comment 3 (yunpingzheng)

Created attachment 826587 [details]: call trace file
The call trace info is in the attachment.

Comment 4 (yunpingzheng)

Created attachment 826589 [details]: boot_start_script
The boot script is in the attachment.
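The attached boot script is not reproduced in this report, but the device layout under test (20 virtio-serial-pci controllers with 10 virtserialports each, 200 ports total) can be sketched as below. The device IDs, port names, and chardev socket paths are illustrative, modeled on the names used later in the thread; the real command line is in attachment 826589. The chardev syntax shown (`server,nowait`) is the form used by qemu-kvm 1.5.

```shell
#!/bin/sh
# Generate qemu-kvm options for 20 virtio-serial buses with 10 ports
# per bus (200 virtserialports total). IDs and paths are hypothetical.
OPTS=""
for i in $(seq 20); do
    OPTS="$OPTS -device virtio-serial-pci,id=virtio-serial-$i"
    for j in $(seq 10); do
        OPTS="$OPTS -chardev socket,id=chardev-$i-$j,path=/tmp/virtio-serial-$i-$j,server,nowait"
        OPTS="$OPTS -device virtserialport,bus=virtio-serial-$i.0,chardev=chardev-$i-$j,name=virtio.serial.$i.$j"
    done
done
# Count the generated port devices rather than launching qemu here:
echo "$OPTS" | tr ' ' '\n' | grep -c '^virtserialport,'
```

Splitting the options on whitespace and counting the `virtserialport` device entries confirms the 200-port total that comment 1 asks about.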
Comment 5 (Amit Shah)

(In reply to yunpingzheng from comment #2)
> when using the newest tree and booting the guest with multiple
> virtio-serial and virtio-console devices, the guest emits a call trace

The guest call trace is due to hvc. Can you check with only virtserialports, and no virtconsole ports?

Comment 6 (yunpingzheng)

Hi Amit,
Booting the guest with only virtserialports works OK, but during guest boot it still throws messages like:

    qemu-kvm: virtio_pci_set_host_notifier_internal: unable to init event notifier: -24
    qemu-kvm: virtio_pci_start_ioeventfd: failed. Fallback to a userspace (slower).

Comment 8 (Amit Shah)

(In reply to yunpingzheng from comment #6)
> Booting the guest with only virtserialports works OK

Do all 200 (or 40?) ports work fine? Can you check that you are really using 200 ports?

> qemu-kvm: virtio_pci_set_host_notifier_internal: unable to init event
> notifier: -24
> qemu-kvm: virtio_pci_start_ioeventfd: failed. Fallback to a userspace
> (slower).

This could be due to the large number of ports. Does reducing the number help?

Comment 9 (yunpingzheng)

(In reply to Amit Shah from comment #8)
> Do all 200 (or 40?) ports work fine? Can you check that you are really
> using 200 ports?

Yes, all 200 serial ports work fine.

On the host, run:

    for i in `seq 20`; do for j in `seq 10`; do echo `echo $j + $i*10 | bc` | nc -U virtio-serial-$i-$j; done; done

In the guest, run:

    while true; do for i in `seq 20`; do for j in `seq 10`; do cat virtio.serial.$i.$j; true; done; done; done

All ports work.
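The host-side loop above can be checked without a running guest by printing the per-port commands instead of executing them, as in this sketch. The socket paths are assumptions modeled on the names used in the thread (the guest side would read the matching `/dev/virtio-ports/virtio.serial.$i.$j` nodes that udev creates from the port names); `$((...))` arithmetic replaces the original `bc` pipeline.

```shell
#!/bin/sh
# Dry run of the host-side port test from comment 9: emit the nc command
# for each of the 200 ports so the numbering can be verified offline.
# Socket paths are hypothetical, modeled on the names in the thread.
gen_host_cmds() {
    for i in $(seq 20); do
        for j in $(seq 10); do
            # $(( i * 10 + j )) computes the same value as `echo $j + $i*10 | bc`
            echo "echo $(( i * 10 + j )) | nc -U /tmp/virtio-serial-$i-$j"
        done
    done
}
gen_host_cmds | wc -l   # one command per port: 200
```

Piping `gen_host_cmds` to `sh` (with the sockets actually present) would reproduce the writer side of the test.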
> This could be due to the large number of ports. Does reducing the number
> help?

When I boot the guest with 10 serial buses of 20 ports each, the message is not thrown. When I boot it with 20 serial buses of only 1 port each, the message is thrown. Reducing the number of buses seems to help.

Comment 10 (Amit Shah)

OK, good. Please open a new bug, against the kernel package, with the coredump you saw.

Yes, -24 is EMFILE, which means too many open files accessing that one resource. Reducing the number of buses will help. Since everything continues to work fine even then (just not as fast), and this is a configuration no one will really use (even 3 virtio-serial buses sound like overkill), we can safely close this bug.
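The -24 in the error message is a negated errno value, and errno 24 on Linux is indeed EMFILE ("Too many open files"): each ioeventfd qemu registers consumes a file descriptor, so a process with many buses and ports can hit its per-process descriptor limit. A quick way to confirm the errno mapping and inspect the limit (the `ulimit -n 4096` workaround is a plausible mitigation, not one verified in this bug):

```shell
#!/bin/sh
# Confirm that errno 24 is EMFILE ("Too many open files").
python3 -c 'import errno, os; print("errno 24 =", errno.errorcode[24], "-", os.strerror(24))'
# Show the current per-process file-descriptor limit (often 1024 by default):
ulimit -n
# A plausible (unverified here) workaround is to raise the limit in the
# shell that launches qemu-kvm, e.g.:
#   ulimit -n 4096
```

This matches the observed behavior: fewer buses means fewer event notifiers, so the limit is not reached and qemu does not fall back to the slower userspace path.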