Bug 1283508 - qemu-kvm: unable to start vhost net: 24: falling back on userspace virtio
Summary: qemu-kvm: unable to start vhost net: 24: falling back on userspace virtio
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: qemu-kvm-rhev
Version: 7.2
Hardware: ppc64le
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: jason wang
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Depends On:
Blocks: RHEV4.0PPC RHV4.1PPC
 
Reported: 2015-11-19 07:59 UTC by Shuang Yu
Modified: 2016-07-25 14:18 UTC (History)
CC List: 10 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-07-01 05:39:06 UTC
Target Upstream Version:
Embargoed:



Description Shuang Yu 2015-11-19 07:59:15 UTC
Description of problem:
Boot the guest with multiple queues and vhost=on, with queues equal to the number of vCPUs (queues=smp).

When queues > 111, QEMU prints:
(qemu) 2015-11-19T05:05:11.448028Z qemu-kvm: virtio_pci_set_host_notifier_internal: unable to init event notifier: -24
vhost VQ 1 notifier binding failed: 24
2015-11-19T05:05:11.456491Z qemu-kvm: unable to start vhost net: 24: falling back on userspace virtio

And when queues > 143, QEMU prints:
(qemu) 2015-11-19T02:50:29.903355Z qemu-kvm: Error binding guest notifier: 24
2015-11-19T02:50:29.903432Z qemu-kvm: unable to start vhost net: 24: falling back on userspace virtio

Version-Release number of selected component (if applicable):
kernel-3.10.0-327.el7.test.ppc64le
qemu-kvm-rhev-2.3.0-31.el7_2.2.ppc64le
SLOF-20150313-5.gitc89b0df.el7.noarch

How reproducible:
100%

Steps to Reproduce:
1. Boot the guest with multiple queues, vhost=on, and queues=smp=111:
/usr/libexec/qemu-kvm...-device virtio-net-pci,netdev=hostnet1,id=net1,mac=00:54:5a:52:5b:5a,vectors=224,mq=on -netdev tap,id=hostnet1,script=/etc/qemu-ifup,vhost=on,queues=111 -smp 111

2. Boot the guest with multiple queues, vhost=on, and queues=smp=112:
/usr/libexec/qemu-kvm...-device virtio-net-pci,netdev=hostnet1,id=net1,mac=00:54:5a:52:5b:5a,vectors=226,mq=on -netdev tap,id=hostnet1,script=/etc/qemu-ifup,vhost=on,queues=112 -smp 112

3. Boot the guest with multiple queues, vhost=on, and queues=smp=144:
/usr/libexec/qemu-kvm...-device virtio-net-pci,netdev=hostnet1,id=net1,mac=00:54:5a:52:5b:5a,vectors=290,mq=on -netdev tap,id=hostnet1,script=/etc/qemu-ifup,vhost=on,queues=144 -smp 144



Actual results:
After step 1, the guest boots up successfully and QEMU prints no error:
(qemu) 

After step 2, the guest boots up successfully and files can be copied with scp between host and guest, but QEMU prints:

(qemu) 2015-11-19T05:13:42.506173Z qemu-kvm: virtio_pci_set_host_notifier_internal: unable to init event notifier: -24
vhost VQ 1 notifier binding failed: 24
2015-11-19T05:13:42.552121Z qemu-kvm: unable to start vhost net: 24: falling back on userspace virtio

After step 3, the guest boots up successfully and files can be copied with scp between host and guest, but QEMU prints:

(qemu) 2015-11-18T09:41:33.654563Z qemu-kvm: Error binding guest notifier: 24
2015-11-18T09:41:33.654635Z qemu-kvm: unable to start vhost net: 24: falling back on userspace virtio
2015-11-18T09:43:59.275014Z qemu-kvm: Error binding guest notifier: 24
2015-11-18T09:43:59.275054Z qemu-kvm: unable to start vhost net: 24: falling back on userspace virtio



Expected results:
The guest boots up successfully and QEMU prints no errors.

Additional info:
Host:
# lscpu
Architecture:          ppc64le
Byte Order:            Little Endian
CPU(s):                160


Guest cmd:
# /usr/libexec/qemu-kvm -name 7.2le -machine pseries,accel=kvm,usb=off -m 8G -realtime mlock=off -uuid 515298d8-62a2-434c-8ce9-3640c72a1596 -monitor stdio -vga std -rtc base=utc -msg timestamp=on -usb -device usb-tablet,id=tablet1 -device spapr-vscsi,id=scsi0,reg=0x1000 -drive file=RHEL-7.2-20151030.0-Server-ppc64le-dvd1.iso,format=raw,cache=none,if=none,id=drive-scsi0 -device scsi-cd,bus=scsi0.0,drive=drive-scsi0,id=scsi0-0,bootindex=2 -drive file=RHEL-7.2-20151030.0-Server-ppc64le.qcow2,format=qcow2,if=none,id=drive-image,cache=none -device virtio-blk-pci,id=image,drive=drive-image,bootindex=1 -vnc 0:1 -device virtio-net-pci,netdev=hostnet1,id=net1,mac=00:54:5a:52:5b:5a,vectors=226,mq=on -netdev tap,id=hostnet1,script=/etc/qemu-ifup,vhost=on,queues=112 -smp 112
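A back-of-envelope estimate (an editorial sketch, not something stated in this report): given the default soft limit of 1024 file descriptors (see comment 3) and failures beginning once queues exceed 111, each vhost queue appears to cost on the order of nine descriptors (tap fd, vhost fd, and eventfds):

```python
# Editorial sketch: estimate descriptors consumed per vhost queue
# from the observed failure threshold. Both inputs come from this
# report: the default soft limit (1024, per comment 3) and the
# first queue count at which errno 24 appears (112).
FD_SOFT_LIMIT = 1024
FIRST_FAILING_QUEUES = 112

fds_per_queue = FD_SOFT_LIMIT / FIRST_FAILING_QUEUES
print(round(fds_per_queue, 1))  # prints 9.1 -- roughly nine fds per queue
```

The exact per-queue cost is an inference from the threshold, not a number QEMU reports; the point is simply that fd usage scales linearly with the queue count.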

Comment 2 Thomas Huth 2016-01-12 12:06:06 UTC
Looks similar to bug 1271060: the "-24" means EMFILE, "Too many open files". Does it work if you increase the maximum possible number of open file descriptors (with "ulimit -n")?
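The mapping from code 24 to EMFILE can be checked directly (a quick editorial illustration, not part of the original comment):

```python
import errno
import os

# Error code 24 in the qemu-kvm messages above is EMFILE on Linux:
# "Too many open files", i.e. the per-process fd limit was hit.
print(errno.EMFILE)               # 24
print(os.strerror(errno.EMFILE))  # Too many open files
```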

Comment 3 Shuang Yu 2016-01-13 05:37:55 UTC
After increasing the maximum number of open file descriptors with "ulimit -n 10240" and retesting, all the guests boot up successfully with no errors in QEMU.

Host version:
kernel-3.10.0-338.el7.ppc64le
qemu-kvm-rhev-2.3.0-31.el7_2.5.ppc64le
SLOF-20150313-5.gitc89b0df.el7.noarch

Steps:
1. Increase the maximum number of open file descriptors with "ulimit -n 10240":

# ulimit -n
1024

# ulimit -n 10240

# ulimit -n
10240

2. Boot the guest with multiple queues, vhost=on, and queues=smp=111:
/usr/libexec/qemu-kvm...-device virtio-net-pci,netdev=hostnet1,id=net1,mac=00:54:5a:52:5b:5a,vectors=224,mq=on -netdev tap,id=hostnet1,script=/etc/qemu-ifup,vhost=on,queues=111 -smp 111

3. Boot the guest with multiple queues, vhost=on, and queues=smp=112:
/usr/libexec/qemu-kvm...-device virtio-net-pci,netdev=hostnet1,id=net1,mac=00:54:5a:52:5b:5a,vectors=226,mq=on -netdev tap,id=hostnet1,script=/etc/qemu-ifup,vhost=on,queues=112 -smp 112

4. Boot the guest with multiple queues, vhost=on, and queues=smp=144:
/usr/libexec/qemu-kvm...-device virtio-net-pci,netdev=hostnet1,id=net1,mac=00:54:5a:52:5b:5a,vectors=290,mq=on -netdev tap,id=hostnet1,script=/etc/qemu-ifup,vhost=on,queues=144 -smp 144


Actual result:
All the guests boot up successfully with no errors in QEMU.
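One caveat about the workaround (an editorial note, not from the original comment): "ulimit -n" raises only the soft limit, only for the current shell, and only up to the hard limit for unprivileged users; the change does not persist across sessions:

```shell
# Soft limit: what the process actually gets. This is what
# "ulimit -n 10240" above changes, for this shell only.
ulimit -Sn

# Hard limit: the ceiling an unprivileged user may raise the soft
# limit to. Raising the hard limit itself requires root, e.g. via
# /etc/security/limits.conf.
ulimit -Hn
```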

Comment 5 Thomas Huth 2016-02-22 09:44:08 UTC
Ok, so how should we proceed here? Would it be acceptable to simply improve the error message with a hint that the number of possible open file descriptors should be increased with "ulimit -n"?

Comment 6 jason wang 2016-07-01 05:39:06 UTC
I think libvirt should increase the limit for us here. Not a qemu-kvm bug, so I am closing this as NOTABUG.
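For guests managed through libvirt, the limit in question can be raised in libvirt's QEMU driver configuration; a sketch (the 10240 value simply mirrors the manual workaround in comment 3 and is an assumption, not a value endorsed in this bug):

```ini
# /etc/libvirt/qemu.conf -- per-QEMU-process resource limits
# applied by libvirt when it launches a guest.
max_files = 10240
```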

