Bug 1283508 - qemu-kvm: unable to start vhost net: 24: falling back on userspace virtio
Status: CLOSED NOTABUG
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: qemu-kvm-rhev
Version: 7.2
Hardware: ppc64le
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assigned To: jason wang
QA Contact: Virtualization Bugs
Depends On:
Blocks: RHV4.1PPC RHEV4.0PPC
Reported: 2015-11-19 02:59 EST by Shuang Yu
Modified: 2016-07-25 10:18 EDT (History)
10 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2016-07-01 01:39:06 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Shuang Yu 2015-11-19 02:59:15 EST
Description of problem:
Boot the guest with multiple queues and vhost=on, with queues equal to the vCPU count (queues=smp).

When queues > 111, QEMU prints:
(qemu) 2015-11-19T05:05:11.448028Z qemu-kvm: virtio_pci_set_host_notifier_internal: unable to init event notifier: -24
vhost VQ 1 notifier binding failed: 24
2015-11-19T05:05:11.456491Z qemu-kvm: unable to start vhost net: 24: falling back on userspace virtio

And when queues > 143, QEMU prints:
(qemu) 2015-11-19T02:50:29.903355Z qemu-kvm: Error binding guest notifier: 24
2015-11-19T02:50:29.903432Z qemu-kvm: unable to start vhost net: 24: falling back on userspace virtio

Version-Release number of selected component (if applicable):
kernel-3.10.0-327.el7.test.ppc64le
qemu-kvm-rhev-2.3.0-31.el7_2.2.ppc64le
SLOF-20150313-5.gitc89b0df.el7.noarch

How reproducible:
100%

Steps to Reproduce:
1. Boot the guest with multiple queues and vhost=on, queues=smp=111:
/usr/libexec/qemu-kvm...-device virtio-net-pci,netdev=hostnet1,id=net1,mac=00:54:5a:52:5b:5a,vectors=224,mq=on -netdev tap,id=hostnet1,script=/etc/qemu-ifup,vhost=on,queues=111 -smp 111

2. Boot the guest with multiple queues and vhost=on, queues=smp=112:
/usr/libexec/qemu-kvm...-device virtio-net-pci,netdev=hostnet1,id=net1,mac=00:54:5a:52:5b:5a,vectors=226,mq=on -netdev tap,id=hostnet1,script=/etc/qemu-ifup,vhost=on,queues=112 -smp 112

3. Boot the guest with multiple queues and vhost=on, queues=smp=144:
/usr/libexec/qemu-kvm...-device virtio-net-pci,netdev=hostnet1,id=net1,mac=00:54:5a:52:5b:5a,vectors=290,mq=on -netdev tap,id=hostnet1,script=/etc/qemu-ifup,vhost=on,queues=144 -smp 144



Actual results:
After step 1, the guest boots up successfully and QEMU prints no errors:
(qemu) 

After step 2, the guest boots up successfully and files can be copied with scp between guest and host in both directions, but QEMU prints:

(qemu) 2015-11-19T05:13:42.506173Z qemu-kvm: virtio_pci_set_host_notifier_internal: unable to init event notifier: -24
vhost VQ 1 notifier binding failed: 24
2015-11-19T05:13:42.552121Z qemu-kvm: unable to start vhost net: 24: falling back on userspace virtio

After step 3, the guest boots up successfully and files can be copied with scp between guest and host in both directions, but QEMU prints:

(qemu) 2015-11-18T09:41:33.654563Z qemu-kvm: Error binding guest notifier: 24
2015-11-18T09:41:33.654635Z qemu-kvm: unable to start vhost net: 24: falling back on userspace virtio
2015-11-18T09:43:59.275014Z qemu-kvm: Error binding guest notifier: 24
2015-11-18T09:43:59.275054Z qemu-kvm: unable to start vhost net: 24: falling back on userspace virtio



Expected results:
The guest boots up successfully and QEMU reports no errors.

Additional info:
Host:
# lscpu
Architecture:          ppc64le
Byte Order:            Little Endian
CPU(s):                160


Guest cmd:
# /usr/libexec/qemu-kvm -name 7.2le -machine pseries,accel=kvm,usb=off -m 8G -realtime mlock=off -uuid 515298d8-62a2-434c-8ce9-3640c72a1596 -monitor stdio -vga std -rtc base=utc -msg timestamp=on -usb -device usb-tablet,id=tablet1 -device spapr-vscsi,id=scsi0,reg=0x1000 -drive file=RHEL-7.2-20151030.0-Server-ppc64le-dvd1.iso,format=raw,cache=none,if=none,id=drive-scsi0 -device scsi-cd,bus=scsi0.0,drive=drive-scsi0,id=scsi0-0,bootindex=2 -drive file=RHEL-7.2-20151030.0-Server-ppc64le.qcow2,format=qcow2,if=none,id=drive-image,cache=none -device virtio-blk-pci,id=image,drive=drive-image,bootindex=1 -vnc 0:1 -device virtio-net-pci,netdev=hostnet1,id=net1,mac=00:54:5a:52:5b:5a,vectors=226,mq=on -netdev tap,id=hostnet1,script=/etc/qemu-ifup,vhost=on,queues=112 -smp 112
Comment 2 Thomas Huth 2016-01-12 07:06:06 EST
Looks similar to bug 1271060: the "-24" means EMFILE, "Too many open files". Does it work if you increase the maximum number of open file descriptors (with "ulimit -n")?
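Comment 2's diagnosis can be checked directly: a quick sketch that queries the C library's errno table (python3 is used here only as a convenient way to call strerror; it is not part of the original report):

```shell
# errno 24 on Linux is EMFILE, the per-process "Too many open files" error,
# which matches the "-24" / "24" codes in the QEMU messages above.
python3 -c 'import errno, os; print(errno.errorcode[24], "-", os.strerror(24))'
# prints: EMFILE - Too many open files
```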
Comment 3 Shuang Yu 2016-01-13 00:37:55 EST
After increasing the maximum number of open file descriptors with "ulimit -n 10240" and retesting, all the guests boot up successfully with no errors from QEMU.

Host version:
kernel-3.10.0-338.el7.ppc64le
qemu-kvm-rhev-2.3.0-31.el7_2.5.ppc64le
SLOF-20150313-5.gitc89b0df.el7.noarch

Steps:
1. Increase the maximum number of open file descriptors with "ulimit -n 10240":

# ulimit -n
1024

# ulimit -n 10240

# ulimit -n
10240

2. Boot the guest with multiple queues and vhost=on, queues=smp=111:
/usr/libexec/qemu-kvm...-device virtio-net-pci,netdev=hostnet1,id=net1,mac=00:54:5a:52:5b:5a,vectors=224,mq=on -netdev tap,id=hostnet1,script=/etc/qemu-ifup,vhost=on,queues=111 -smp 111

3. Boot the guest with multiple queues and vhost=on, queues=smp=112:
/usr/libexec/qemu-kvm...-device virtio-net-pci,netdev=hostnet1,id=net1,mac=00:54:5a:52:5b:5a,vectors=226,mq=on -netdev tap,id=hostnet1,script=/etc/qemu-ifup,vhost=on,queues=112 -smp 112

4. Boot the guest with multiple queues and vhost=on, queues=smp=144:
/usr/libexec/qemu-kvm...-device virtio-net-pci,netdev=hostnet1,id=net1,mac=00:54:5a:52:5b:5a,vectors=290,mq=on -netdev tap,id=hostnet1,script=/etc/qemu-ifup,vhost=on,queues=144 -smp 144


Actual result:
All the guests boot up successfully with no errors from QEMU.
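A note on the workaround: "ulimit -n" is per-shell, so the raised limit only applies to processes started from that shell. A minimal sketch (the 10240 used in the steps above is one choice; raising the soft limit to the hard limit, as shown here, is the most a non-root shell can do):

```shell
# Raise the soft open-file limit inside a subshell so the change does not
# leak into the calling shell; qemu-kvm would then be launched from it.
(
    hard=$(ulimit -Hn)    # the hard limit caps how far the soft limit can go
    ulimit -n "$hard"     # raise the soft limit up to the hard limit
    ulimit -n             # verify the new soft limit is in effect
    # /usr/libexec/qemu-kvm ... would be started here
)
```

For a persistent change, the usual mechanisms are nofile entries in /etc/security/limits.conf or a systemd LimitNOFILE= setting for the launching service.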
Comment 5 Thomas Huth 2016-02-22 04:44:08 EST
OK, so how should we proceed here? Would it be acceptable to simply improve the error message with a hint that the maximum number of open file descriptors should be increased with "ulimit -n"?
Comment 6 jason wang 2016-07-01 01:39:06 EDT
I think libvirt should increase the limit for us here. This is not a qemu-kvm bug, so I am closing this as NOTABUG.
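For libvirt-managed guests, the limit can indeed be raised on the libvirt side: qemu.conf has a max_files setting that controls the open-file limit for each QEMU process (the 32768 value below is illustrative). A sketch, performed on a scratch copy rather than the live /etc/libvirt/qemu.conf:

```shell
# Uncomment and set max_files in a copy of qemu.conf; on a real host this
# edit goes in /etc/libvirt/qemu.conf, followed by restarting libvirtd.
conf=$(mktemp)
echo '#max_files = 32768' > "$conf"                      # commented-out default line
sed -i 's/^#max_files = .*/max_files = 32768/' "$conf"   # enable it
grep '^max_files' "$conf"
# prints: max_files = 32768
rm -f "$conf"
```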
