Bug 1370356
Summary: | [ppc64le] [data-plane] qemu-kvm: virtio_pci_set_host_notifier_internal: unable to init event notifier: -24 | | |
---|---|---|---|
Product: | Red Hat Enterprise Linux 7 | Reporter: | Zhengtong <zhengtli> |
Component: | qemu-kvm-rhev | Assignee: | Thomas Huth <thuth> |
Status: | CLOSED DUPLICATE | QA Contact: | Virtualization Bugs <virt-bugs> |
Severity: | unspecified | Docs Contact: | |
Priority: | unspecified | ||
Version: | 7.3 | CC: | knoel, mdeng, qzhang, thuth, virt-maint, zhengtli |
Target Milestone: | rc | ||
Target Release: | --- | ||
Hardware: | ppc64le | ||
OS: | Unspecified | ||
Whiteboard: | |||
Fixed In Version: | Doc Type: | If docs needed, set a value | |
Doc Text: | Story Points: | --- | |
Clone Of: | Environment: | ||
Last Closed: | 2016-09-06 06:52:02 UTC | Type: | Bug |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | Category: | --- | |
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: | | | |
Description
Zhengtong
2016-08-26 03:15:50 UTC
Thomas Huth:
This sounds like a duplicate of BZ 1271060 - please use "ulimit -n xxx" to increase the number of file descriptors. However, I wonder why x86 works differently here ... do we need more file descriptors on ppc than on x86?

Zhengtong:
Hi Thomas, do you have a suggested value for ulimit? I set the fd number to 81920 with "ulimit -n 81920", then ran the guest boot command, but the error message was still there.

Thomas Huth:
81920 sounds plenty, so it is strange that it is still not working with that value. When I get some spare minutes, I'll take a closer look at why this is still not sufficient in this case...

Thomas Huth:
FWIW, I can reproduce the problem with the following "shortened" command line:

/usr/libexec/qemu-kvm -nographic -vga none -enable-kvm -object iothread,id=iothread1 `for ((x=1;x<160;x++)); do echo " -drive file=/scratch/disk$x.qcow2,if=none,id=drive_$x,format=qcow2,cache=none,aio=native -device virtio-scsi-pci,id=virt_$x,iothread=iothread1,multifunction=on,addr=\`printf "%X.%X" $(($x / 8)) $(($x % 8))\` -device scsi-hd,id=scsi_test_$x,drive=drive_$x,bus=virt_$x.0" ; done`

Zhengtong, for me, the error messages go away when I use "ulimit -n 81920" ... did you by any chance run the qemu-kvm program with a different user ID than the one you set the ulimit for? If not, could you please attach the whole command line that you use to run qemu, in case I missed something in my "shortened" version? Thanks!

Zhengtong:
Thomas, I always run the qemu-kvm program as "root". The full command is in comment #c0 of this bug. I will do the test again to confirm, and will update the result here if it differs from my previous test.

Thomas Huth:
OK, thanks for confirming that you're running everything as "root" (I thought maybe you were using libvirt - libvirt runs the qemu binary as a different user - but it sounds like you're running qemu-kvm directly, without libvirt, right?). Another thing to check: what's your global maximum number of file handles? Could you please run the following command and post the result here: sysctl fs.file-max. If that value is very low, please try to increase it with sysctl -w fs.file-max=... and run the test again.

Zhengtong:
Thomas, the host I used to reproduce the issue was released, so I reserved another host yesterday. With this host I can reproduce the issue with the default ulimit value, but it disappeared after I set "ulimit -n 81920". The issue never came up again after several more tries, so maybe there was some wrong configuration in my previous test. All in all, I think your original analysis is correct. Please change the bug status or resolve it as you see fit, thanks.

Zhengtong:
Yes, I always boot the guest by running the qemu-kvm program directly, without libvirt.

Thomas Huth:
OK, thanks for checking again! I assume that fs.file-max was set to a low value on your original host for some reason, and it was back to a proper value on the second host that you tried. It now sounds like this is the very same problem as in BZ 1271060, so I'm closing this one as a duplicate.

*** This bug has been marked as a duplicate of bug 1271060 ***
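For reference, the error code in the summary, -24, is -EMFILE ("Too many open files"), which is why the discussion centers on file-descriptor limits. The two checks suggested in the thread can be sketched as a small shell snippet; note that the value 81920 is simply the one used in this discussion, not a documented QEMU requirement:

```shell
#!/bin/sh
# Sketch of the file-descriptor checks discussed in this bug.
# 81920 is the value used in the thread, not an official requirement.

# Per-process soft limit on open files (what the virtio event
# notifiers run into when there are many devices):
echo "per-process soft limit: $(ulimit -n)"

# System-wide limit on open files (same value sysctl fs.file-max reports):
echo "system-wide limit: $(cat /proc/sys/fs/file-max)"

# Raise the per-process limit before starting qemu-kvm. This fails if
# the requested value exceeds the hard limit, hence the fallback message.
ulimit -n 81920 2>/dev/null || echo "could not raise soft limit (hard limit too low?)"

# Raising the system-wide limit needs root, e.g.:
#   sysctl -w fs.file-max=...
```

Since ulimit only affects the current shell and its children, it has to be run in the same shell, and as the same user, that then starts qemu-kvm - which is exactly the mismatch Thomas asks about above.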