| Summary: | Booting VM with 232 virtio disks (multifunction=on) caused QEMU error (unable to map ioeventfd: -28) | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 6 | Reporter: | FuXiangChun <xfu> |
| Component: | qemu-kvm | Assignee: | Amos Kong <akong> |
| Status: | CLOSED UPSTREAM | QA Contact: | Virtualization Bugs <virt-bugs> |
| Severity: | medium | Docs Contact: | |
| Priority: | medium | | |
| Version: | 6.2 | CC: | acathrow, ailan, akong, bcao, bsarathy, juzhang, michen, mkenneth, qzhang, rhod, sluo, tburke, virt-maint, vrozenfe, wdai |
| Target Milestone: | rc | | |
| Target Release: | 6.3 | | |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2012-03-13 08:18:08 UTC | Type: | --- |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Attachments: | | | |
Description FuXiangChun 2011-11-14 07:34:48 UTC
Created attachment 533447 [details]
win7 BSOD snapshot
Created attachment 533448 [details]
boot 232 virtio disks command line
Additional info: repeating the above steps only reproduces the BSOD; I can't get a memory dump file.

Hi Vadim,
Does the virtio-win driver support multifunction now? Do you think this is a virtio-win RFE bug?
Best Regards,
Mike

If I boot rhel6.2 and rhel5.7 guests with 232 virtio disks (multifunction=on), they work fine, so only Windows guests have this issue.

(In reply to comment #6)
> Hi, Vadim
>
> Does virtio-win driver support multifunction now ?
> Do you think this is a virtio-win RFE bug ?
>
> Best Regards,
> Mike

Hi Mike,
Technically, viostor (like any other Windows driver) doesn't care whether it is running on top of a multi-function or a single-function controller; Windows treats each function as a device. However, there could be some limitation on the Storport miniport driver side. I need to try generating and analyzing a crash dump file before I can give you a more precise answer.
Cheers,
Vadim.

qemu-kvm: virtio_pci_set_host_notifier_internal: unable to map ioeventfd: -28
-28 is the ENOSPC error. NR_IOBUS_DEVS in the rhel6 kernel is limited to 200; upstream has already increased it to 300.
Before applying my patch [1], qemu outputs this error when testing with a rhel6 guest.
After applying this patch to the rhel6 kernel, qemu-kvm no longer outputs this error.
rhel6 guest: OK (guest can identify 232 disks)
rhel5 guest: OK (guest can identify 232 disks)
winXp guest: OK (guest can identify 232 disks)
win7 guest: BSOD
The BSOD occurs before the system finishes booting, so I can't get a memory dump.
We need to investigate whether it's a bug in the virtio-win driver.
[1]
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 72a990d..43252c5 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -55,7 +55,7 @@ extern struct kmem_cache *kvm_vcpu_cache;
*/
struct kvm_io_bus {
int dev_count;
-#define NR_IOBUS_DEVS 200
+#define NR_IOBUS_DEVS 300
struct kvm_io_device *devs[NR_IOBUS_DEVS];
};
brew build:
https://brewweb.devel.redhat.com/taskinfo?taskID=3889443
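For reference, below is a minimal, self-contained sketch (not taken from the QEMU source) of how a virtqueue notify region is bound to an in-kernel ioeventfd via the KVM_IOEVENTFD ioctl, and how a full io bus surfaces as ENOSPC (-28). The notify address and datamatch value are illustrative only.

/*
 * Sketch only: each successful KVM_IOEVENTFD assignment consumes one slot on
 * the kernel's kvm_io_bus; once dev_count reaches NR_IOBUS_DEVS the ioctl
 * fails with ENOSPC, which QEMU reports as "unable to map ioeventfd: -28".
 */
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/eventfd.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

int main(void)
{
    int kvm = open("/dev/kvm", O_RDWR);
    if (kvm < 0) {
        perror("open /dev/kvm");
        return 1;
    }

    int vm  = ioctl(kvm, KVM_CREATE_VM, 0);     /* VM file descriptor */
    int efd = eventfd(0, 0);                    /* signalled on guest notify */

    struct kvm_ioeventfd ioe = {
        .addr      = 0xfebf1000,                /* hypothetical notify address */
        .len       = 2,                         /* 16-bit queue-notify write */
        .fd        = efd,
        .flags     = KVM_IOEVENTFD_FLAG_DATAMATCH,
        .datamatch = 0,                         /* virtqueue index to match */
    };

    /* With 232 multifunction virtio disks, repeated assignments eventually
     * hit the NR_IOBUS_DEVS limit and fail with errno == ENOSPC (28). */
    if (ioctl(vm, KVM_IOEVENTFD, &ioe) < 0) {
        fprintf(stderr, "KVM_IOEVENTFD: %s\n", strerror(errno));
        return 1;
    }

    return 0;
}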
After talking with Ronen, I created a new bz (768981) to split this bz into two parts.

Before applying the patch in comment #9, qemu outputs the ENOSPC error for all guests (not only win7):
> qemu-kvm: virtio_pci_set_host_notifier_internal: unable to map ioeventfd: -28
> qemu-kvm: virtio_pci_start_ioeventfd: failed. Fallback to a userspace (slower).

This is a bug in the host kernel; my patch in comment #9 fixes it, and qemu no longer outputs this error once the patch is applied. However, the win7 BSOD occurs with or without this patch (virtio-win-1.4.0), while the guest does not BSOD with virtio-win-1.2.0. So we will only fix the ENOSPC problem in this bz and continue to track the win7 BSOD issue in bz 768981.

*** Bug 802226 has been marked as a duplicate of this bug. ***

It should work with 196 devices. It was tested with 186 devices.

Since this is not a customer bug, I am moving it to RHEL 6.4.

Requires some more upstream work. Upstream qemu will abort when it fails to allocate an ioeventfd; I will post a patch to fix it.
Internal qemu-kvm falls back to userspace when it fails to allocate an ioeventfd, which is expected. We cannot fix this problem by increasing the io bus device limitation in the kernel, because the limitation would be exceeded again if we use a pci-bridge.
So I could close this bug as NOTABUG.
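For illustration, here is a minimal sketch of the fallback behaviour described above, using hypothetical helper names (this is not the qemu-kvm source): the device first tries to bind its queue notify to an ioeventfd, and when the kernel refuses, for example with ENOSPC, the notification is simply handled by the userspace I/O path instead.

/*
 * Sketch of the ioeventfd-or-userspace fallback pattern (assumption: not the
 * actual qemu-kvm code; try_assign_ioeventfd() is a hypothetical stand-in
 * that fails here so the fallback path is exercised).
 */
#include <errno.h>
#include <stdio.h>

/* Pretend the kernel io bus is already full. */
static int try_assign_ioeventfd(int queue)
{
    (void)queue;
    return -ENOSPC;   /* what KVM returns once NR_IOBUS_DEVS is exhausted */
}

static void start_ioeventfd(int queue)
{
    int r = try_assign_ioeventfd(queue);
    if (r < 0) {
        /* Mirrors the messages quoted in this bug. */
        fprintf(stderr, "unable to map ioeventfd: %d\n", r);
        fprintf(stderr, "failed. Fallback to a userspace (slower).\n");
        /* The disk still works: the guest's notify writes are trapped and
         * handled by the userspace I/O thread instead of an eventfd. */
        return;
    }
    /* Fast path: the in-kernel eventfd wakes the I/O thread directly. */
}

int main(void)
{
    start_ioeventfd(0);
    return 0;
}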
Maybe we can improve the error message by using strerror(), but it's not important:
- error_report("%s: unable to unmap ioeventfd: %d",
- __func__, r);
+ error_report("%s: unable to unmap ioeventfd: %s",
+ __func__, strerror(-r));
> Current error message:
qemu-kvm: virtio_pci_set_host_notifier_internal: unable to map ioeventfd: -28
qemu-kvm: virtio_pci_start_ioeventfd: failed. Fallback to a userspace (slower).
> Fixed error message:
qemu-kvm: virtio_pci_set_host_notifier_internal: unable to map ioeventfd: No space left on device
qemu-kvm: virtio_pci_start_ioeventfd: failed. Fallback to a userspace (slower).
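For completeness, a standalone illustration of the strerror() change (plain C, nothing QEMU-specific assumed): the helpers return negative errno values, so the sign must be flipped before calling strerror().

#include <stdio.h>
#include <string.h>

int main(void)
{
    int r = -28;                                            /* -ENOSPC as returned by KVM */
    printf("unable to map ioeventfd: %d\n", r);             /* current message */
    printf("unable to map ioeventfd: %s\n", strerror(-r));  /* fixed message */
    return 0;
}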
*** Bug 1130360 has been marked as a duplicate of this bug. *** |