Bug 894995 - core dump when install windows guest with x-data-plane=on
Summary: core dump when install windows guest with x-data-plane=on
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: qemu-kvm
Version: 6.4
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: rc
Assignee: Stefan Hajnoczi
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Depends On:
Blocks: 841111
 
Reported: 2013-01-14 09:16 UTC by Sibiao Luo
Modified: 2013-02-21 07:45 UTC (History)
15 users

Fixed In Version: qemu-kvm-0.12.1.2-2.353.el6
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2013-02-21 07:45:49 UTC
Target Upstream Version:
Embargoed:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2013:0527 0 normal SHIPPED_LIVE qemu-kvm bug fix and enhancement update 2013-02-20 21:51:08 UTC

Description Sibiao Luo 2013-01-14 09:16:46 UTC
Description of problem:
core dump when install win8 64bit guest with x-data-plane=on

Version-Release number of selected component (if applicable):
host info:
# uname -r && rpm -q qemu-kvm
2.6.32-351.el6.x86_64
qemu-kvm-0.12.1.2-2.351.el6.x86_64
virtio-win-1.5.4

guest info:
win8 64bit

How reproducible:
always

Steps to Reproduce:
1.create a 30G disk.
# qemu-img create -f raw windows_8_enterprise_x64.raw 30G
Formatting 'windows_8_enterprise_x64.raw', fmt=raw size=32212254720
2.install win8 64bit guest with x-data-plane=on 
e.g:/usr/libexec/qemu-kvm -cpu SandyBridge -enable-kvm -m 2048 -smp 2,sockets=2,cores=1,threads=1 -no-kvm-pit-reinjection -usb -device usb-tablet,id=input0 -name virtual-blk-data-plane -uuid 990ea161-6b67-47b2-b803-19fb01d30d30 -rtc base=localtime,clock=host,driftfix=slew -device virtio-serial-pci,id=virtio-serial0,max_ports=16,vectors=0,bus=pci.0,addr=0x4 -chardev socket,id=channel1,path=/tmp/helloworld1,server,nowait -device virtserialport,chardev=channel1,name=com.redhat.rhevm.vdsm,bus=virtio-serial0.0,id=port1 -chardev socket,id=channel2,path=/tmp/helloworld2,server,nowait -device virtserialport,chardev=channel2,name=com.redhat.rhevm.vdsm,bus=virtio-serial0.0,id=port2 -drive file=/home/windows_8_enterprise_x64.raw,if=none,id=drive-virtio-disk,format=raw,cache=none,aio=native,werror=stop,rerror=stop,serial="QEMU-DISK1" -device virtio-blk-pci,bus=pci.0,addr=0x5,scsi=off,x-data-plane=on,drive=drive-virtio-disk,id=virtio-disk,bootindex=1 -device virtio-balloon-pci,id=ballooning,bus=pci.0,addr=0x6 -global PIIX4_PM.disable_s3=0 -global PIIX4_PM.disable_s4=0 -device usb-ehci,id=ehci,bus=pci.0,addr=0x7 -chardev spicevmc,name=usbredir,id=usbredirchardev1 -device usb-redir,chardev=usbredirchardev1,id=usbredirdev1,bus=ehci.0,debug=3 -k en-us -boot menu=on -qmp tcp:0:4444,server,nowait -serial unix:/tmp/ttyS0,server,nowait -monitor stdio -vnc :1 -drive file=/home/en_windows_8_enterprise_x64_dvd_917522.iso,if=none,media=cdrom,id=drive-data-disk,format=raw -device ide-drive,drive=drive-data-disk,id=data-disk,bus=ide.0,unit=0,bootindex=0 -drive file=/usr/share/virtio-win/virtio-win-1.5.4.vfd,if=none,id=drive-fdc0-0-0,readonly=on,format=raw -global isa-fdc.driveA=drive-fdc0-0-0 -drive file=/usr/share/virtio-win/virtio-win-1.5.4.iso,if=none,media=cdrom,format=raw,id=drive-ide1-0-1 -device ide-drive,drive=drive-ide1-0-1,id=ide1-0-1,bus=ide.1,unit=1

Actual results:
after step 2, qemu core dump.
qemu-kvm: /builddir/build/BUILD/qemu-kvm-0.12.1.2/hw/msix.c:644: msix_set_mask_notifier: Assertion `!dev->msix_mask_notifier' failed.

Program received signal SIGABRT, Aborted.
[Switching to Thread 0x7fffef659700 (LWP 10512)]
0x00007ffff57418a5 in raise () from /lib64/libc.so.6
(gdb) bt
#0  0x00007ffff57418a5 in raise () from /lib64/libc.so.6
#1  0x00007ffff5743085 in abort () from /lib64/libc.so.6
#2  0x00007ffff573aa1e in __assert_fail_base () from /lib64/libc.so.6
#3  0x00007ffff573aae0 in __assert_fail () from /lib64/libc.so.6
#4  0x00007ffff7e0603a in msix_set_mask_notifier (dev=0x7ffff8753010, f=<value optimized out>) at /usr/src/debug/qemu-kvm-0.12.1.2/hw/msix.c:644
#5  0x00007ffff7df5a17 in virtio_pci_set_guest_notifiers (opaque=0x7ffff8753010, assign=true) at /usr/src/debug/qemu-kvm-0.12.1.2/hw/virtio-pci.c:693
#6  0x00007ffff7e05394 in virtio_blk_data_plane_start (s=0x7ffff916ce40) at /usr/src/debug/qemu-kvm-0.12.1.2/hw/dataplane/virtio-blk.c:391
#7  0x00007ffff7df2f8c in virtio_blk_handle_output (vdev=<value optimized out>, vq=<value optimized out>) at /usr/src/debug/qemu-kvm-0.12.1.2/hw/virtio-blk.c:436
#8  0x00007ffff7df5670 in virtio_pci_set_host_notifier_internal (proxy=0x7ffff8753010, n=0, assign=<value optimized out>)
    at /usr/src/debug/qemu-kvm-0.12.1.2/hw/virtio-pci.c:224
#9  0x00007ffff7e052c2 in virtio_blk_data_plane_stop (s=0x7ffff916ce40) at /usr/src/debug/qemu-kvm-0.12.1.2/hw/dataplane/virtio-blk.c:447
#10 0x00007ffff7df6467 in virtio_set_status (opaque=0x7ffff8753010, addr=<value optimized out>, val=4) at /usr/src/debug/qemu-kvm-0.12.1.2/hw/virtio.h:138
#11 virtio_ioport_write (opaque=0x7ffff8753010, addr=<value optimized out>, val=4) at /usr/src/debug/qemu-kvm-0.12.1.2/hw/virtio-pci.c:367
#12 0x00007ffff7e027f7 in kvm_handle_io (env=0x7ffff8705010) at /usr/src/debug/qemu-kvm-0.12.1.2/kvm-all.c:144
#13 kvm_run (env=0x7ffff8705010) at /usr/src/debug/qemu-kvm-0.12.1.2/qemu-kvm.c:1048
#14 0x00007ffff7e02a29 in kvm_cpu_exec (env=<value optimized out>) at /usr/src/debug/qemu-kvm-0.12.1.2/qemu-kvm.c:1743
#15 0x00007ffff7e0390d in kvm_main_loop_cpu (_env=0x7ffff8705010) at /usr/src/debug/qemu-kvm-0.12.1.2/qemu-kvm.c:2004
#16 ap_main_loop (_env=0x7ffff8705010) at /usr/src/debug/qemu-kvm-0.12.1.2/qemu-kvm.c:2060
#17 0x00007ffff7738851 in start_thread () from /lib64/libpthread.so.0
#18 0x00007ffff57f790d in clone () from /lib64/libc.so.6
(gdb) q
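The assertion in the trace above enforces that a MSI-X mask notifier may be registered only once per device. A minimal sketch of that invariant, with simplified, hypothetical types (the real qemu-kvm structures are more involved):

```c
#include <assert.h>
#include <stddef.h>

typedef void (*MSIXMaskNotifier)(void *opaque, int vector, int masked);

typedef struct {
    /* NULL while no notifier is installed */
    MSIXMaskNotifier msix_mask_notifier;
} PCIDevice;

static void dummy_notifier(void *opaque, int vector, int masked)
{
    (void)opaque; (void)vector; (void)masked;
}

static void msix_set_mask_notifier(PCIDevice *dev, MSIXMaskNotifier f)
{
    /* This is the assertion that fires in the backtrace: setting a
     * notifier while one is already installed is a bug in the caller. */
    assert(!dev->msix_mask_notifier);
    dev->msix_mask_notifier = f;
}

static void msix_unset_mask_notifier(PCIDevice *dev)
{
    assert(dev->msix_mask_notifier);
    dev->msix_mask_notifier = NULL;
}
```

In the backtrace, `virtio_blk_data_plane_stop` re-enters `virtio_blk_data_plane_start` via `virtio_blk_handle_output`, so the set call happens a second time without an intervening unset.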

Expected results:
install win8 64bit guest with x-data-plane=on successfully.

Additional info:

Comment 1 Sibiao Luo 2013-01-14 10:16:42 UTC
Installing a win7 64bit guest with x-data-plane=on also hits this core dump.

Installing a rhel6.4 guest with x-data-plane=on does not hit this issue.

Comment 2 juzhang 2013-01-14 10:50:16 UTC
Setting the blocker flag for the following reason from the KVM POV:
Windows guests cannot be installed because of the core dump.

Comment 3 Stefan Hajnoczi 2013-01-14 12:25:01 UTC
Thanks for reporting this crash.  I'll send a patch for it tomorrow.

Please remember that Windows guests are not yet supported; that requires a separate patch series, which I am also sending.  I will combine the Windows guest fixes and this crash fix into the next BREW build so that Windows testing will be possible.

Comment 4 Sibiao Luo 2013-01-15 01:50:56 UTC
(In reply to comment #3)
> Thanks for reporting this crash.  I'll send a patch for it tomorrow.
> 
> Please remember that Windows guests are not yet supported, that requires a
> separate patch series which I am also sending.
But I remember that the virtio-blk-data-plane v4 RPMs supported Windows guests, and I ran them with a Windows guest.
> I will combine the Windows guest fixes and this crash fix into the next BREW 
> build so that Windows testing will be possible.
OK, I will wait for your patch and then run the Windows guest functional testing.

Comment 5 juzhang 2013-01-15 02:50:54 UTC
(In reply to comment #3)
> Thanks for reporting this crash.  I'll send a patch for it tomorrow.
> 
> Please remember that Windows guests are not yet supported, that requires a
> separate patch series which I am also sending.  I will combine the Windows
> guest fixes and this crash fix into the next BREW build so that Windows
> testing will be possible.
Hi, Stefan

If so, KVM QE will cancel virtio block functional testing with data plane enabled for Windows guests this time, and keep testing with Linux guests.

Please build the brew package that supports Windows guests before or in snapshot 4; KVM QE needs buffer time to run functional testing with Windows guests, thanks.

Best Regards & Thanks,
Junyi

Comment 6 Stefan Hajnoczi 2013-01-15 13:43:53 UTC
Please confirm that this build fixes the problem:

https://brewweb.devel.redhat.com/taskinfo?taskID=5273068

I have included the Windows patches in this build, too.  You can use it to do Windows guest testing.

Comment 7 Ademar Reis 2013-01-15 14:54:35 UTC
Hopefully a dupe of Bug 895392. If not, we'll probably have to wait until RHEL6.5 to fix it (unless Stefan has a trivial patch that can be reviewed and included tomorrow, in snapshot 4).

Comment 8 Stefan Hajnoczi 2013-01-15 15:56:39 UTC
It's not a dupe of Bug 895392.

I am sending the fix for this bug upstream and also to the list.  It would be good to include it in Snapshot 4 since it could potentially affect non-Windows guests too.

Comment 9 Sibiao Luo 2013-01-16 10:27:30 UTC
(In reply to comment #6)
> Please confirm that this build fixes the problem:
> 
> https://brewweb.devel.redhat.com/taskinfo?taskID=5273068
> 
> I have included the Windows patches in this build, too.  You can use it to
> do Windows guest testing.

Met an issue on a Windows guest when testing this build. Doing stop/cont on the Windows guest in the HMP monitor, 'info status' reports the VM status as running, but the guest actually hangs, and the HMP monitor outputs 'qemu-kvm: Guest moved used index from 0 to 14263'. stefanha said this error is still related to the core dump; it seems this patch did not fix it completely.

(qemu) stop
(qemu) info status 
VM status: paused
(qemu) cont
(qemu) qemu-kvm: Guest moved used index from 0 to 14263

(qemu) info status 
VM status: running

host info:
2.6.32-355.el6.x86_64
qemu-kvm-0.12.1.2-2.351.el6.test.x86_64
guest info:
windows7 64bit

qemu-kvm command line:
# /usr/libexec/qemu-kvm -M rhel6.4.0 -cpu host -enable-kvm -m 2048 -smp 2,sockets=2,cores=1,threads=1 -no-kvm-pit-reinjection -usb -device usb-tablet,id=input0 -name virtual-blk-data-plane -uuid 3a83313c-b83c-4e8f-993c-53440389f893 -rtc base=localtime,clock=host,driftfix=slew -device virtio-serial-pci,id=virtio-serial0,max_ports=16,vectors=0,bus=pci.0,addr=0x4 -chardev socket,id=channel1,path=/tmp/helloworld1,server,nowait -device virtserialport,chardev=channel1,name=com.redhat.rhevm.vdsm,bus=virtio-serial0.0,id=port1 -chardev socket,id=channel2,path=/tmp/helloworld2,server,nowait -device virtserialport,chardev=channel2,name=com.redhat.rhevm.vdsm,bus=virtio-serial0.0,id=port2 -drive file=/home/windows_7_ultimate_with_sp1_x64.raw,if=none,id=system-virtio-disk,format=raw,cache=none,aio=native,werror=stop,rerror=stop,serial="QEMU-DISK1" -device virtio-blk-pci,bus=pci.0,addr=0x5,scsi=off,x-data-plane=on,drive=system-virtio-disk,id=system-disk,bootindex=1 -device virtio-balloon-pci,id=ballooning,bus=pci.0,addr=0x6 -netdev tap,id=hostnet0,vhost=on,script=/etc/qemu-ifup -device virtio-net-pci,netdev=hostnet0,id=virtio-net-pci0,mac=08:9E:01:49:D8:5A,bus=pci.0,addr=0x7,bootindex=2 -device usb-ehci,id=ehci,bus=pci.0,addr=0x8 -chardev spicevmc,name=usbredir,id=usbredirchardev1 -device usb-redir,chardev=usbredirchardev1,id=usbredirdev1,bus=ehci.0,debug=3 -k en-us -boot menu=on -qmp tcp:0:4444,server,nowait -global PIIX4_PM.disable_s3=0 -global PIIX4_PM.disable_s4=0 -drive file=/usr/share/virtio-win/virtio-win-1.5.4.vfd,if=none,id=drive-fdc0-0-0,format=raw -global isa-fdc.driveA=drive-fdc0-0-0 -drive file=/usr/share/virtio-win/virtio-win-1.5.4.iso,if=none,media=cdrom,format=raw,id=drive-ide1-0-1 -device ide-drive,drive=drive-ide1-0-1,id=ide1-0-1,bus=ide.0,unit=0 -vnc :1 -spice port=5931,disable-ticketing -vga qxl -global qxl-vga.vram_size=67108864 -monitor stdio -drive 
file=/home/my-data-disk.raw,if=none,id=drive-virtio-disk,format=raw,cache=none,aio=native,werror=stop,rerror=stop -device virtio-blk-pci,serial="QEMU-DISK2",bus=pci.0,addr=0x9,scsi=off,x-data-plane=on,drive=drive-virtio-disk,id=virtio-disk,serial="QEMU-DISK2"

Comment 10 Sibiao Luo 2013-01-16 12:44:19 UTC
This core dump can also be triggered when booting a Windows guest with 232 virtio-blk data disks with x-data-plane=on (multifunction=on).

host info:
kernel-2.6.32-355.el6.x86_64
qemu-kvm-0.12.1.2-2.352.el6.x86_64

# ulimit -n 409600
# ulimit -n
409600
qemu-kvm: /builddir/build/BUILD/qemu-kvm-0.12.1.2/hw/msix.c:644: msix_set_mask_notifier: Assertion `!dev->msix_mask_notifier' failed.
multifunction_with_data-plane.sh: line 234:  7514 Aborted                 (core dumped) 

(gdb) bt
#0  0x00007f26338228a5 in raise () from /lib64/libc.so.6
#1  0x00007f2633824085 in abort () from /lib64/libc.so.6
#2  0x00007f263381ba1e in __assert_fail_base () from /lib64/libc.so.6
#3  0x00007f263381bae0 in __assert_fail () from /lib64/libc.so.6
#4  0x00007f2635f261fa in msix_set_mask_notifier (dev=0x7f2636f87220, f=<value optimized out>) at /usr/src/debug/qemu-kvm-0.12.1.2/hw/msix.c:644
#5  0x00007f2635f15a17 in virtio_pci_set_guest_notifiers (opaque=0x7f2636f87220, assign=true) at /usr/src/debug/qemu-kvm-0.12.1.2/hw/virtio-pci.c:693
#6  0x00007f2635f25394 in virtio_blk_data_plane_start (s=0x7f26383e6690) at /usr/src/debug/qemu-kvm-0.12.1.2/hw/dataplane/virtio-blk.c:440
#7  0x00007f2635f12f8c in virtio_blk_handle_output (vdev=<value optimized out>, vq=<value optimized out>) at /usr/src/debug/qemu-kvm-0.12.1.2/hw/virtio-blk.c:436
#8  0x00007f2635f15670 in virtio_pci_set_host_notifier_internal (proxy=0x7f2636f87220, n=0, assign=<value optimized out>)
    at /usr/src/debug/qemu-kvm-0.12.1.2/hw/virtio-pci.c:224
#9  0x00007f2635f252c2 in virtio_blk_data_plane_stop (s=0x7f26383e6690) at /usr/src/debug/qemu-kvm-0.12.1.2/hw/dataplane/virtio-blk.c:496
#10 0x00007f2635f16467 in virtio_set_status (opaque=0x7f2636f87220, addr=<value optimized out>, val=4) at /usr/src/debug/qemu-kvm-0.12.1.2/hw/virtio.h:138
#11 virtio_ioport_write (opaque=0x7f2636f87220, addr=<value optimized out>, val=4) at /usr/src/debug/qemu-kvm-0.12.1.2/hw/virtio-pci.c:367
#12 0x00007f2635f227f7 in kvm_handle_io (env=0x7f2636dba010) at /usr/src/debug/qemu-kvm-0.12.1.2/kvm-all.c:144
#13 kvm_run (env=0x7f2636dba010) at /usr/src/debug/qemu-kvm-0.12.1.2/qemu-kvm.c:1048
#14 0x00007f2635f22a29 in kvm_cpu_exec (env=<value optimized out>) at /usr/src/debug/qemu-kvm-0.12.1.2/qemu-kvm.c:1743
#15 0x00007f2635f2390d in kvm_main_loop_cpu (_env=0x7f2636dba010) at /usr/src/debug/qemu-kvm-0.12.1.2/qemu-kvm.c:2004
#16 ap_main_loop (_env=0x7f2636dba010) at /usr/src/debug/qemu-kvm-0.12.1.2/qemu-kvm.c:2060
#17 0x00007f263584f851 in start_thread () from /lib64/libpthread.so.0
#18 0x00007f26338d890d in clone () from /lib64/libc.so.6
(gdb) q

Comment 14 Stefan Hajnoczi 2013-01-17 15:57:26 UTC
I was unable to reproduce the "qemu-kvm: Guest moved used index from 0 to 14263" error but I think I have a fix for the bug.

Please try the following BREW build.  It includes the Windows support patches, the coredump fix, and the viostor status bit fix.

https://brewweb.devel.redhat.com/taskinfo?taskID=5283664

Details on the root cause:

The viostor guest driver sets the virtio-pci status register differently from the Linux guest driver.  The virtio-blk-data-plane code was incorrectly stopping dataplane because of the unexpected bit pattern from the guest.

When the guest is resumed, the next virtio kick will restart dataplane but the guest and QEMU are no longer in sync.  The result would be the error message you posted.
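The root cause above suggests the dataplane stop decision should depend on the virtio DRIVER_OK status bit rather than on the exact bit pattern the Linux driver happens to write. A hedged sketch of that idea, with a hypothetical helper name (not the actual qemu-kvm patch):

```c
#include <stdbool.h>
#include <stdint.h>

/* Device status bit defined by the virtio specification: the guest
 * driver sets DRIVER_OK once it is ready to drive the device. */
#define VIRTIO_CONFIG_S_DRIVER_OK 4

/* Only stop dataplane when it is running and the guest clears
 * DRIVER_OK; any other status write (such as viostor's different bit
 * pattern) should leave dataplane alone. */
static bool should_stop_dataplane(uint8_t status, bool started)
{
    return started && !(status & VIRTIO_CONFIG_S_DRIVER_OK);
}
```

Under this rule, viostor's status writes that keep DRIVER_OK set no longer tear down dataplane, so the guest and QEMU stay in sync across stop/cont.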

Comment 15 Sibiao Luo 2013-01-18 02:26:56 UTC
(In reply to comment #14)
> I was unable to reproduce the "qemu-kvm: Guest moved used index from 0 to
> 14263" error but I think I have a fix for the bug.
> 
> Please try the following BREW build.  It includes the Windows support
> patches, the coredump fix, and the viostor status bit fix.
> 
> https://brewweb.devel.redhat.com/taskinfo?taskID=5283664
> 
OK, I will try it and update the result here. BTW, I don't know what the 'viostor status bit' problem is; which issue is it? Could you point it out for me? Thanks in advance.

Best Regards.
sluo

Comment 16 Stefan Hajnoczi 2013-01-18 08:17:38 UTC
(In reply to comment #15)
> (In reply to comment #14)
> > I was unable to reproduce the "qemu-kvm: Guest moved used index from 0 to
> > 14263" error but I think I have a fix for the bug.
> > 
> > Please try the following BREW build.  It includes the Windows support
> > patches, the coredump fix, and the viostor status bit fix.
> > 
> > https://brewweb.devel.redhat.com/taskinfo?taskID=5283664
> > 
> OK, i will try it and update the result here. btw, i donot know what's the
> 'viostor status bit' problem, which issue is it ? could you indicate it for
> me. Thanks in advance.

The viostor status bit problem is the root cause which I described in comment 14.

It can cause dataplane to stop while the guest is still using the virtio device.  This may lead to the "qemu-kvm: Guest moved used index from 0 to 14263" error message that you reported.

Stefan

Comment 17 Sibiao Luo 2013-01-18 09:48:30 UTC
(In reply to comment #14)
> I was unable to reproduce the "qemu-kvm: Guest moved used index from 0 to
> 14263" error but I think I have a fix for the bug.
> 
> Please try the following BREW build.  It includes the Windows support
> patches, the coredump fix, and the viostor status bit fix.
> 
> https://brewweb.devel.redhat.com/taskinfo?taskID=5283664
> 
> Details on the root cause:
> 
> The viostor guest driver sets the virtio-pci status register differently
> from the Linux guest driver.  The virtio-blk-data-plane code was incorrectly
> stopping dataplane because of the unexpected bit pattern from the guest.
> 
> When the guest is resumed, the next virtio kick will restart dataplane but
> the guest and QEMU are no longer in sync.  The result would be the error
> message you posted.

I tried this build, and the test results are as follows:
host info:
# uname -r && rpm -q qemu-kvm
2.6.32-355.el6.x86_64
qemu-kvm-0.12.1.2-2.351.el6.test.x86_64
guest info:
windows7 64bit

1. The issue of 'core dump when install windows guest with x-data-plane=on' is gone; the guest can be installed successfully.

2. The issue of comment #9 is gone; the HMP monitor no longer outputs 'qemu-kvm: Guest moved used index from 0 to 14263'.
(qemu) info status 
VM status: running
(qemu) stop
(qemu) info status 
VM status: paused
(qemu) cont
(qemu) info status 
VM status: running

3. The issue of comment #10 is gone; there is no core dump when booting a Windows guest with 232 virtio-blk data disks with x-data-plane=on (multifunction=on).
But the guest cannot boot up successfully; it just stays at 'Starting Windows', and I waited more than 30 minutes for it. I can also hit this on qemu-kvm-0.12.1.2-2.352.el6.x86_64. I think this is a qemu-kvm issue; should I open a new bug to track it?

Best Regards.
sluo

Comment 18 Sibiao Luo 2013-01-18 10:20:55 UTC
(In reply to comment #17)
> 3.the issue of comment #10 has gone, this is no any core dump when boot
> windows guest with 232 virtio-blk x-data-plane=on for data disk
> (multifunction=on). 
> But it can not boot up successfully, just stay at 'Starting Windows', it
> waste me more then 30 minutes to wait for it. I can hit it on
> qemu-kvm-0.12.1.2-2.352.el6.x86_64, I think this is a qemu-kvm issue, should
> i need to open a new bug to trace it ?
> 
qemu-kvm quits after a long time; it is the same issue as Bug 895316 - qemu quit with 'unable to map ioeventfd: -28' when enabling multifunction for data-plane using a large file descriptor ulimit.
# ulimit -n 409600
# ulimit -n
409600
# sh multifunction_with_data-plane_copy.sh
QEMU 0.12.1 monitor - type 'help' for more information
(qemu) main_channel_link: add main channel client
main_channel_handle_parsed: net test: latency 0.342000 ms, bitrate 20480000000 bps (19531.250000 Mbps)
inputs_connect: inputs channel client create
red_dispatcher_set_cursor_peer: 
qemu-kvm: virtio_pci_set_host_notifier_internal: unable to map ioeventfd: -28
virtio-blk failed to set host notifier
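The "-28" in the error above is a negated errno value: on Linux, 28 is ENOSPC ("No space left on device"), which KVM returns when it runs out of ioeventfd slots on the I/O bus. A tiny helper (hypothetical name, just for illustration) decodes such negative return values:

```c
#include <errno.h>
#include <string.h>

/* Decode a negative QEMU/KVM-style return value into its errno
 * message, e.g. -28 -> "No space left on device" (ENOSPC). */
static const char *qemu_err_msg(int ret)
{
    return strerror(-ret);
}
```

So "unable to map ioeventfd: -28" means ioeventfd registration failed with -ENOSPC, i.e. the per-VM ioeventfd limit was exceeded, not a file descriptor ulimit problem.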

Best Regards.
sluo

Comment 19 Stefan Hajnoczi 2013-01-18 15:45:17 UTC
(In reply to comment #18)
> (In reply to comment #17)
> > 3.the issue of comment #10 has gone, this is no any core dump when boot
> > windows guest with 232 virtio-blk x-data-plane=on for data disk
> > (multifunction=on). 
> > But it can not boot up successfully, just stay at 'Starting Windows', it
> > waste me more then 30 minutes to wait for it. I can hit it on
> > qemu-kvm-0.12.1.2-2.352.el6.x86_64, I think this is a qemu-kvm issue, should
> > i need to open a new bug to trace it ?
> > 
> the qemu-kvm will quit after a long time, it was the same issue to Bug
> 895316 - qemu quit with 'unable to map ioeventfd: -28' when enable
> multifunction for data-plane using a large file descriptor ulimit.
> # ulimit -n 409600
> # ulimit -n
> 409600
> # sh multifunction_with_data-plane_copy.sh
> QEMU 0.12.1 monitor - type 'help' for more information
> (qemu) main_channel_link: add main channel client
> main_channel_handle_parsed: net test: latency 0.342000 ms, bitrate
> 20480000000 bps (19531.250000 Mbps)
> inputs_connect: inputs channel client create
> red_dispatcher_set_cursor_peer: 
> qemu-kvm: virtio_pci_set_host_notifier_internal: unable to map ioeventfd: -28
> virtio-blk failed to set host notifier

This error message is bug 895316.

Does the boot complete quickly with x-data-plane=off for all disks?  I'm trying to figure out if the long delay is related to x-data-plane=on or if it's just the way Windows initializes so many disks at boot.

Comment 20 Sibiao Luo 2013-01-21 02:16:16 UTC
(In reply to comment #19)
> (In reply to comment #18)
> > (In reply to comment #17)
> > > 3.the issue of comment #10 has gone, this is no any core dump when boot
> > > windows guest with 232 virtio-blk x-data-plane=on for data disk
> > > (multifunction=on). 
> > > But it can not boot up successfully, just stay at 'Starting Windows', it
> > > waste me more then 30 minutes to wait for it. I can hit it on
> > > qemu-kvm-0.12.1.2-2.352.el6.x86_64, I think this is a qemu-kvm issue, should
> > > i need to open a new bug to trace it ?
> > > 
> > the qemu-kvm will quit after a long time, it was the same issue to Bug
> > 895316 - qemu quit with 'unable to map ioeventfd: -28' when enable
> > multifunction for data-plane using a large file descriptor ulimit.
> > # ulimit -n 409600
> > # ulimit -n
> > 409600
> > # sh multifunction_with_data-plane_copy.sh
> > QEMU 0.12.1 monitor - type 'help' for more information
> > (qemu) main_channel_link: add main channel client
> > main_channel_handle_parsed: net test: latency 0.342000 ms, bitrate
> > 20480000000 bps (19531.250000 Mbps)
> > inputs_connect: inputs channel client create
> > red_dispatcher_set_cursor_peer: 
> > qemu-kvm: virtio_pci_set_host_notifier_internal: unable to map ioeventfd: -28
> > virtio-blk failed to set host notifier
> 
> This error message is bug895316.
> 
> Does the boot complete quickly with x-data-plane=off for all disks?  I'm
> trying to figure out if the long delay is related to x-data-plane=on or if
> it's just the way the Windows initializes so many disks at boot.
I tried booting a Windows guest with 232 virtio-blk data disks with x-data-plane=off (multifunction=on). It boots up normally and successfully; the number of data disks reported in the HMP monitor and in the guest is 232, and they can be initialized and formatted correctly without any problem.

# sh multifunction_with_data-plane.sh 
QEMU 0.12.1 monitor - type 'help' for more information
(qemu) main_channel_link: add main channel client
main_channel_handle_parsed: net test: latency 0.229000 ms, bitrate 1541008276 bps (1469.620014 Mbps)
red_dispatcher_set_cursor_peer: 
inputs_connect: inputs channel client create

host info:
# uname -r && rpm -q qemu-kvm
2.6.32-355.el6.x86_64
qemu-kvm-0.12.1.2-2.351.el6.test.x86_64
guest info:
windows7 64bit

# cat /proc/`pidof qemu-kvm`/limits
Limit                     Soft Limit           Hard Limit           Units     
Max cpu time              unlimited            unlimited            seconds   
Max file size             unlimited            unlimited            bytes     
Max data size             unlimited            unlimited            bytes     
Max stack size            10485760             unlimited            bytes     
Max core file size        unlimited            unlimited            bytes     
Max resident set          unlimited            unlimited            bytes     
Max processes             256470               256470               processes 
Max open files            409600               409600               files     
Max locked memory         65536                65536                bytes     
Max address space         unlimited            unlimited            bytes     
Max file locks            unlimited            unlimited            locks     
Max pending signals       256470               256470               signals   
Max msgqueue size         819200               819200               bytes     
Max nice priority         0                    0                    
Max realtime priority     0                    0                    
Max realtime timeout      unlimited            unlimited            us 

Best Regards.
sluo

Comment 21 Ademar Reis 2013-01-22 03:34:37 UTC
Brew build from Stefan: https://brewweb.devel.redhat.com/taskinfo?taskID=5297279

Please test it while the patches are being reviewed.

Comment 22 Stefan Hajnoczi 2013-01-22 10:08:20 UTC
(In reply to comment #21)
> Brew build from Stefan:
> https://brewweb.devel.redhat.com/taskinfo?taskID=5297279
> 
> Please test it while the patches are being reviewed.

There should be no change from the https://brewweb.devel.redhat.com/taskinfo?taskID=5283664 build I previously posted.

The only difference in this new BREW build is that I added RHEL patch information to the commit descriptions - no code changes.

Comment 28 Sibiao Luo 2013-01-24 10:22:18 UTC
Verified this bug on qemu-kvm-0.12.1.2-2.355.el6.x86_64 with the three scenarios from comment #17.

I tried this build, and the test results are as follows:
host info:
# uname -r && rpm -q qemu-kvm
2.6.32-356.el6.x86_64
qemu-kvm-0.12.1.2-2.355.el6.x86_64
guest info:
windows8 64bit

- scenario 1
The issue of 'core dump when install windows guest with x-data-plane=on' is gone; the guest can be installed successfully.

- scenario 2
The issue of comment #9 is gone; the HMP monitor no longer outputs 'qemu-kvm: Guest moved used index from 0 to 14263'. It works well.
(qemu) info status 
VM status: running
(qemu) stop
(qemu) info status 
VM status: paused
(qemu) cont
(qemu) info status 
VM status: running

- scenario 3
The issue of comment #10 is gone; there is no core dump when booting a Windows guest with 232 virtio-blk data disks with x-data-plane=on (multifunction=on).
But it hits bug #895316 and bug #896326, so it cannot boot up successfully.

Based on the above, this issue is fixed correctly.

Best Regards.
sluo

Comment 30 errata-xmlrpc 2013-02-21 07:45:49 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2013-0527.html

