Bug 877836 - backport virtio-blk data-plane patches
Summary: backport virtio-blk data-plane patches
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: qemu-kvm
Version: 6.5
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: Stefan Hajnoczi
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Duplicates: 901493 (view as bug list)
Depends On:
Blocks: 824650 877838
 
Reported: 2012-11-19 02:00 UTC by Ademar Reis
Modified: 2014-05-28 17:09 UTC (History)
20 users (show)

Fixed In Version: qemu-kvm-0.12.1.2-2.350.el6
Doc Type: Bug Fix
Doc Text:
Clone Of:
: 877838 (view as bug list)
Environment:
Last Closed: 2013-02-21 07:44:44 UTC
Target Upstream Version:
Embargoed:


Attachments
log of failing to install windows guest. (222.80 KB, image/png)
2012-12-18 05:09 UTC, Sibiao Luo
no flags Details
screenshot of detail log for failing to install windows. (237.73 KB, image/png)
2012-12-18 05:10 UTC, Sibiao Luo
no flags Details
fail to initialize the data disk specified x-data-plane=on. (102.47 KB, image/png)
2012-12-18 07:23 UTC, Sibiao Luo
no flags Details
my qemu-kvm command line for multifunction=on to the data-plane. (48.99 KB, application/x-shellscript)
2012-12-18 09:32 UTC, Sibiao Luo
no flags Details
the log of windows 7 64bit guest fail to resume from S4 after hot-plug a data disk. (24.41 KB, image/png)
2012-12-18 11:30 UTC, Sibiao Luo
no flags Details
the log of rhel6.4 guest call trace when resume the guest from S4 operation after hot-plug a virtio_blk x-data-plane=on data disk. (46.25 KB, text/plain)
2012-12-20 03:33 UTC, Sibiao Luo
no flags Details
the log of call trace when do S3 with x-data-plane=on. (161.59 KB, text/plain)
2012-12-20 04:47 UTC, Sibiao Luo
no flags Details


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2013:0527 0 normal SHIPPED_LIVE qemu-kvm bug fix and enhancement update 2013-02-20 21:51:08 UTC

Description Ademar Reis 2012-11-19 02:00:01 UTC
We want a backport of the data-plane support on RHEL6, perhaps as a tech-preview feature in the first iteration.

From the latest patch submission from Stefan Hajnoczi:
(http://comments.gmane.org/gmane.comp.emulators.qemu/180530)

This series adds the -device virtio-blk-pci,x-data-plane=on property that
enables a high performance I/O codepath.  A dedicated thread is used to process
virtio-blk requests outside the global mutex and without going through the QEMU
block layer.

Khoa Huynh <khoa <at> us.ibm.com> reported an increase from 140,000 IOPS to 600,000
IOPS for a single VM using virtio-blk-data-plane in July:

  http://comments.gmane.org/gmane.comp.emulators.kvm.devel/94580

The virtio-blk-data-plane approach was originally presented at Linux Plumbers
Conference 2010.  The following slides contain a brief overview:

  http://linuxplumbersconf.org/2010/ocw/system/presentations/651/original/Optimizing_the_QEMU_Storage_Stack.pdf

The basic approach is:
1. Each virtio-blk device has a thread dedicated to handling ioeventfd
   signalling when the guest kicks the virtqueue.
2. Requests are processed without going through the QEMU block layer using
   Linux AIO directly.
3. Completion interrupts are injected via irqfd from the dedicated thread.

To try it out:

  qemu -drive if=none,id=drive0,cache=none,aio=native,format=raw,file=...
       -device virtio-blk-pci,drive=drive0,scsi=off,x-data-plane=on

Limitations:
 * Only format=raw is supported
 * Live migration is not supported

Comment 23 Sibiao Luo 2012-12-18 05:07:17 UTC
Hi Stefan,

  Installing a win7 64-bit guest with x-data-plane=on using the v3 RPMs fails with the prompt 'Setup was unable to create a new system partition or locate an existing system partition'. I will attach a screenshot of the log later.
Stefan, is this a WHQL driver bug? Should we ask Vadim for help?

BTW, 
1. If I use x-data-plane=off, the Windows guest installs successfully.
2. A rhel6.4 64-bit guest installs successfully with x-data-plane=on using the v3 RPMs.

host info:
kernel-2.6.32-348.el6.x86_64
qemu-kvm-0.12.1.2-2.345.el6.test.x86_64
guest info:
win7 64bit
virtio-win-1.5.4-1

My qemu-kvm command line:
# /usr/libexec/qemu-kvm -M rhel6.4.0 -cpu host -enable-kvm -m 2048 -smp 2,sockets=2,cores=1,threads=1 -usb -device usb-tablet,id=input0 -name data-plane -uuid 990ea161-6b67-47b2-b803-19fb01d30d30 -rtc base=localtime,clock=host,driftfix=slew -device virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0,addr=0x3 -chardev socket,id=channel1,path=/tmp/helloworld1,server,nowait -device virtserialport,chardev=channel1,name=com.redhat.rhevm.vdsm,bus=virtio-serial0.0,id=port1 -chardev socket,id=channel2,path=/tmp/helloworld2,server,nowait -device virtserialport,chardev=channel2,name=com.redhat.rhevm.vdsm,bus=virtio-serial0.0,id=port2 -drive file=/home/windows_7_ultimate_sp1_x64.raw,if=none,id=drive-virtio-disk,format=raw,cache=none,aio=native,werror=stop,rerror=stop -device virtio-blk-pci,bus=pci.0,addr=0x4,scsi=off,x-data-plane=on,drive=drive-virtio-disk,id=virtio-disk,bootindex=1 -netdev tap,id=hostnet0,vhost=on,script=/etc/qemu-ifup -device e1000,netdev=hostnet0,id=virtio-net-pci0,mac=BC:96:9D:05:51:EC,bus=pci.0,addr=0x5 -balloon none -spice port=5931,disable-ticketing -vga qxl -global qxl-vga.vram_size=67108864 -boot menu=on -monitor stdio -drive file=/home/en_windows_7_ultimate_with_sp1_x64_dvd_618240.iso,if=none,media=cdrom,format=raw,id=drive-ide0-0-0 -device ide-drive,drive=drive-ide0-0-0,id=ide0-0-0,bus=ide.0,unit=0,bootindex=0 -drive file=/usr/share/virtio-win/virtio-win-1.5.4.iso,if=none,media=cdrom,format=raw,id=drive-ide1-0-1 -device ide-drive,drive=drive-ide1-0-1,id=ide1-0-1,bus=ide.1,unit=1 -drive file=/usr/share/virtio-win/virtio-win-1.5.4.vfd,if=none,id=drive-fdc0-0-0,readonly=on,format=raw -global isa-fdc.driveA=drive-fdc0-0-0

Best Regards.
sluo

Comment 24 Sibiao Luo 2012-12-18 05:09:47 UTC
Created attachment 665295 [details]
log of failing to install windows guest.

Comment 25 Sibiao Luo 2012-12-18 05:10:44 UTC
Created attachment 665296 [details]
screenshot of detail log for failing to install windows.

Comment 26 Mike Cao 2012-12-18 05:17:10 UTC
(In reply to comment #23)
> Hi Stefan,
> 
>   Fail to install win7 64bit guest with the x-data-plane=on using the RPMs
> v3, with the prompt is 'Setup was unable to create a new system partition
> or locate an existing system partition', i will attach the log of
> screenshot later.
> Stefan, does it the WHQL driver bug, should we need to ask Vadim for help ?
> 
Can you test x-data-plane on a data image with a preinstalled Windows VM?

Comment 27 Sibiao Luo 2012-12-18 06:06:23 UTC
(In reply to comment #23)
> host info:
> kernel-2.6.32-348.el6.x86_64
> qemu-kvm-0.12.1.2-2.345.el6.test.x86_64
SeaBIOS: seabios-0.6.1.2-26.el6
> guest info:
> win7 64bit
> virtio-win-1.5.4-1
> 
(In reply to comment #26)
> >   Fail to install win7 64bit guest with the x-data-plane=on using the RPMs
> > v3, with the prompt is 'Setup was unable to create a new system partition
> > or locate an existing system partition', i will attach the log of
> > screenshot later.
> > Stefan, does it the WHQL driver bug, should we need to ask Vadim for help ?
> > 
> Can you test x-data-plane on a data image with a preinstalled Windows VM?
Yes, thanks for the reminder. I reran my test that way, specifying x-data-plane=on only for the data disk.

If x-data-plane=on is specified for the system disk, the guest fails to boot with the error 'A disk read error occurred'; the detailed log is pasted below:
eg:...-drive file=/home/windows_7_ultimate_sp1_x64.raw,if=none,id=drive-virtio-disk,format=raw,cache=none,aio=native,werror=stop,rerror=stop -device virtio-blk-pci,bus=pci.0,addr=0x4,scsi=off,x-data-plane=on,drive=drive-virtio-disk,id=virtio-disk,bootindex=1...

Google, Inc.
Serial Graphics Adapter 07/26/11
SGABIOS $Id: sgabios.S 8 2010-04-22 00:03:40Z nlaredo $ (mockbuild.redhat.com) Tue Jul 26 15:05:08 UTC 2011
4 0

SeaBIOS (version seabios-0.6.1.2-26.el6)

gPXE (http://etherboot.org) - 00:05.0 CB00 PCI2.10 PnP BBS PMM7FE0@10 CB00
Press Ctrl-B to configure gPXE (PCI 00:05.0)...main_channel_link: add main channel client
main_channel_handle_parsed: net test: latency 0.077000 ms, bitrate 9266968325 bps (8837.669683 Mbps)
inputs_connect: inputs channel client create
red_dispatcher_set_cursor_peer: 
                                                                               
Press F12 for boot menu.

Select boot device:

1. Virtio disk PCI:0:4           <---------select 1
2. Virtio disk PCI:0:6
3. Floppy [drive A]
4. DVD/CD [ata1-1: QEMU DVD-ROM ATAPI-4 DVD/CD]
5. gPXE (PCI 00:05.0)
6. Legacy option rom

Booting from Hard Disk...

A disk read error occurred
Press Ctrl+Alt+Del to restart

Comment 28 Sibiao Luo 2012-12-18 07:22:07 UTC
Hi Stefan,

  Initializing the data disk specified with x-data-plane=on via 'Device Manager' in the win7 64-bit guest using the v3 RPMs fails with the error 'The request could not be performed because of an I/O device error'. I will attach a screenshot of the log later.

e.g:...-drive file=/home/windows_7_ultimate_sp1_x64.raw,if=none,id=drive-virtio-disk,format=raw,cache=none,aio=native,werror=stop,rerror=stop -device virtio-blk-pci,bus=pci.0,addr=0x4,scsi=off,x-data-plane=off,drive=drive-virtio-disk,id=virtio-disk,bootindex=1...-drive file=/dev/vg-qz/sluo-data-disk,if=none,id=drive-data-disk,format=raw,cache=none,aio=native,werror=stop,rerror=stop -device virtio-blk-pci,bus=pci.0,addr=0x6,scsi=off,x-data-plane=on,drive=drive-data-disk,id=data-disk...

BTW, with x-data-plane=off the data disk can be initialized and formatted successfully.

host info:
kernel-2.6.32-348.el6.x86_64
qemu-kvm-0.12.1.2-2.345.el6.test.x86_64
guest info:
win7 64bit
virtio-win-prewhql-0.1-49 

Best Regards.
sluo

Comment 29 Sibiao Luo 2012-12-18 07:23:06 UTC
Created attachment 665340 [details]
fail to initialize the data disk specified x-data-plane=on.

Comment 30 Sibiao Luo 2012-12-18 09:29:59 UTC
Hi Stefan,

   Booting an IDE system disk plus 232 data disks (multifunction=on) specified with x-data-plane=on fails when the file descriptor rlimit is raised to 409600; qemu quits with:
qemu-kvm: virtio_pci_set_host_notifier_internal: unable to map ioeventfd: -28
virtio-blk failed to set host notifier

host info:
kernel-2.6.32-348.el6.x86_64
qemu-kvm-0.12.1.2-2.346.el6.test.x86_64 (V4)
guest info:
rhel6.4 64bit
kernel-2.6.32-348.el6.x86_64

# ulimit -n 409600
# ulimit -n
409600

# sh multifunction_with_data-plane.sh 
QEMU 0.12.1 monitor - type 'help' for more information
(qemu) main_channel_link: add main channel client
main_channel_handle_parsed: net test: latency 0.098000 ms, bitrate 20480000000 bps (19531.250000 Mbps)
red_dispatcher_set_cursor_peer: 
inputs_connect: inputs channel client create
qemu-kvm: virtio_pci_set_host_notifier_internal: unable to map ioeventfd: -28
virtio-blk failed to set host notifier

# cat /proc/`pidof qemu-kvm`/limits
Limit                     Soft Limit           Hard Limit           Units     
Max cpu time              unlimited            unlimited            seconds   
Max file size             unlimited            unlimited            bytes     
Max data size             unlimited            unlimited            bytes     
Max stack size            10485760             unlimited            bytes     
Max core file size        unlimited            unlimited            bytes     
Max resident set          unlimited            unlimited            bytes     
Max processes             62359                62359                processes 
Max open files            409600               409600               files     
Max locked memory         65536                65536                bytes     
Max address space         unlimited            unlimited            bytes     
Max file locks            unlimited            unlimited            locks     
Max pending signals       62359                62359                signals   
Max msgqueue size         819200               819200               bytes     
Max nice priority         0                    0                    
Max realtime priority     0                    0                    
Max realtime timeout      unlimited            unlimited            us

Comment 31 Sibiao Luo 2012-12-18 09:32:29 UTC
Created attachment 665383 [details]
my qemu-kvm command line for multifunction=on to the data-plane.

Comment 32 Sibiao Luo 2012-12-18 09:45:52 UTC
(In reply to comment #30)

>    Fail to boot ide system disk with 232 data disk(multifunction=on)
> specified x-data-plane=on if i specified the file descriptor rlimit(409600),
> the qemu will quit and prompt:
> qemu-kvm: virtio_pci_set_host_notifier_internal: unable to map ioeventfd: -28
> virtio-blk failed to set host notifier
> 
> host info:
> kernel-2.6.32-348.el6.x86_64
> qemu-kvm-0.12.1.2-2.346.el6.test.x86_64 (V4)
> guest info:
> rhel6.4 64bit
> kernel-2.6.32-348.el6.x86_64
> 
> # ulimit -n 409600
> # ulimit -n
> 409600
> 
Testing a Windows 7 guest in the same scenario, it fails to boot and stays at the 'Starting Windows' screen; I waited more than 15 minutes and it was still stuck there.

host info:
kernel-2.6.32-348.el6.x86_64
qemu-kvm-0.12.1.2-2.346.el6.test.x86_64 (V4)
guest info:
windows 7 guest
virtio-win-prewhql-0.1-49

# ulimit -n 409600
# ulimit -n
409600

# sh multifunction_with_data-plane.sh 
QEMU 0.12.1 monitor - type 'help' for more information
(qemu) main_channel_link: add main channel client
main_channel_handle_parsed: net test: invalid values, latency 0 roundtrip 1057. assuming highbandwidth
red_dispatcher_set_cursor_peer: 
inputs_connect: inputs channel client create

(qemu) info status 
VM status: running

# cat /proc/`pidof qemu-kvm`/limits
Limit                     Soft Limit           Hard Limit           Units     
Max cpu time              unlimited            unlimited            seconds   
Max file size             unlimited            unlimited            bytes     
Max data size             unlimited            unlimited            bytes     
Max stack size            10485760             unlimited            bytes     
Max core file size        unlimited            unlimited            bytes     
Max resident set          unlimited            unlimited            bytes     
Max processes             62359                62359                processes 
Max open files            409600               409600               files     
Max locked memory         65536                65536                bytes     
Max address space         unlimited            unlimited            bytes     
Max file locks            unlimited            unlimited            locks     
Max pending signals       62359                62359                signals   
Max msgqueue size         819200               819200               bytes     
Max nice priority         0                    0                    
Max realtime priority     0                    0                    
Max realtime timeout      unlimited            unlimited            us

Best Regards.
sluo

Comment 33 Sibiao Luo 2012-12-18 11:29:13 UTC
Hi Stefan,

   The Windows 7 64-bit guest fails to resume from S4 after hot-plugging a data disk, no matter whether x-data-plane=on or off is specified. The error is as follows:
--------------------------------------------------------------------------------
Your computer can't come out of hibernation.

    status: 0xc0000411

    Info: A fatal error occurred processing the restoration data.

    File: \hiberfil.sys

Any information that was not saved before the computer went into hibernation will be lost.
--------------------------------------------------------------------------------

host info:
kernel-2.6.32-348.el6.x86_64
qemu-kvm-0.12.1.2-2.346.el6.test.x86_64 (V4)
guest info:
windows 7 guest
virtio-win-prewhql-0.1-49

Steps:
1.boot a windows 7 64bit guest.
# /usr/libexec/qemu-kvm -M rhel6.4.0 -cpu host -enable-kvm -m 2048 -smp 2,sockets=2,cores=1,threads=1 -usb -device usb-tablet,id=input0 -name data-plane -uuid 990ea161-6b67-47b2-b803-19fb01d30d30 -rtc base=localtime,clock=host,driftfix=slew -device virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0,addr=0x3 -chardev socket,id=channel1,path=/tmp/helloworld1,server,nowait -device virtserialport,chardev=channel1,name=com.redhat.rhevm.vdsm,bus=virtio-serial0.0,id=port1 -chardev socket,id=channel2,path=/tmp/helloworld2,server,nowait -device virtserialport,chardev=channel2,name=com.redhat.rhevm.vdsm,bus=virtio-serial0.0,id=port2 -drive file=/home/windows_7_ultimate_sp1_x64.raw,if=none,id=drive-virtio-disk,format=raw,cache=none,aio=native,werror=stop,rerror=stop -device virtio-blk-pci,bus=pci.0,addr=0x4,scsi=off,x-data-plane=off,drive=drive-virtio-disk,id=virtio-disk,bootindex=1 -netdev tap,id=hostnet0,vhost=on,script=/etc/qemu-ifup -device e1000,netdev=hostnet0,id=virtio-net-pci0,mac=BC:96:9D:05:51:EC,bus=pci.0,addr=0x5 -balloon none -spice port=5931,disable-ticketing -vga qxl -global qxl-vga.vram_size=67108864 -global PIIX4_PM.disable_s3=0 -global PIIX4_PM.disable_s4=0 -boot menu=on -monitor stdio -drive file=/usr/share/virtio-win/virtio-win-1.5.4.iso,if=none,media=cdrom,format=raw,id=drive-ide1-0-1 -device ide-drive,drive=drive-ide1-0-1,id=ide1-0-1,bus=ide.1,unit=1 -drive file=/usr/share/virtio-win/virtio-win-1.5.4.vfd,if=none,id=drive-fdc0-0-0,readonly=on,format=raw -global isa-fdc.driveA=drive-fdc0-0-0
2.hot-plug a data disk with x-data-plane=on or off.
(qemu) __com.redhat_drive_add file=/home/my-data-disk.raw,id=drive-data-disk,format=raw,cache=none,aio=native,werror=stop,rerror=stop
(qemu) device_add virtio-blk-pci,bus=pci.0,addr=0x6,scsi=off,x-data-plane=on/off,drive=drive-data-disk,id=data-disk  <-------both 'on' and 'off' can hit it.
(qemu) info block
drive-virtio-disk: removable=0 io-status=ok file=/home/windows_7_ultimate_sp1_x64.raw ro=0 drv=raw encrypted=0
drive-ide1-0-1: removable=1 locked=0 tray-open=0 io-status=ok file=/usr/share/virtio-win/virtio-win-1.5.4.iso ro=1 drv=raw encrypted=0
drive-fdc0-0-0: removable=1 locked=0 tray-open=0 file=/usr/share/virtio-win/virtio-win-1.5.4.vfd ro=1 drv=raw encrypted=0
sd0: removable=1 locked=0 tray-open=0 [not inserted]
drive-data-disk: removable=0 io-status=ok file=/home/my-data-disk.raw ro=0 drv=raw encrypted=0
3.do S4 via 'Start-->hibernate'.
4.resume the VM appending the data disk commands.
<the same as step 1> -drive file=/home/my-data-disk.raw,if=none,id=drive-data-disk,format=raw,cache=none,aio=native,werror=stop,rerror=stop -device virtio-blk-pci,bus=pci.0,addr=0x6,scsi=off,x-data-plane=on,drive=drive-data-disk,id=data-disk

Results:
The Windows 7 64-bit guest fails to resume from S4 after hot-plugging a data disk; both x-data-plane=on and x-data-plane=off hit this issue. I will attach screenshots of the logs.

BTW, 
1. Testing a rhel6.4 64-bit guest in the same scenario, the rhel6.4 guest has no such issue.
2. Without any hot-plug (x-data-plane=on/off specified directly on the command line), the Windows guest resumes from S4 successfully.

Comment 34 Sibiao Luo 2012-12-18 11:30:55 UTC
Created attachment 665455 [details]
the log of windows 7 64bit guest fail to resume from S4 after hot-plug a data disk.

Comment 35 Stefan Hajnoczi 2012-12-18 12:05:47 UTC
(In reply to comment #23)
>   Fail to install win7 64bit guest with the x-data-plane=on using the RPMs
> v3, with the prompt is 'Setup was unable to create a new system partition
> or locate an existing system partition', i will attach the log of
> screenshot later.
> Stefan, does it the WHQL driver bug, should we need to ask Vadim for help ?

It fails because the guest drivers submit unaligned buffers and virtio-blk-data-plane does not support this yet.

I will send a separate fix to get Windows guests working.  That way we can support Linux guests in RHEL6.4 Snapshot 2 and get additional testing from partners (e.g. I know IBM has been asking for virtio-blk-data-plane in a snapshot and they will test it).

Stefan

Comment 36 Stefan Hajnoczi 2012-12-18 12:07:41 UTC
(In reply to comment #33)
>    The windows 7 64bit guest fail to resume from S4 after hot-plug a data
> disk which no matter specified x-data-plane=on or off, the error as
> following:
> -----------------------------------------------------------------------------
> ---
> Your computer can't come out of hibernation.
> 
>     status: 0xc0000411
> 
>     Info: A fatal error occurred processing the restoration data.
> 
>     File: \hiberfil.sys
> 
> Any information that was not saved before the computer went into hibernation
> will be lost.
> -----------------------------------------------------------------------------
> ---

Should this be a separate bugzilla since it's not related to x-data-plane=on|off?

Comment 37 Stefan Hajnoczi 2012-12-18 12:40:25 UTC
(In reply to comment #30)
>    Fail to boot ide system disk with 232 data disk(multifunction=on)
> specified x-data-plane=on if i specified the file descriptor rlimit(409600),
> the qemu will quit and prompt:
> qemu-kvm: virtio_pci_set_host_notifier_internal: unable to map ioeventfd: -28
> virtio-blk failed to set host notifier

There is a hardcoded maximum number of ioeventfds in the kvm.ko kernel module.  -28 (ENOSPC) means you tried to create so many devices that the kvm.ko kernel module cannot assign more ioeventfds.

The same thing happens if you try to use too many vhost-net network interfaces.

I will investigate if there's a way we can avoid hitting the limit with 232 disks but there will always be a hard limit unless we change the way iobus devices inside kvm.ko are implemented.
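For reference, the -28 in the error output is a negative errno value; a quick lookup (generic Python, unrelated to QEMU itself) confirms the ENOSPC reading above:

```python
import errno
import os

# virtio_pci_set_host_notifier_internal returned -28, i.e. -ENOSPC:
# kvm.ko has no free ioeventfd slots left for more devices.
code = 28
print(errno.errorcode[code])  # ENOSPC
print(os.strerror(code))      # e.g. "No space left on device"
```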

Stefan

Comment 38 Sibiao Luo 2012-12-19 02:37:12 UTC
(In reply to comment #36)
> (In reply to comment #33)
> >    The windows 7 64bit guest fail to resume from S4 after hot-plug a data
> > disk which no matter specified x-data-plane=on or off, the error as
> > following:
> > -----------------------------------------------------------------------------
> > ---
> > Your computer can't come out of hibernation.
> > 
> >     status: 0xc0000411
> > 
> >     Info: A fatal error occurred processing the restoration data.
> > 
> >     File: \hiberfil.sys
> > 
> > Any information that was not saved before the computer went into hibernation
> > will be lost.
> > -----------------------------------------------------------------------------
> > ---
> 
> Should this be a separate bugzilla since it's not related to
> x-data-plane=on|off?
Yes, this is an existing bug: bug 811841 for virtio-win.

Comment 39 Mike Cao 2012-12-19 02:46:27 UTC
(In reply to comment #38)
> (In reply to comment #36)
> > (In reply to comment #33)
> > >    The windows 7 64bit guest fail to resume from S4 after hot-plug a data
> > > disk which no matter specified x-data-plane=on or off, the error as
> > > following:
> > > -----------------------------------------------------------------------------
> > > ---
> > > Your computer can't come out of hibernation.
> > > 
> > >     status: 0xc0000411
> > > 
> > >     Info: A fatal error occurred processing the restoration data.
> > > 
> > >     File: \hiberfil.sys
> > > 
> > > Any information that was not saved before the computer went into hibernation
> > > will be lost.
> > > -----------------------------------------------------------------------------
> > > ---
> > 
> > Should this be a separate bugzilla since it's not related to
> > x-data-plane=on|off?
> Yes, this is an existing bug: bug 811841 for virtio-win.
Have you tried hotplug/unplug during the test? If not, I don't think they are the same bug.

Comment 40 Sibiao Luo 2012-12-19 03:44:19 UTC
(In reply to comment #39)
> (In reply to comment #38)
> > (In reply to comment #36)
> > > (In reply to comment #33)
> > > >    The windows 7 64bit guest fail to resume from S4 after hot-plug a data
> > > > disk which no matter specified x-data-plane=on or off, the error as
> > > > following:
> > > > -----------------------------------------------------------------------------
> > > > ---
> > > > Your computer can't come out of hibernation.
> > > > 
> > > >     status: 0xc0000411
> > > > 
> > > >     Info: A fatal error occurred processing the restoration data.
> > > > 
> > > >     File: \hiberfil.sys
> > > > 
> > > > Any information that was not saved before the computer went into hibernation
> > > > will be lost.
> > > > -----------------------------------------------------------------------------
> > > > ---
> > > 
> > > Should this be a separate bugzilla since it's not related to
> > > x-data-plane=on|off?
> > Yes, this is a existing bug 811841 for virtio-win.
> Have you tried hotplug/unplug during the test ? If not ,I don't think they
> are the same bug
Sure. Refer to comment #33: I also retested without x-data-plane and hit it as well, just the same as bug 811841.

Comment 41 Sibiao Luo 2012-12-19 06:22:24 UTC
Hi Stefan,

  Should x-data-plane=on be supported in QMP commands? I have tried it, but hot-plugging the virtio_blk data disk with x-data-plane=on in the QMP monitor fails. BTW, with x-data-plane=off it succeeds.
e.g:
->{"execute":"qmp_capabilities"}
<-{"return": {}}

->{"execute":"__com.redhat_drive_add","arguments": {"file":"/home/my-data-disk.raw","format":"raw","id":"drive-data-disk","cache":"none","werror":"stop","rerror":"stop"}}
<-{"return": {}}

->{"execute":"device_add","arguments":{"driver":"virtio-blk-pci","drive":"drive-data-disk","id":"data-disk","bus":"pci.0","scsi":"off","x-data-plane":"on"}}
{"error": {"class": "DeviceInitFailed", "desc": "Device 'virtio-blk-pci' could not be initialized", "data": {"device": "virtio-blk-pci"}}}

But if I specify x-data-plane=off, the QMP commands succeed:
->{"execute":"__com.redhat_drive_add","arguments": {"file":"/home/my-data-disk.raw","format":"raw","id":"drive-data-disk","cache":"none","werror":"stop","rerror":"stop"}}
<-{"return": {}}

->{"execute":"device_add","arguments":{"driver":"virtio-blk-pci","drive":"drive-data-disk","id":"data-disk","bus":"pci.0","scsi":"off","x-data-plane":"off"}}
<-{"return": {}}

->{"execute":"query-block"} 
<-{"return": [...{"io-status": "ok", "device": "drive-data-disk", "locked": false, "removable": false, "inserted": {"ro": false, "drv": "raw", "encrypted": false, "file": "/home/my-data-disk.raw"}, "type": "unknown"}]}

Best Regards.
sluo

Comment 42 Sibiao Luo 2012-12-19 06:55:56 UTC
(In reply to comment #41)
> ->{"execute":"__com.redhat_drive_add","arguments":
> {"file":"/home/my-data-disk.raw","format":"raw","id":"drive-data-disk",
> "cache":"none","werror":"stop","rerror":"stop"}}
> <-{"return": {}}
> 
> ->{"execute":"device_add","arguments":{"driver":"virtio-blk-pci","drive":
> "drive-data-disk","id":"data-disk","bus":"pci.0","scsi":"off","x-data-plane":
> "on"}}
> {"error": {"class": "DeviceInitFailed", "desc": "Device 'virtio-blk-pci'
> could not be initialized", "data": {"device": "virtio-blk-pci"}}}
> 
->{"execute":"device_add","arguments":{"driver":"virtio-blk-pci","drive":"drive-data-disk","id":"data-disk","bus":"pci.0","scsi":"off","x-data-plane":"on"}}
{"error": {"class": "DeviceInitFailed", "desc": "Device 'virtio-blk-pci' could not be initialized", "data": {"device": "virtio-blk-pci"}}}
->{"execute":"query-kvm"}
<-{"return": {"enabled": true, "present": true}}

I have tried many times and all of them fail. Maybe x-data-plane=on is buggy with QMP commands. Stefan, any idea about it?

Best Regards.
sluo

Comment 43 Sibiao Luo 2012-12-19 07:02:52 UTC
Hi Stefan,
 
   Why must 'scsi' be 'off' when using x-data-plane=on with virtio_blk_pci? Is this a restriction of x-data-plane? BTW, the guest boots up successfully with scsi=on when x-data-plane=off.

e.g:...-drive file=/home/my-data-disk.raw,if=none,id=drive-data-disk,format=raw,cache=none,aio=native,werror=stop,rerror=stop -device virtio-blk-pci,bus=pci.0,addr=0x6,scsi=on,x-data-plane=on,drive=drive-data-disk,id=data-disk
qemu-kvm: -device virtio-blk-pci,bus=pci.0,addr=0x6,scsi=on,x-data-plane=on,drive=drive-data-disk,id=data-disk: device is incompatible with x-data-plane, use scsi=off
qemu-kvm: -device virtio-blk-pci,bus=pci.0,addr=0x6,scsi=on,x-data-plane=on,drive=drive-data-disk,id=data-disk: Device 'virtio-blk-pci' could not be initialized

Best Regards.
sluo

Comment 44 Stefan Hajnoczi 2012-12-19 09:32:00 UTC
(In reply to comment #41)
>   Should the x-data-plane=on need to support in QMP commands ?

Yes.

> I have tried
> it, but fail to hot-plug the virtio_blk data disk with x-data-plane=on in
> QMP monitor. btw, if use x-data-plane=off, it can successfully.
> e.g:
> ->{"execute":"qmp_capabilities"}
> <-{"return": {}}
> 
> ->{"execute":"__com.redhat_drive_add","arguments":
> {"file":"/home/my-data-disk.raw","format":"raw","id":"drive-data-disk",
> "cache":"none","werror":"stop","rerror":"stop"}}
> <-{"return": {}}

You did not specify "aio": "native".  Therefore this -drive is not compatible with x-data-plane=on.

Stefan
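Putting this together with comment 41, the failing hot-plug differs from a working one only in the drive's missing aio key. A small sketch (it merely builds the QMP JSON with Python's json module; the RHEL-specific __com.redhat_drive_add command name is taken from the comments above, and no live QMP socket is involved):

```python
import json

# Drive arguments from comment 41, which made the later device_add with
# x-data-plane=on fail with DeviceInitFailed:
drive_args = {
    "file": "/home/my-data-disk.raw", "format": "raw",
    "id": "drive-data-disk", "cache": "none",
    "werror": "stop", "rerror": "stop",
}

# x-data-plane=on requires the backing drive to use Linux AIO, so the
# fix is to add "aio": "native" before issuing the drive_add command:
drive_args["aio"] = "native"

msg = json.dumps({"execute": "__com.redhat_drive_add",
                  "arguments": drive_args})
print('"aio": "native"' in msg)  # True
```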

Comment 45 Stefan Hajnoczi 2012-12-19 09:37:25 UTC
(In reply to comment #43)
>    Why the 'scsi' must be 'off' when use x-data-plane=on to the
> virtio_blk_pci ? Does this the restriction of x-data-plane ? btw, it could
> boot up with scsi=on for x-data-plane=off to the virtio_blk_pci successfully.

virtio-blk-data-plane does not implement the VIRTIO_BLK_T_SCSI command yet.  Therefore we require scsi=off so the user expects SCSI requests to fail.

The check for scsi=off could be removed but then existing VMs that used SCSI with x-data-plane=off would start getting errors with x-data-plane=on.  The user would not expect this so it's clearer to demand scsi=off.

Stefan

Comment 46 Sibiao Luo 2012-12-19 09:51:13 UTC
(In reply to comment #44)
> (In reply to comment #41)
> >   Should the x-data-plane=on need to support in QMP commands ?
> 
> Yes.
> 
> > I have tried
> > it, but fail to hot-plug the virtio_blk data disk with x-data-plane=on in
> > QMP monitor. btw, if use x-data-plane=off, it can successfully.
> > e.g:
> > ->{"execute":"qmp_capabilities"}
> > <-{"return": {}}
> > 
> > ->{"execute":"__com.redhat_drive_add","arguments":
> > {"file":"/home/my-data-disk.raw","format":"raw","id":"drive-data-disk",
> > "cache":"none","werror":"stop","rerror":"stop"}}
> > <-{"return": {}}
> 
> You did not specify "aio": "native".  Therefore this -drive is not
> compatible with x-data-plane=on.
> 
Hmm, I forgot it. Thanks for the reminder, that makes sense. I checked it as follows.

->{"execute":"__com.redhat_drive_add","arguments": {"file":"/home/my-data-disk.raw","format":"raw","id":"drive-data-disk","cache":"none","aio":"native","werror":"stop","rerror":"stop"}}
<-{"return": {}}

->{"execute":"device_add","arguments":{"driver":"virtio-blk-pci","drive":"drive-data-disk","id":"data-disk","bus":"pci.0","addr":"0x6","scsi":"off","x-data-plane":"on"}}
<-{"return": {}}

->{"execute":"query-block"}
<-{"return": [...{"io-status": "ok", "device": "drive-data-disk", "locked": false, "removable": false, "inserted": {"ro": false, "drv": "raw", "encrypted": false, "file": "/home/my-data-disk.raw"}, "type": "unknown"}]}

Best Regards.
sluo

Comment 47 Sibiao Luo 2012-12-20 03:30:32 UTC
Hi Stefan,

   The RHEL 6.4 guest hits a call trace when resuming from S4 after hot-plugging a virtio_blk data disk with x-data-plane=on.
By the way, if I run the same scenario without x-data-plane=on, the guest shows no call trace and resumes successfully.

host info:
# uname -r && rpm -q qemu-kvm
2.6.32-348.el6.x86_64
qemu-kvm-0.12.1.2-2.346.el6.test.x86_64 (V4)
guest info:
# uname -r
2.6.32-348.el6.x86_64

Steps:
1.boot a rhel6.4 guest.
2.hot-plug an x-data-plane=on virtio_blk data disk.
(qemu) __com.redhat_drive_add file=/home/my-data-disk.raw,serial="QEMU-DISK2",id=drive-data-disk,format=raw,cache=none,aio=native,werror=stop,rerror=stop
(qemu) device_add virtio-blk-pci,bus=pci.0,addr=0x6,scsi=off,x-data-plane=on,drive=drive-data-disk,id=data-disk
3.do S4.
# pm-hibernate
4.resume the guest.
<the same as step 1 cli>-drive file=/home/my-data-disk.raw,if=none,id=drive-data-disk,serial="QEMU-DISK2",format=raw,cache=none,aio=native,werror=stop,rerror=stop -device virtio-blk-pci,bus=pci.0,addr=0x6,scsi=off,x-data-plane=on,drive=drive-data-disk,id=data-disk

Results:
after step 2, the guest kernel prints 'vdb: unknown partition table', which is expected.
after step 4, the guest hits a call trace; I will attach the detailed log later.
...
Restarting tasks ... done.
nm-dispatcher.a[2633]: segfault at 0 ip (null) sp 00007fffa8859408 error 14 in nm-dispatcher.action[400000+5000]
BUG: Bad page map in process nm-dispatcher.a  pte:16b9bd065 pmd:37389067
addr:00000030ce42d000 vm_flags:08100073 anon_vma:ffff8800370696f0 mapping:ffff88007c8099e0 index:2d
vma->vm_ops->fault: filemap_fault+0x0/0x500
vma->vm_file->f_op->mmap: ext4_file_mmap+0x0/0x60 [ext4]
Pid: 2633, comm: nm-dispatcher.a Not tainted 2.6.32-348.el6.x86_64 #1
Call Trace:
 [<ffffffff8113ec68>] ? print_bad_pte+0x1d8/0x290
 [<ffffffff8113ed8b>] ? vm_normal_page+0x6b/0x70
 [<ffffffff8113f0fc>] ? follow_page+0x2cc/0x470
 [<ffffffff81144430>] ? __get_user_pages+0x110/0x430
 [<ffffffff8114478c>] ? get_dump_page+0x3c/0x50
 [<ffffffff811db5ea>] ? elf_core_dump+0xe2a/0xfe0
 [<ffffffff81055a43>] ? __wake_up+0x53/0x70
 [<ffffffff8108f6ab>] ? call_usermodehelper_exec+0xab/0x120
 [<ffffffff811876b4>] ? do_coredump+0x814/0xc00
 [<ffffffff81084f4d>] ? __sigqueue_free+0x3d/0x50
 [<ffffffff81088d4d>] ? get_signal_to_deliver+0x1ed/0x460
 [<ffffffff8100a265>] ? do_signal+0x75/0x800
 [<ffffffff8150c86f>] ? printk+0x41/0x4a
 [<ffffffff8121b766>] ? security_file_permission+0x16/0x20
 [<ffffffff8100aa80>] ? do_notify_resume+0x90/0xc0
 [<ffffffff8100badc>] ? retint_signal+0x48/0x8c
Disabling lock debugging due to kernel taint
swap_free: Bad swap offset entry 00800000
...

Best Regards.
sluo

Comment 48 Sibiao Luo 2012-12-20 03:33:07 UTC
Created attachment 666505 [details]
the log of the RHEL 6.4 guest call trace when resuming the guest from S4 after hot-plugging a virtio_blk x-data-plane=on data disk.

Comment 49 Sibiao Luo 2012-12-20 03:52:26 UTC
(In reply to comment #47)
> Hi Stefan,
> 
>    The rhel6.4 guest call trace when resume the guest from S4 operation
> after hot-plug a virtio_blk x-data-plane=on data disk.
> Btw, if i do the same scenario without x-data-plane=on, the guest have no
> any call trace, it can resume successfully.
> 
> host info:
> # uname -r && rpm -q qemu-kvm
> 2.6.32-348.el6.x86_64
> qemu-kvm-0.12.1.2-2.346.el6.test.x86_64 (V4)
> guest info:
> # uname -r
> 2.6.32-348.el6.x86_64
How reproducible:
almost every time

My full qemu-kvm command line:
# /usr/libexec/qemu-kvm -M rhel6.4.0 -cpu host -enable-kvm -m 2048 \
    -smp 2,sockets=2,cores=1,threads=1 -usb -device usb-tablet,id=input0 \
    -name data-plane -uuid 990ea161-6b67-47b2-b803-19fb01d30d30 \
    -rtc base=localtime,clock=host,driftfix=slew \
    -device virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0,addr=0x3 \
    -chardev socket,id=channel1,path=/tmp/helloworld1,server,nowait \
    -device virtserialport,chardev=channel1,name=com.redhat.rhevm.vdsm,bus=virtio-serial0.0,id=port1 \
    -chardev socket,id=channel2,path=/tmp/helloworld2,server,nowait \
    -device virtserialport,chardev=channel2,name=com.redhat.rhevm.vdsm,bus=virtio-serial0.0,id=port2 \
    -drive file=/home/RHEL6.4-20121212.1-Server-x86_64.raw,serial="QEMU-DISK1",if=none,id=drive-virtio-disk,format=raw,cache=none,aio=native,werror=stop,rerror=stop \
    -device virtio-blk-pci,bus=pci.0,addr=0x4,scsi=off,x-data-plane=on,drive=drive-virtio-disk,id=virtio-disk,bootindex=1 \
    -netdev tap,id=hostnet0,vhost=on,script=/etc/qemu-ifup \
    -device e1000,netdev=hostnet0,id=virtio-net-pci0,mac=BC:96:9D:05:51:EC,bus=pci.0,addr=0x5 \
    -balloon none -spice port=5931,disable-ticketing -vga qxl \
    -global qxl-vga.vram_size=67108864 \
    -global PIIX4_PM.disable_s3=0 -global PIIX4_PM.disable_s4=0 \
    -boot menu=on -qmp tcp:0:4444,server,nowait \
    -serial unix:/tmp/ttyS0,server,nowait -monitor stdio \
    -drive file=/usr/share/virtio-win/virtio-win-1.5.4.iso,if=none,media=cdrom,format=raw,id=drive-ide1-0-1 \
    -device ide-drive,drive=drive-ide1-0-1,id=ide1-0-1,bus=ide.1,unit=1 \
    -drive file=/usr/share/virtio-win/virtio-win-1.5.4.vfd,if=none,id=drive-fdc0-0-0,readonly=on,format=raw \
    -global isa-fdc.driveA=drive-fdc0-0-0 \
    -drive file=/home/my-data-disk.raw,if=none,id=drive-data-disk,serial="QEMU-DISK2",format=raw,cache=none,aio=native,werror=stop,rerror=stop \
    -device virtio-blk-pci,bus=pci.0,addr=0x6,scsi=off,x-data-plane=on,drive=drive-data-disk,id=data-disk

Comment 50 Sibiao Luo 2012-12-20 04:46:24 UTC
Hi Stefan,

   The RHEL 6.4 guest hits a call trace when resuming from S3 with x-data-plane=on on the virtio_blk system disk.
By the way, if I run the same scenario without x-data-plane=on, the guest shows no call trace and resumes successfully.

host info:
# uname -r && rpm -q qemu-kvm
2.6.32-348.el6.x86_64
qemu-kvm-0.12.1.2-2.346.el6.test.x86_64 (V4)
guest info:
# uname -r
2.6.32-348.el6.x86_64

How reproducible:
always

Steps:
1.boot a rhel6.4 guest with x-data-plane=on on the virtio_blk system disk.
# /usr/libexec/qemu-kvm -M rhel6.4.0 -cpu host -enable-kvm -m 2048 \
    -smp 2,sockets=2,cores=1,threads=1 -usb -device usb-tablet,id=input0 \
    -name data-plane -uuid 990ea161-6b67-47b2-b803-19fb01d30d30 \
    -rtc base=localtime,clock=host,driftfix=slew \
    -device virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0,addr=0x3 \
    -chardev socket,id=channel1,path=/tmp/helloworld1,server,nowait \
    -device virtserialport,chardev=channel1,name=com.redhat.rhevm.vdsm,bus=virtio-serial0.0,id=port1 \
    -chardev socket,id=channel2,path=/tmp/helloworld2,server,nowait \
    -device virtserialport,chardev=channel2,name=com.redhat.rhevm.vdsm,bus=virtio-serial0.0,id=port2 \
    -drive file=/home/RHEL6.4-20121212.1-Server-x86_64.raw,serial="QEMU-DISK1",if=none,id=drive-virtio-disk,format=raw,cache=none,aio=native,werror=stop,rerror=stop \
    -device virtio-blk-pci,bus=pci.0,addr=0x4,scsi=off,x-data-plane=on,drive=drive-virtio-disk,id=virtio-disk,bootindex=1 \
    -netdev tap,id=hostnet0,vhost=on,script=/etc/qemu-ifup \
    -device e1000,netdev=hostnet0,id=virtio-net-pci0,mac=BC:96:9D:05:51:EC,bus=pci.0,addr=0x5 \
    -balloon none -spice port=5931,disable-ticketing -vga qxl \
    -global qxl-vga.vram_size=67108864 \
    -global PIIX4_PM.disable_s3=0 -global PIIX4_PM.disable_s4=0 \
    -boot menu=on -qmp tcp:0:4444,server,nowait \
    -serial unix:/tmp/ttyS0,server,nowait -monitor stdio \
    -drive file=/usr/share/virtio-win/virtio-win-1.5.4.iso,if=none,media=cdrom,format=raw,id=drive-ide1-0-1 \
    -device ide-drive,drive=drive-ide1-0-1,id=ide1-0-1,bus=ide.1,unit=1 \
    -drive file=/usr/share/virtio-win/virtio-win-1.5.4.vfd,if=none,id=drive-fdc0-0-0,readonly=on,format=raw \
    -global isa-fdc.driveA=drive-fdc0-0-0
2.do S3.
# pm-suspend
3.resume the guest.

Results:
1.after step 3, the guest hits a call trace; I will attach the detailed log later.
...
ADDRCONF(NETDEV_UP): eth1: link is not ready
e1000: eth1 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX
ADDRCONF(NETDEV_CHANGE): eth1: link becomes ready
nm-applet invoked oom-killer: gfp_mask=0x0, order=0, oom_adj=0, oom_score_adj=0
nm-applet cpuset=/ mems_allowed=0
Pid: 2306, comm: nm-applet Not tainted 2.6.32-348.el6.x86_64 #1
Call Trace:
 [<ffffffff810cb511>] ? cpuset_print_task_mems_allowed+0x91/0xb0
 [<ffffffff8111cc40>] ? dump_header+0x90/0x1b0
 [<ffffffff8121cd2c>] ? security_real_capable_noaudit+0x3c/0x70
 [<ffffffff8111d0c2>] ? oom_kill_process+0x82/0x2a0
 [<ffffffff8111cfbe>] ? select_bad_process+0x9e/0x120
 [<ffffffff8111d500>] ? out_of_memory+0x220/0x3c0
 [<ffffffff8111d765>] ? pagefault_out_of_memory+0xc5/0x110
 [<ffffffff810470d2>] ? mm_fault_error+0xb2/0x1a0
 [<ffffffff810476cb>] ? __do_page_fault+0x33b/0x480
 [<ffffffff81186544>] ? cp_new_stat+0xe4/0x100
 [<ffffffff8103c7b8>] ? pvclock_clocksource_read+0x58/0xd0
 [<ffffffff8103b8ac>] ? kvm_clock_read+0x1c/0x20
 [<ffffffff8103b8b9>] ? kvm_clock_get_cycles+0x9/0x10
 [<ffffffff810a1340>] ? getnstimeofday+0x60/0xf0
 [<ffffffff815128be>] ? do_page_fault+0x3e/0xa0
 [<ffffffff8150fc75>] ? page_fault+0x25/0x30
Mem-Info:
Node 0 DMA per-cpu:
CPU    0: hi:    0, btch:   1 usd:   0
CPU    1: hi:    0, btch:   1 usd:   0
Node 0 DMA32 per-cpu:
CPU    0: hi:  186, btch:  31 usd: 141
CPU    1: hi:  186, btch:  31 usd: 175
...

Comment 51 Sibiao Luo 2012-12-20 04:47:07 UTC
Created attachment 666515 [details]
the log of the call trace when doing S3 with x-data-plane=on.

Comment 52 Sibiao Luo 2012-12-20 05:04:45 UTC
(In reply to comment #47)
> Hi Stefan,
> 
>    The rhel6.4 guest call trace when resume the guest from S4 operation
> after hot-plug a virtio_blk x-data-plane=on data disk.
> Btw, if i do the same scenario without x-data-plane=on, the guest have no
> any call trace, it can resume successfully.

> Steps:
> 1.boot a rhel6.4 guest.
> 2.hot plug a x-data-plane=on virtio_blk data disk.
> (qemu) __com.redhat_drive_add
> file=/home/my-data-disk.raw,serial="QEMU-DISK2",id=drive-data-disk,
> format=raw,cache=none,aio=native,werror=stop,rerror=stop
> (qemu) device_add
> virtio-blk-pci,bus=pci.0,addr=0x6,scsi=off,x-data-plane=on,drive=drive-data-
> disk,id=data-disk
> 3.do S4.
> # pm_hibernate
> 4.resume the guest.
> <the same as step 1 cli>-drive
> file=/home/my-data-disk.raw,if=none,id=drive-data-disk,serial="QEMU-DISK2",
> format=raw,cache=none,aio=native,werror=stop,rerror=stop -device
> virtio-blk-pci,bus=pci.0,addr=0x6,scsi=off,x-data-plane=on,drive=drive-data-
> disk,id=data-disk

No need to hot-plug: just do S4 with x-data-plane=on on the virtio_blk system disk, and the guest hits a call trace when resumed.

Comment 57 Sibiao Luo 2013-01-15 08:06:31 UTC
Hi stefanha,

   I have separated the unfixed issues and the new issue into the following bugs:
- rhel:
Bug 895316 - Fail to boot ide system disk with 232 virtio-blk x-data-plane=on data disk (multifunction=on)
Bug 895388 - guest call trace when resume it from S3 with x-data-plane=on to the virtio_blk system disk
Bug 895387 - guest call trace when resume it from S4 after hot-plug a virtio_blk x-data-plane=on disk
- windows:
Bug 895392 - fail to initialize the data disk specified x-data-plane=on via 'Device Manager' in win7 64bit guest
Bug 895399 - Fail to boot win7 guest with x-data-plane=on for the system disk
Bug 895402 - Fail to install windows guest with 'Setup was unable to create a new system partition or locate an existing system partition' error

- New:
Bug 894995 - core dump when install windows guest with x-data-plane=on

Best Regards.
sluo

Comment 60 Sibiao Luo 2013-01-23 02:16:38 UTC
*** Bug 901493 has been marked as a duplicate of this bug. ***

Comment 61 errata-xmlrpc 2013-02-21 07:44:44 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2013-0527.html

