Bug 995530 - dataplane: refuse to start if device is already in use
Summary: dataplane: refuse to start if device is already in use
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: qemu-kvm
Version: 6.5
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: Virtualization Maintenance
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2013-08-09 15:55 UTC by Stefan Hajnoczi
Modified: 2013-11-21 07:09 UTC (History)
8 users

Fixed In Version: qemu-kvm-0.12.1.2-2.388.el6
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2013-11-21 07:09:01 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHSA-2013:1553 0 normal SHIPPED_LIVE Important: qemu-kvm security, bug fix, and enhancement update 2013-11-20 21:40:29 UTC

Description Stefan Hajnoczi 2013-08-09 15:55:29 UTC
It should not be possible to hotplug virtio-blk-pci with x-data-plane=on if the -drive is currently in use by a block job or block migration.  Failure to check this could result in data corruption since both the block job and dataplane are accessing the image file without knowledge of each other.

Test case:

1. Start QEMU with -drive if=none,id=drive0,cache=none,format=raw,aio=native,file=test.img

2. The guest will be at the BIOS screen because there is no virtio-blk-pci device yet.  Now start a drive-mirror operation on the QEMU monitor: __com.redhat_drive-mirror drive0 destination.img

3. Hotplug the device: device_add virtio-blk-pci,drive=drive0,scsi=off,x-data-plane=on

QEMU should print an error message and refuse to hotplug with x-data-plane=on while the drive-mirror job is running.  This prevents data corruption.

Comment 1 Stefan Hajnoczi 2013-08-09 15:58:20 UTC
Brew:
http://brewweb.devel.redhat.com/brew/taskinfo?taskID=6150474

Patches posted to rhvirt-patches

Comment 8 Sibiao Luo 2013-08-27 05:53:19 UTC
Reproduced this issue on qemu-kvm-rhev-0.12.1.2-2.386.el6.x86_64.

host info:
# uname -r && rpm -q qemu-kvm-rhev
2.6.32-413.el6.x86_64
qemu-kvm-rhev-0.12.1.2-2.386.el6.x86_64
guest info:
win2012 64bit

e.g.: ... -drive file=windows_server_2012_x64.raw,if=none,id=drive-system-disk,format=raw,cache=none,aio=native,werror=stop,rerror=stop,serial="QEMU-DISK1"
(qemu) __com.redhat_drive-mirror drive-system-disk destination.image
Formatting 'destination.image', fmt=raw size=32212254720 
(qemu) device_add virtio-blk-pci,drive=drive-system-disk,scsi=off,x-data-plane=on,id=system-disk
qemu-kvm: /builddir/build/BUILD/qemu-kvm-0.12.1.2/block.c:4419: bdrv_set_in_use: Assertion `bs->in_use != in_use' failed.
Aborted (core dumped)
(gdb) bt
#0  0x00007fa7ca177925 in raise () from /lib64/libc.so.6
#1  0x00007fa7ca179105 in abort () from /lib64/libc.so.6
#2  0x00007fa7ca170a4e in __assert_fail_base () from /lib64/libc.so.6
#3  0x00007fa7ca170b10 in __assert_fail () from /lib64/libc.so.6
#4  0x00007fa7cc87a096 in bdrv_set_in_use (bs=<value optimized out>, in_use=<value optimized out>)
    at /usr/src/debug/qemu-kvm-0.12.1.2/block.c:4419
#5  0x00007fa7cc8686e5 in virtio_blk_data_plane_create (vdev=0x7fa7ce6cbd80, blk=0x7fa7ce6cbc78, 
    dataplane=0x7fa7ce6cbe50) at /usr/src/debug/qemu-kvm-0.12.1.2/hw/dataplane/virtio-blk.c:407
#6  0x00007fa7cc854e11 in virtio_blk_init (dev=0x7fa7ce6cb9f0, blk=0x7fa7ce6cbc78)
    at /usr/src/debug/qemu-kvm-0.12.1.2/hw/virtio-blk.c:675
#7  0x00007fa7cc85982e in virtio_blk_init_pci (pci_dev=0x7fa7ce6cb9f0)
    at /usr/src/debug/qemu-kvm-0.12.1.2/hw/virtio-pci.c:827
#8  0x00007fa7cc850ae6 in pci_qdev_init (qdev=0x7fa7ce6cb9f0, base=0x7fa7ccd3af40)
    at /usr/src/debug/qemu-kvm-0.12.1.2/hw/pci.c:1528
#9  0x00007fa7cc8cf1a8 in qdev_init (dev=0x7fa7ce6cb9f0) at /usr/src/debug/qemu-kvm-0.12.1.2/hw/qdev.c:284
#10 0x00007fa7cc8cf5bf in qdev_device_add (opts=0x7fa7ce644ef0) at /usr/src/debug/qemu-kvm-0.12.1.2/hw/qdev.c:259
#11 0x00007fa7cc8cfbbb in do_device_add (mon=<value optimized out>, qdict=<value optimized out>, 
    ret_data=<value optimized out>) at /usr/src/debug/qemu-kvm-0.12.1.2/hw/qdev.c:875
#12 0x00007fa7cc848850 in monitor_call_handler (mon=0x7fa7cf9c4d80, cmd=0x7fa7ccd33ae8, params=<value optimized out>)
    at /usr/src/debug/qemu-kvm-0.12.1.2/monitor.c:4369
#13 0x00007fa7cc84dcdf in handle_user_command (mon=0x7fa7cf9c4d80, cmdline=<value optimized out>)
    at /usr/src/debug/qemu-kvm-0.12.1.2/monitor.c:4406
#14 0x00007fa7cc84de17 in monitor_command_cb (mon=0x7fa7cf9c4d80, cmdline=<value optimized out>, 
    opaque=<value optimized out>) at /usr/src/debug/qemu-kvm-0.12.1.2/monitor.c:5044
#15 0x00007fa7cc8afcad in readline_handle_byte (rs=0x7fa7cf9fd0d0, ch=<value optimized out>)
    at /usr/src/debug/qemu-kvm-0.12.1.2/readline.c:369
#16 0x00007fa7cc84e085 in monitor_read (opaque=<value optimized out>, buf=0x7fffaab1f1b0 "\r\362\261\252\377\177", 
    size=1) at /usr/src/debug/qemu-kvm-0.12.1.2/monitor.c:5030
#17 0x00007fa7cc8c620c in qemu_chr_be_write (chan=<value optimized out>, cond=<value optimized out>, 
    opaque=0x7fa7ce4eb880) at /usr/src/debug/qemu-kvm-0.12.1.2/qemu-char.c:192
#18 fd_chr_read (chan=<value optimized out>, cond=<value optimized out>, opaque=0x7fa7ce4eb880)
    at /usr/src/debug/qemu-kvm-0.12.1.2/qemu-char.c:791
#19 0x00007fa7cbecbeb2 in g_main_context_dispatch () from /lib64/libglib-2.0.so.0
#20 0x00007fa7cc840d9a in glib_select_poll (timeout=1000) at /usr/src/debug/qemu-kvm-0.12.1.2/vl.c:3993
#21 main_loop_wait (timeout=1000) at /usr/src/debug/qemu-kvm-0.12.1.2/vl.c:4066
#22 0x00007fa7cc86388a in kvm_main_loop () at /usr/src/debug/qemu-kvm-0.12.1.2/qemu-kvm.c:2244
#23 0x00007fa7cc844728 in main_loop (argc=54, argv=<value optimized out>, envp=<value optimized out>)
    at /usr/src/debug/qemu-kvm-0.12.1.2/vl.c:4260
#24 main (argc=54, argv=<value optimized out>, envp=<value optimized out>) at /usr/src/debug/qemu-kvm-0.12.1.2/vl.c:6631
(gdb)

--------------------------------------------------------------------------

Verified this issue on qemu-kvm-rhev-0.12.1.2-2.398.el6.x86_64.

host info:
# uname -r && rpm -q qemu-kvm-rhev
2.6.32-413.el6.x86_64
qemu-kvm-rhev-0.12.1.2-2.398.el6.x86_64
guest info:
win2012 64bit.

# /usr/libexec/qemu-kvm -S -M rhel6.5.0 -cpu SandyBridge -enable-kvm -m 4096 -smp 4,sockets=2,cores=2,threads=1 -no-kvm-pit-reinjection -name sluo -uuid 43425b70-86e5-4664-bf2c-3b76699b8bec -rtc base=localtime,clock=host,driftfix=slew -device virtio-serial-pci,id=virtio-serial0,max_ports=16,vectors=0,bus=pci.0,addr=0x3 -chardev socket,id=channel1,path=/tmp/helloworld1,server,nowait -device virtserialport,chardev=channel1,name=com.redhat.rhevm.vdsm.1,bus=virtio-serial0.0,id=port1,nr=1 -chardev socket,id=channel2,path=/tmp/helloworld2,server,nowait -device virtserialport,chardev=channel2,name=com.redhat.rhevm.vdsm.2,bus=virtio-serial0.0,id=port2,nr=2 -drive file=windows_server_2012_x64.raw,if=none,id=drive-system-disk,format=raw,cache=none,aio=native,werror=stop,rerror=stop,serial="QEMU-DISK1" -device virtio-balloon-pci,id=ballooning,bus=pci.0,addr=0x5 -global PIIX4_PM.disable_s3=0 -global PIIX4_PM.disable_s4=0 -netdev tap,id=hostnet0,vhost=off,script=/etc/qemu-ifup -device virtio-net-pci,netdev=hostnet0,id=virtio-net-pci0,mac=2C:41:38:B6:40:21,bus=pci.0,addr=0x6,bootindex=2 -k en-us -boot menu=on -qmp tcp:0:4444,server,nowait -serial unix:/tmp/ttyS0,server,nowait -vnc :1 -spice port=5931,disable-ticketing -monitor stdio
(qemu) __com.redhat_drive-mirror drive-system-disk destination.image
Formatting 'destination.image', fmt=raw size=32212254720 
(qemu) device_add virtio-blk-pci,drive=drive-system-disk,scsi=off,x-data-plane=on,id=system-disk
cannot start dataplane thread while device is in use
Device 'virtio-blk-pci' could not be initialized
(qemu) 

Based on the above, this issue has been fixed correctly; setting it to VERIFIED status.

Best Regards,
sluo

Comment 9 errata-xmlrpc 2013-11-21 07:09:01 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHSA-2013-1553.html

