Bug 595974 - users should get an error message when they try to use one drive for two or more devices via the device_add command.
Summary: users should get an error message when they try to use one drive for two or more devices via the device_add command
Keywords:
Status: CLOSED NEXTRELEASE
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: qemu-kvm
Version: 6.0
Hardware: All
OS: Linux
Priority: low
Severity: medium
Target Milestone: beta
Target Release: 6.1
Assignee: Markus Armbruster
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Depends On:
Blocks: 580953
 
Reported: 2010-05-26 03:04 UTC by juzhang
Modified: 2013-01-09 22:37 UTC
CC List: 6 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2011-02-16 13:29:17 UTC
Target Upstream Version:
Embargoed:



Description juzhang 2010-05-26 03:04:29 UTC
Description of problem:
Boot a guest whose drive has id=test, then add a virtio-blk-pci device with drive=test via "(qemu) device_add virtio-blk-pci,drive=test". The device is added successfully; however, an error message should be reported instead of allowing one drive to be used by two devices.

Version-Release number of selected component (if applicable):
#uname -r
2.6.32-25.el6.x86_64
#rpm -q qemu-kvm
qemu-kvm-0.12.1.2-2.62.el6.x86_64

How reproducible:

Steps to Reproduce:
1. Boot the guest with a drive whose id=test:
/usr/libexec/qemu-kvm  -no-hpet -usbdevice tablet -rtc-td-hack -m 2G -smp 2 -drive file=/root/zhangjunyi/RHEL-Server-6.0-64-virtio.qcow2,if=virtio,boot=on,id=test,cache=none,werror=stop,rerror=stop -net nic,vlan=0,macaddr=22:11:22:45:66:22,model=virtio -net tap,vlan=0,script=/etc/qemu-ifup -uuid `uuidgen` -cpu qemu64,+sse2 -balloon none -boot c -monitor stdio -vnc :10
2. Add a virtio-blk-pci device with drive=test:
(qemu) device_add virtio-blk-pci,drive=test
3. Add another virtio-blk-pci device with the same drive=test:
(qemu) device_add virtio-blk-pci,drive=test
 
Actual results:
After step 3, fdisk -l in the guest shows /dev/vda, /dev/vdb, and /dev/vdc.
/dev/vdb and /dev/vdc are the same disk as /dev/vda; in fact there is only one image, the one backing /dev/vda.

Expected results:
Users should get an error message when they try to use one drive for two or more devices via the device_add command: one drive, one device.

Additional info:
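For illustration only, here is a minimal standalone C sketch of the kind of guard this report asks for (this is not qemu-kvm code; drive_t, device_t and drive_attach are hypothetical names): a second attempt to attach a device to a drive that is already in use should fail with an error instead of silently succeeding.

/* Hypothetical standalone model, not qemu-kvm source. */
#include <stdio.h>
#include <stdbool.h>

typedef struct {
    const char *id;
    bool in_use;              /* set once a device has claimed this drive */
} drive_t;

typedef struct {
    const char *id;
    drive_t *drive;
} device_t;

/* Returns 0 on success, -1 with an error message if the drive is taken. */
static int drive_attach(device_t *dev, drive_t *drv)
{
    if (drv->in_use) {
        fprintf(stderr, "drive '%s' is already in use, cannot attach device '%s'\n",
                drv->id, dev->id);
        return -1;
    }
    drv->in_use = true;
    dev->drive = drv;
    return 0;
}

int main(void)
{
    drive_t test = { "test", false };
    device_t first = { "first", NULL };
    device_t second = { "second", NULL };

    drive_attach(&first, &test);   /* first device_add succeeds */
    drive_attach(&second, &test);  /* second one should be rejected with an error */
    return 0;
}

With a check like this, step 3 above would report an error instead of exposing the same image to the guest twice.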

Comment 2 RHEL Program Management 2010-05-28 10:36:03 UTC
This request was evaluated by Red Hat Product Management for inclusion in a Red
Hat Enterprise Linux major release.  Product Management has requested further
review of this request by Red Hat Engineering, for potential inclusion in a Red
Hat Enterprise Linux Major release.  This request is not yet committed for
inclusion.

Comment 3 Gerd Hoffmann 2010-05-31 10:13:08 UTC
Markus, wanna take this?  I think your blockdev patches cover this anyway ...

Comment 4 Markus Armbruster 2010-05-31 11:16:40 UTC
They do.  Taking the bug.

Comment 6 juzhang 2010-07-12 11:15:00 UTC
Tested on qemu-kvm-0.12.1.2-2.91.el6.x86_64.

1. Boot the guest:
/usr/libexec/qemu-kvm -m 2G -smp 2 -drive file=/root/zhangjunyi/winxp_32.raw,if=none,id=test,boot=on,cache=none,format=raw -device ide-drive,drive=test -cpu qemu64,+sse2,+x2apic -monitor stdio -boot order=cdn,menu=on -netdev tap,id=hostnet0,vhost=on -device rtl8139,netdev=hostnet0,id=net0,mac=22:11:22:45:66:97 -vnc :10 -qmp tcp:0:4445,server,nowait

2. Telnet to the QMP server and issue qmp_capabilities:
#{"execute":"qmp_capabilities"}

3. Hot-add a drive with id=test1:
#{"execute":"__com.redhat_drive_add", "arguments": {"file":"/root/zhangjunyi/test1.qcow2","format":"qcow2","id":"test1"}}

4. Hot-add a device using drive test1:
#{"execute":"device_add","arguments":{"driver":"virtio-blk-pci","drive":"test1","id":"zhang"}}

5. Hot-add another device using the same drive test1:
{"execute":"device_add","arguments":{"driver":"virtio-blk-pci","drive":"test1","id":"zhang2"}}

6. Hot-remove device zhang:
{ "execute": "device_del", "arguments": { "id": "zhang"}}

7. Hot-remove device zhang2.

Results:
After step 7, qemu-kvm aborted with a segmentation fault:

(gdb) bt
#0  bdrv_delete (bs=0x0) at block.c:652
#1  0x000000000040ec75 in drive_uninit (dinfo=0x1280730) at /usr/src/debug/qemu-kvm-0.12.1.2/vl.c:2127
#2  0x000000000042c0f8 in virtio_blk_exit_pci (pci_dev=0x1b58300) at /usr/src/debug/qemu-kvm-0.12.1.2/hw/virtio-pci.c:602
#3  0x00000000004211d0 in pci_unregister_device (dev=0x1b58300) at /usr/src/debug/qemu-kvm-0.12.1.2/hw/pci.c:729
#4  0x000000000050ea51 in qdev_free (dev=0x1b58300) at /usr/src/debug/qemu-kvm-0.12.1.2/hw/qdev.c:334
#5  0x000000000048fb01 in pciej_write (opaque=<value optimized out>, addr=<value optimized out>, val=<value optimized out>)
    at /usr/src/debug/qemu-kvm-0.12.1.2/hw/acpi.c:752
#6  0x00000000004e1afc in ioport_write (addr=<value optimized out>, val=<value optimized out>) at ioport.c:80
#7  cpu_outl (addr=<value optimized out>, val=<value optimized out>) at ioport.c:210
#8  0x0000000000439288 in kvm_handle_io (env=0x1091f60) at /usr/src/debug/qemu-kvm-0.12.1.2/kvm-all.c:541
#9  kvm_run (env=0x1091f60) at /usr/src/debug/qemu-kvm-0.12.1.2/qemu-kvm.c:975
#10 0x00000000004393e1 in kvm_cpu_exec (env=<value optimized out>) at /usr/src/debug/qemu-kvm-0.12.1.2/qemu-kvm.c:1658
#11 0x000000000043a551 in kvm_main_loop_cpu (_env=0x1091f60) at /usr/src/debug/qemu-kvm-0.12.1.2/qemu-kvm.c:1900
#12 ap_main_loop (_env=0x1091f60) at /usr/src/debug/qemu-kvm-0.12.1.2/qemu-kvm.c:1950
#13 0x00000033d26077e1 in start_thread () from /lib64/libpthread.so.0
#14 0x00000033d1ee151d in clone () from /lib64/libc.so.6



Could you please tell me whether this is the same issue or a consequence of this issue? If it is not the same issue, please let me know and I will open a new bug.

Comment 7 Markus Armbruster 2010-11-16 18:26:34 UTC
I'm pretty sure it is.

If a device uses a drive, device_del deletes the drive along with the device.

In your test case, devices "zhang" and "zhang2" both use drive "test1".  That's bad, and this bz is about preventing it.

Step 6 deletes "zhang" and "test1".  This leaves a dangling drive reference in device "zhang2".

Step 7 deletes "zhang2" and (again) "test1".  The latter crashes.
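For illustration only, here is a minimal standalone C sketch of the failure mode described above (this is not qemu-kvm code; drive_t, device_t and device_del are hypothetical names). Two devices share one drive, deleting the first device also frees the drive, and deleting the second device then touches the already-freed drive; running it deliberately triggers the double deletion that corresponds to "test1" being deleted again in step 7.

/* Hypothetical standalone model of the dangling drive reference, not qemu-kvm source. */
#include <stdio.h>
#include <stdlib.h>

typedef struct { const char *id; } drive_t;
typedef struct { const char *id; drive_t *drive; } device_t;

/* Models the behaviour described above: deleting a device also deletes its drive. */
static void device_del(device_t *dev)
{
    printf("deleting device '%s' and its drive '%s'\n", dev->id, dev->drive->id);
    free(dev->drive);
    free(dev);
}

int main(void)
{
    drive_t *test1 = malloc(sizeof(*test1));
    test1->id = "test1";

    device_t *zhang = malloc(sizeof(*zhang));
    device_t *zhang2 = malloc(sizeof(*zhang2));
    zhang->id = "zhang";   zhang->drive = test1;   /* both devices share one drive */
    zhang2->id = "zhang2"; zhang2->drive = test1;  /* zhang2 now holds a second reference */

    device_del(zhang);   /* step 6: frees test1; zhang2->drive is left dangling */
    device_del(zhang2);  /* step 7: dereferences and frees test1 again -> crash/undefined behaviour */
    return 0;
}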

Comment 8 juzhang 2010-11-17 02:00:21 UTC
(In reply to comment #7)
> I'm pretty sure it is.
> 
> If a device uses a drive, device_del deletes the drive along with the device.
> 
> In your test case, devices "zhang" and "zhang2" both use drive "test1".  That's
> bad, and this bz is about preventing it.
> 
> Step 6 deletes "zhang" and "test1".  This leaves a dangling drive reference in
> device "zhang2".
> 
> Step 7 deletes "zhang2" and (again) "test1".  The latter crashes.

Got it. Thanks for your confirmation.

Comment 10 Markus Armbruster 2011-02-15 13:02:53 UTC
I believe this has been fixed by the patch for bug 654682, which went into qemu-kvm-0.12.1.2-2.142.el6.  Could you please verify?  Thanks!

Comment 11 juzhang 2011-02-16 06:37:47 UTC
(In reply to comment #10)
> I believe this has been fixed by the patch for bug 654682, which went into
> qemu-kvm-0.12.1.2-2.142.el6.  Could you please verify?  Thanks!

Tested on qemu-kvm-0.12.1.2-2.144.el6 using the same steps as in comment 6.
After step 5:
{"error": {"class": "PropertyValueInUse", "desc": "Property 'virtio-blk-pci.drive' can't take value 'test1', it's in use", "data": {"device": "virtio-blk-pci", "property": "drive", "value": "test1"}}}

Hi Markus,

After step 5, qemu-kvm prevents another device from attaching to this drive, so I think this bug has been fixed. What's your opinion?


Best Regards,
Junyi

Comment 12 Markus Armbruster 2011-02-16 07:59:19 UTC
Your test case looks good, and the error you get in step 5 is what I expect.
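For reference, here is a minimal standalone C sketch of the behaviour verified above (this is not the actual qemu-kvm patch; blk_attach, blk_detach and the struct names are hypothetical). The drive remembers which device is attached to it, a second attach is rejected with an "it's in use" error similar to the one quoted in comment 11, and detaching on device removal releases the drive, so the double deletion from comment 6 can no longer happen.

/* Hypothetical standalone model, not the qemu-kvm fix itself. */
#include <stdio.h>

typedef struct device device_t;

typedef struct {
    const char *id;
    device_t *attached;       /* device currently using this drive, if any */
} drive_t;

struct device {
    const char *id;
    drive_t *drive;
};

/* Reject a second attach, roughly mirroring the PropertyValueInUse error. */
static int blk_attach(device_t *dev, drive_t *drv)
{
    if (drv->attached) {
        fprintf(stderr, "Property 'drive' can't take value '%s', it's in use by '%s'\n",
                drv->id, drv->attached->id);
        return -1;
    }
    drv->attached = dev;
    dev->drive = drv;
    return 0;
}

/* Release the drive when its device is removed, so it can be reused safely. */
static void blk_detach(device_t *dev)
{
    if (dev->drive) {
        dev->drive->attached = NULL;
        dev->drive = NULL;
    }
}

int main(void)
{
    drive_t test1 = { "test1", NULL };
    device_t zhang = { "zhang", NULL };
    device_t zhang2 = { "zhang2", NULL };

    blk_attach(&zhang, &test1);    /* step 4: succeeds */
    blk_attach(&zhang2, &test1);   /* step 5: rejected, drive is in use */
    blk_detach(&zhang);            /* device_del releases the drive */
    blk_attach(&zhang2, &test1);   /* now succeeds */
    return 0;
}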

