Bug 696596 - tight loop of attach-device/detach-device fails because libvirt not using drive_del correctly
Summary: tight loop of attach-device/detach-device fails because libvirt not using drive_del correctly
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: libvirt
Version: 6.1
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: rc
Assignee: Eric Blake
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2011-04-14 11:23 UTC by Vivian Bian
Modified: 2013-01-09 23:47 UTC (History)
14 users

Fixed In Version: libvirt-0.9.4-13.el6
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2011-12-06 11:05:09 UTC
Target Upstream Version:


Attachments (Terms of Use)
reporter's libvirtd.log (1.06 MB, application/x-xz)
2011-09-20 09:33 UTC, Markus Armbruster


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2011:1513 0 normal SHIPPED_LIVE libvirt bug fix and enhancement update 2011-12-06 01:23:30 UTC

Description Vivian Bian 2011-04-14 11:23:15 UTC
Description of problem:
Ran the following loop to attach/detach virtio disks on a Windows XP guest:
attach 26 virtio raw images to the running XP guest
detach the 26 virtio raw images from the guest
try to attach the 26 images again
The second attach pass fails with errors like:

error: Failed to attach device from vmn.xml
error: internal error unable to execute QEMU command '__com.redhat_drive_add': Duplicate ID 'drive-virtio-disk13' for drive

error: Failed to attach device from vmo.xml
error: internal error unable to execute QEMU command 'device_add': Device 'virtio-blk-pci' could not be initialized

Version-Release number of selected component (if applicable):
libvirt-0.8.7-16.el6.x86_64
qemu-kvm-0.12.1.2-2.153.el6.x86_64
kernel-2.6.32-130.el6.x86_64


How reproducible:
always

Steps to Reproduce:
1.for i in {a..z}; do virsh attach-device xp-sp3-new vm$i.xml; done
2.for i in {a..z}; do virsh detach-device xp-sp3-new vm$i.xml; done
3.for i in {a..z}; do virsh attach-device xp-sp3-new vm$i.xml; done
  
Actual results:
error: Failed to attach device from vma.xml
error: internal error unable to execute QEMU command '__com.redhat_drive_add': Duplicate ID 'drive-virtio-disk0' for drive

error: Failed to attach device from vmb.xml
error: internal error unable to execute QEMU command '__com.redhat_drive_add': Duplicate ID 'drive-virtio-disk1' for drive

error: Failed to attach device from vmc.xml
error: internal error unable to execute QEMU command '__com.redhat_drive_add': Duplicate ID 'drive-virtio-disk2' for drive

error: Failed to attach device from vmd.xml
error: internal error unable to execute QEMU command '__com.redhat_drive_add': Duplicate ID 'drive-virtio-disk3' for drive

error: Failed to attach device from vme.xml
error: internal error unable to execute QEMU command '__com.redhat_drive_add': Duplicate ID 'drive-virtio-disk4' for drive

error: Failed to attach device from vmf.xml
error: internal error unable to execute QEMU command '__com.redhat_drive_add': Duplicate ID 'drive-virtio-disk5' for drive

error: Failed to attach device from vmg.xml
error: internal error unable to execute QEMU command '__com.redhat_drive_add': Duplicate ID 'drive-virtio-disk6' for drive

error: Failed to attach device from vmh.xml
error: internal error unable to execute QEMU command '__com.redhat_drive_add': Duplicate ID 'drive-virtio-disk7' for drive

error: Failed to attach device from vmi.xml
error: internal error unable to execute QEMU command '__com.redhat_drive_add': Duplicate ID 'drive-virtio-disk8' for drive

error: Failed to attach device from vmj.xml
error: internal error unable to execute QEMU command '__com.redhat_drive_add': Duplicate ID 'drive-virtio-disk9' for drive

error: Failed to attach device from vmk.xml
error: internal error unable to execute QEMU command '__com.redhat_drive_add': Duplicate ID 'drive-virtio-disk10' for drive

error: Failed to attach device from vml.xml
error: internal error unable to execute QEMU command '__com.redhat_drive_add': Duplicate ID 'drive-virtio-disk11' for drive

error: Failed to attach device from vmm.xml
error: internal error unable to execute QEMU command '__com.redhat_drive_add': Duplicate ID 'drive-virtio-disk12' for drive

error: Failed to attach device from vmn.xml
error: internal error unable to execute QEMU command '__com.redhat_drive_add': Duplicate ID 'drive-virtio-disk13' for drive

error: Failed to attach device from vmo.xml
error: internal error unable to execute QEMU command 'device_add': Device 'virtio-blk-pci' could not be initialized

error: Failed to attach device from vmp.xml
error: internal error unable to execute QEMU command 'device_add': Device 'virtio-blk-pci' could not be initialized

Device attached successfully

Device attached successfully

Device attached successfully

Device attached successfully

Device attached successfully

Device attached successfully

Device attached successfully

Device attached successfully

Device attached successfully

Device attached successfully


Expected results:


Additional info:

    <disk type='file' device='disk'>
      <source file='/var/lib/libvirt/images/vma.img'/>
      <target dev='vda' bus='virtio'/>
    </disk>
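The 26 per-disk XML files used by the reproducer loops can be generated with a script like the following (a sketch only; the image paths and the vda..vdz target assignment are assumptions extrapolated from the vma.xml snippet above, not taken from the log):

```shell
#!/bin/sh
# Generate vma.xml .. vmz.xml, one virtio disk fragment per letter.
# Output directory defaults to the current directory.
outdir=${1:-.}
for i in a b c d e f g h i j k l m n o p q r s t u v w x y z; do
    cat > "$outdir/vm$i.xml" <<EOF
    <disk type='file' device='disk'>
      <source file='/var/lib/libvirt/images/vm$i.img'/>
      <target dev='vd$i' bus='virtio'/>
    </disk>
EOF
done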

Comment 1 Vivian Bian 2011-04-14 11:24:11 UTC
The libvirtd.log is available from ftp://10.66.4.220/pub, since the log is 36M.

Comment 2 Vivian Bian 2011-04-14 13:16:17 UTC
The libvirtd.log is available from ftp://10.66.4.220/pub, since the log is 36M.

Comment 3 Vivian Bian 2011-04-15 01:48:43 UTC
> 
> Steps to Reproduce:
> 1.for i in {a..z}; do virsh attach-device xp-sp3-new vm$i.xml; done
> 2.for i in {a..z}; do virsh attach-device xp-sp3-new vm$i.xml; done
 
  The second step should be 
  for i in {a..z}; do virsh detach-device xp-sp3-new vm$i.xml; done
  
> 3.for i in {a..z}; do virsh attach-device xp-sp3-new vm$i.xml; done

Comment 4 Dave Allan 2011-06-17 18:23:03 UTC
I believe this error is being returned by qemu, so reassigning.  Please return it to libvirt if that's incorrect.

Comment 5 Dor Laor 2011-06-19 08:01:45 UTC
Won't we wait for the event before the hot unplug now?

Comment 6 Markus Armbruster 2011-09-20 09:16:22 UTC
libvirtd.log from comment#1 has the following entries:

19:06:46.341: 27873: debug : qemuMonitorJSONCommandWithFd:218 : Send command '{"execute":"__com.redhat_drive_add","arguments":{"file":"/var/lib/libvirt/images/vmn.img","id":"drive-virtio-disk13","format":"raw"}}' for write with FD -1
19:06:46.344: 27873: debug : qemuMonitorJSONCommandWithFd:218 : Send command '{"execute":"device_add","arguments":{"driver":"virtio-blk-pci","bus":"pci.0","addr":"0x13","drive":"drive-virtio-disk13","id":"virtio-disk13"}}' for write with FD -1
19:06:59.894: 27873: debug : qemuMonitorJSONCommandWithFd:218 : Send command '{"execute":"device_del","arguments":{"id":"virtio-disk13"}}' for write with FD -1
19:06:59.987: 27873: debug : qemuMonitorJSONCommandWithFd:218 : Send command '{"execute":"drive_del","arguments":{"id":"drive-virtio-disk13"}}' for write with FD -1
19:06:59.989: 27868: debug : qemuMonitorJSONIOProcessLine:116 : Line [{"error": {"class": "CommandNotFound", "desc": "The command drive_del has not been found", "data": {"name": "drive_del"}}}]
19:07:06.761: 27871: debug : qemuMonitorJSONCommandWithFd:218 : Send command '{"execute":"__com.redhat_drive_add","arguments":{"file":"/var/lib/libvirt/images/vmn.img","id":"drive-virtio-disk13","format":"raw"}}' for write with FD -1
19:07:06.763: 27868: debug : qemuMonitorJSONIOProcessLine:116 : Line [{"error": {"class": "DuplicateId", "desc": "Duplicate ID 'drive-virtio-disk13' for drive", "data": {"object": "drive", "id": "drive-virtio-disk13"}}}]

Translated into readable form:

--> {"execute":"__com.redhat_drive_add",
     "arguments":{"file":"/var/lib/libvirt/images/vmn.img",
                  "id":"drive-virtio-disk13","format":"raw"}}
--> {"execute":"device_add",
     "arguments":{"driver":"virtio-blk-pci",
                  "bus":"pci.0",
                  "addr":"0x13",
                  "drive":"drive-virtio-disk13",
                  "id":"virtio-disk13"}}
--> {"execute":"device_del",
               "arguments":{"id":"virtio-disk13"}}
--> {"execute":"drive_del",
               "arguments":{"id":"drive-virtio-disk13"}}
<-- {"error": {"class": "CommandNotFound",
               "desc": "The command drive_del has not been found",
               "data": {"name": "drive_del"}}}
--> {"execute":"__com.redhat_drive_add",
     "arguments":{"file":"/var/lib/libvirt/images/vmn.img",
                  "id":"drive-virtio-disk13","format":"raw"}}
<-- {"error": {"class": "DuplicateId",
               "desc": "Duplicate ID 'drive-virtio-disk13' for drive",
               "data": {"object": "drive",
                        "id": "drive-virtio-disk13"}}}

libvirt attempts to delete drive-virtio-disk13.  But it gets the command wrong, so that doesn't work.

"Fortunately", the drive gets automatically deleted when unplug of virtio-disk13 completes.  Happens at an unpredictable time after device_del.

Thus, we have a race condition: if unplug completes quickly enough, the drive is gone by the time we add another one with the same ID, and all is well.  Else, the old drive still exists, and we get a DuplicateId error.  This is what the log shows.
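The race can be illustrated with a toy model (a sketch only; DriveTable is a stand-in for QEMU's internal drive table, not real QEMU or libvirt code):

```python
import threading
import time

class DriveTable:
    """Toy model of QEMU's drive table, to illustrate the race in comment 6."""
    def __init__(self):
        self._drives = set()
        self._lock = threading.Lock()

    def drive_add(self, drive_id):
        with self._lock:
            if drive_id in self._drives:
                raise RuntimeError(f"Duplicate ID '{drive_id}' for drive")
            self._drives.add(drive_id)

    def device_del(self, drive_id, unplug_delay):
        # The guest acknowledges the unplug asynchronously; the drive is
        # only dropped after an unpredictable delay.
        def finish():
            time.sleep(unplug_delay)
            with self._lock:
                self._drives.discard(drive_id)
        threading.Thread(target=finish, daemon=True).start()

qemu = DriveTable()
qemu.drive_add("drive-virtio-disk13")
qemu.device_del("drive-virtio-disk13", unplug_delay=0.2)
try:
    # Immediate re-attach races the still-pending unplug.
    qemu.drive_add("drive-virtio-disk13")
    outcome = "attach succeeded"
except RuntimeError as e:
    outcome = f"attach failed: {e}"
print(outcome)
```

With a fast unplug (small delay, or a sleep before re-attaching) the second drive_add succeeds; re-attaching immediately, as the tight loop does, hits the DuplicateId error.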

I suspect libvirt needs to use __com.redhat_drive_del instead.  See qemu-kvm-rhel6 commit 95687dc6 for why we had to name it that way.
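For reference, the corrected detach sequence would presumably look like this (a sketch only; it assumes __com.redhat_drive_del takes the same "id" argument as upstream drive_del):

```
--> {"execute":"device_del",
     "arguments":{"id":"virtio-disk13"}}
--> {"execute":"__com.redhat_drive_del",
     "arguments":{"id":"drive-virtio-disk13"}}
<-- {"return": {}}
```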

Reassigning back to libvirt.

Comment 7 Markus Armbruster 2011-09-20 09:33:49 UTC
Created attachment 523993 [details]
reporter's libvirtd.log

Same log as in comment#1, but attached for more convenient access.

Comment 8 Eric Blake 2011-09-23 22:03:28 UTC
Solution is RHEL-specific (no upstream counterpart).

In POST:
http://post-office.corp.redhat.com/archives/rhvirt-patches/2011-September/msg00801.html

Comment 11 yanbing du 2011-09-26 09:04:56 UTC
Tested with libvirt-0.9.4-13.el6.x86_64: looped attach and detach of 26 disks more than 3 times, all successful.
Moving to VERIFIED.

Comment 12 errata-xmlrpc 2011-12-06 11:05:09 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2011-1513.html

