Description of problem:
On a RHEL 6.3 host, boot a RHEL 6.2 guest that has 2 disks, then try to hot unplug one of the disks; the operation fails with the following error:

Thread-22831::ERROR::2012-08-17 01:14:45,569::libvirtvm::1547::vm.Vm::(hotunplugDisk) vmId=`14ac86bf-4f74-4a27-a9f7-667df5e27488`::Hotunplug failed
Traceback (most recent call last):
  File "/usr/share/vdsm/libvirtvm.py", line 1545, in hotunplugDisk
    self._dom.detachDevice(driveXml)
  File "/usr/share/vdsm/libvirtvm.py", line 487, in f
    ret = attr(*args, **kwargs)
  File "/usr/lib64/python2.6/site-packages/vdsm/libvirtconnection.py", line 82, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib64/python2.6/site-packages/libvirt.py", line 704, in detachDevice
    if ret == -1: raise libvirtError ('virDomainDetachDevice() failed', dom=self)
libvirtError: internal error unable to execute QEMU command 'device_del': Bus 'pci.0' does not support hotplugging
Thread-22831::DEBUG::2012-08-17 01:14:45,652::BindingXMLRPC::879::vds::(wrapper) return vmHotunplugDisk with {'status': {'message': "internal error unable to execute QEMU command 'device_del': Bus 'pci.0' does not support hotplugging", 'code': 46}}

This looks like a regression: the error occurs with qemu-kvm-rhev-0.12.1.2-2.304.el6.x86_64, but the same operation works fine with qemu-kvm-rhev-0.12.1.2-2.295.el6.x86_64.

Version-Release number of selected component (if applicable):
qemu-kvm-rhev-0.12.1.2-2.304.el6.x86_64

Steps to Reproduce:
1. Create a guest with 2 disks.
2. Install the OS on the host (RHEL).
3. While the guest is running, try to hot unplug a disk.

Actual results:
The hot unplug fails with the error above.

Expected results:
The selected disk is unplugged successfully.

Additional info:
libvirt/qemu/vdsm logs are attached.
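For reference, the traceback shows vdsm ending up in virDomainDetachDevice via the libvirt Python bindings. The following is a minimal sketch (not the vdsm code) of triggering that same call path directly; the guest name, target dev and disk image path are hypothetical placeholders, only the libvirt calls themselves are real:

import libvirt

DISK_XML = """
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='/path/to/first-disk.qcow2'/>
  <target dev='vdb' bus='virtio'/>
</disk>
"""

conn = libvirt.open('qemu:///system')
dom = conn.lookupByName('rhel6.2-guest')   # hypothetical guest name
try:
    # Same entry point as the traceback: detach the live device.
    dom.detachDevice(DISK_XML)
except libvirt.libvirtError as e:
    # On the affected build this surfaces as:
    # "internal error unable to execute QEMU command 'device_del':
    #  Bus 'pci.0' does not support hotplugging"
    print('hot unplug failed: %s' % e)
finally:
    conn.close()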
Note that this worked fine for the same VM when only the qemu version differed (with both an older qemu-kvm-rhev and with qemu-kvm); everything else remained equal.
Hi Liron,

KVM QE cannot reproduce this issue via the qemu-kvm command line directly, even after many repeated hot unplug/hot plug cycles. Would you please answer the following questions?
1. What is the reproduction rate? 100%?
2. Please attach the libvirt/qemu/vdsm logs.
3. Please provide the libvirt and vdsm versions.

Additional info:

1. I also did some general testing via RHEV-M and could not reproduce, with:
#qemu-kvm-rhev-0.12.1.2-2.304.el6.x86_64
#libvirt-0.9.10-21.el6_3.3.x86_64
#vdsm-4.9.6-27.0.el6_3.x86_64

2. qemu-kvm reproduce steps

1) Start a RHEL 6.2 guest:
/usr/libexec/qemu-kvm -M rhel6.2.0 -enable-kvm -m 2048 -smp 4,sockets=2,cores=2,threads=1 -name rhel6.3 -uuid 8842aa9e-2d20-4540-8557-4d04752a28d7 \
 -drive file=rhel6.2.qcow2,if=none,id=drive-virtio-disk0,format=qcow2,serial=f82002eb-520c-469b-90c2-663277e90437,cache=none,werror=stop,rerror=stop,aio=native \
 -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 \
 -spice disable-ticketing,port=5913 -vga qxl \
 -drive file=first-disk.qcow2,if=none,id=drive-virtio-disk1,format=qcow2,cache=none,werror=stop,rerror=stop,aio=native \
 -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk1,id=virtio-disk1 \
 -monitor stdio -device sga \
 -chardev socket,id=serial0,path=/var/test1,server,nowait -device isa-serial,chardev=serial0 \
 -monitor unix:/var/mm,server,nowait -qmp tcp:0:6666,server,nowait

2) Repeatedly hot unplug and hot plug three virtio disks with a script:

i=1
while [ $i -lt 32 ]
do
    echo "__com.redhat_drive_del drive-virtio-disk1" | nc -U /var/mm
    echo "device_del virtio-disk1" | nc -U /var/mm
    echo "__com.redhat_drive_del drive-virtio-disk2" | nc -U /var/mm
    echo "device_del virtio-disk2" | nc -U /var/mm
    echo "__com.redhat_drive_del drive-virtio-disk3" | nc -U /var/mm
    echo "device_del virtio-disk3" | nc -U /var/mm
    #sleep 3
    echo "__com.redhat_drive_add file=first-disk.qcow2,id=drive-virtio-disk1,format=qcow2,cache=none" | nc -U /var/mm
    echo "device_add virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk1,id=virtio-disk1" | nc -U /var/mm
    sleep 2
    echo "__com.redhat_drive_add file=second-disk.qcow2,id=drive-virtio-disk2,format=qcow2,cache=none" | nc -U /var/mm
    echo "device_add virtio-blk-pci,scsi=off,bus=pci.0,addr=0x7,drive=drive-virtio-disk2,id=virtio-disk2" | nc -U /var/mm
    sleep 2
    echo "__com.redhat_drive_add file=third-disk.qcow2,id=drive-virtio-disk3,format=qcow2,cache=none" | nc -U /var/mm
    echo "device_add virtio-blk-pci,scsi=off,bus=pci.0,addr=0x8,drive=drive-virtio-disk3,id=virtio-disk3" | nc -U /var/mm
    i=$(($i+1))
    sleep 1
done

3) Monitor the terminals and the system log:
- QMP monitor: telnet 10.66.104.54 6666, then {"execute": "qmp_capabilities"}
- serial output: nc -U /var/test1
- syslog in the guest: tail -f /var/log/messages
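The steps above drive device_del through the human monitor socket with nc. For anyone who prefers the QMP socket opened by "-qmp tcp:0:6666,server,nowait" in the command line above, here is a minimal, hedged sketch of sending the same device_del and reading the JSON reply; it assumes the device id "virtio-disk1" from the reproduce steps and does not handle asynchronous QMP events:

import json
import socket

def qmp_send(sock, cmd, **arguments):
    # Encode a QMP command as one JSON object per line.
    msg = {'execute': cmd}
    if arguments:
        msg['arguments'] = arguments
    sock.sendall((json.dumps(msg) + '\r\n').encode())

s = socket.create_connection(('127.0.0.1', 6666))
f = s.makefile('r')                    # read replies line by line

json.loads(f.readline())               # QMP greeting banner
qmp_send(s, 'qmp_capabilities')        # enter command mode
print(json.loads(f.readline()))

qmp_send(s, 'device_del', id='virtio-disk1')
# On the affected build the reply is expected to be an error object whose
# description contains "Bus 'pci.0' does not support hotplugging".
print(json.loads(f.readline()))

f.close()
s.close()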
Created attachment 606173 [details] vdsm log
vdsm version: vdsm-4.10.0-0.313.git8bedc7e.el6.x86_64
libvirt version: libvirt-0.9.10-21.el6.x86_64

Further information: I tried this on two hosts. On one of them, after I reinstalled the newer version (qemu-kvm-rhev-0.12.1.2-2.304.el6.x86_64), the unplug worked even though it had failed earlier, so I don't know what causes this error; it seems to be inconsistent behaviour that is worth checking.
On one host it didn't work with either version of qemu-kvm-rhev (the attached log is from that host).
FYI Bug 807023 - libvirt does not check for successful device_del
Please provide /var/log/libvirt/qemu/<guest-name>.log for a failed test.