Bug 1531155 - Hot plug fails - internal error: unable to execute QEMU command '__com.redhat_drive_add': Device 'drive-virtio-disk2' could not be initialized
Summary: Hot plug fails - internal error: unable to execute QEMU command '__com.redhat...
Keywords:
Status: CLOSED DUPLICATE of bug 1469235
Alias: None
Product: ovirt-engine
Classification: oVirt
Component: BLL.Storage
Version: 4.2.0.2
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ovirt-4.2.1
Target Release: ---
Assignee: Daniel Erez
QA Contact: Raz Tamir
URL:
Whiteboard:
Depends On:
Blocks:
TreeView+ depends on / blocked
 
Reported: 2018-01-04 16:59 UTC by Avihai
Modified: 2018-01-07 13:55 UTC
CC: 4 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-01-07 13:55:07 UTC
oVirt Team: Storage
Embargoed:
rule-engine: ovirt-4.2+
rule-engine: blocker+


Attachments (Terms of Use)
engine, vdsm, libvirt logs (3.48 MB, application/x-gzip)
2018-01-04 16:59 UTC, Avihai

Description Avihai 2018-01-04 16:59:08 UTC
Created attachment 1377023 [details]
engine, vdsm, libvirt logs

Description of problem:
Adding an IDE disk to a running VM fails. The hot plug reports: internal error: unable to execute QEMU command '__com.redhat_drive_add': Device 'drive-virtio-disk2' could not be initialized

From this point onwards, CPU usage on the host running this VM stays at 98%.

VDSM log :
2018-01-04 18:23:18,481+0200 INFO  (jsonrpc/1) [virt.vm] (vmId='3139a307-f95e-4979-b70a-2be214fb8efc') Hotplug disk xml: <?xml version='1.0' encoding='UTF-8'?>
<disk address="" device="disk" snapshot="no" type="block">
    <source dev="/rhev/data-center/mnt/blockSD/dfa58e5a-bbae-4adb-9823-6b4b8ddad7d2/images/39fea2f0-08ed-462d-aac2-a41d6a28e971/c398a006-dcf6-4f81-9d30-2fb507697fa1" />
    <target bus="virtio" dev="vdc" />
    <serial>39fea2f0-08ed-462d-aac2-a41d6a28e971</serial>
    <driver cache="none" error_policy="stop" io="native" name="qemu" type="qcow2" />
</disk>
 (vm:3638)
2018-01-04 18:23:18,626+0200 ERROR (jsonrpc/1) [virt.vm] (vmId='3139a307-f95e-4979-b70a-2be214fb8efc') Hotplug failed (vm:3646)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 3644, in hotplugDisk
    self._dom.attachDevice(driveXml)
  File "/usr/lib/python2.7/site-packages/vdsm/virt/virdomain.py", line 98, in f
    ret = attr(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py", line 126, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 512, in wrapper
    return func(inst, *args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 570, in attachDevice
    if ret == -1: raise libvirtError ('virDomainAttachDevice() failed', dom=self)
libvirtError: internal error: unable to execute QEMU command '__com.redhat_drive_add': Device 'drive-virtio-disk2' could not be initialized
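For illustration, the call chain in the traceback above boils down to a single attachDevice() call that raises libvirtError, which vdsm catches and logs as "Hotplug failed". A minimal sketch of that error path, using a hypothetical FakeDomain stand-in (an assumption for this example, since the real call requires a live libvirt domain and a running qemu):

```python
# FakeDomain is a hypothetical stand-in for the libvirt domain object;
# its attachDevice() raises just like virDomainAttachDevice() did here.
class FakeLibvirtError(Exception):
    pass

class FakeDomain:
    def attachDevice(self, drive_xml):
        raise FakeLibvirtError(
            "internal error: unable to execute QEMU command "
            "'__com.redhat_drive_add': Device 'drive-virtio-disk2' "
            "could not be initialized")

def hotplug_disk(dom, drive_xml):
    """Attach a disk; return (ok, error_message) instead of raising,
    mirroring how vdsm turns the exception into a log entry."""
    try:
        dom.attachDevice(drive_xml)
    except FakeLibvirtError as e:
        return False, str(e)
    return True, None

ok, err = hotplug_disk(FakeDomain(), "<disk .../>")
```

The QEMU error text propagates unchanged through libvirt, vdsm, and the engine, which is why the same message appears in all three logs below.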

Libvirtd.log:
2018-01-04 16:23:18.582+0000: 21456: info : qemuMonitorSend:1064 : QEMU_MONITOR_SEND_MSG: mon=0x7ff9bc016070 msg={"execute":"__com.redhat_drive_add","arguments":{"file":"/rhev/data-center/mnt/blockSD/dfa58e5a-bb
ae-4adb-9823-6b4b8ddad7d2/images/39fea2f0-08ed-462d-aac2-a41d6a28e971/c398a006-dcf6-4f81-9d30-2fb507697fa1","format":"qcow2","id":"drive-virtio-disk2","serial":"39fea2f0-08ed-462d-aac2-a41d6a28e971","cache":"non
e","werror":"stop","rerror":"stop","aio":"native"},"id":"libvirt-1412"}
 fd=-1
2018-01-04 16:23:18.582+0000: 21445: info : virEventPollDispatchHandles:506 : EVENT_POLL_DISPATCH_HANDLE: watch=1 events=1
2018-01-04 16:23:18.582+0000: 21445: info : virEventPollRunOnce:640 : EVENT_POLL_RUN: nhandles=15 timeout=-1
2018-01-04 16:23:18.582+0000: 21445: info : virEventPollDispatchHandles:506 : EVENT_POLL_DISPATCH_HANDLE: watch=24 events=2
2018-01-04 16:23:18.582+0000: 21445: info : virObjectRef:388 : OBJECT_REF: obj=0x7ff9bc016070
2018-01-04 16:23:18.582+0000: 21445: info : qemuMonitorIOWrite:544 : QEMU_MONITOR_IO_WRITE: mon=0x7ff9bc016070 buf={"execute":"__com.redhat_drive_add","arguments":{"file":"/rhev/data-center/mnt/blockSD/dfa58e5a-
bbae-4adb-9823-6b4b8ddad7d2/images/39fea2f0-08ed-462d-aac2-a41d6a28e971/c398a006-dcf6-4f81-9d30-2fb507697fa1","format":"qcow2","id":"drive-virtio-disk2","serial":"39fea2f0-08ed-462d-aac2-a41d6a28e971","cache":"n
one","werror":"stop","rerror":"stop","aio":"native"},"id":"libvirt-1412"}
 len=382 ret=382 errno=0
2018-01-04 16:23:18.582+0000: 21445: info : virEventPollUpdateHandle:152 : EVENT_POLL_UPDATE_HANDLE: watch=24 events=13
2018-01-04 16:23:18.582+0000: 21445: info : virObjectUnref:350 : OBJECT_UNREF: obj=0x7ff9bc016070
2018-01-04 16:23:18.582+0000: 21445: info : virEventPollRunOnce:640 : EVENT_POLL_RUN: nhandles=15 timeout=-1
2018-01-04 16:23:18.605+0000: 21445: info : virEventPollDispatchHandles:506 : EVENT_POLL_DISPATCH_HANDLE: watch=24 events=1
2018-01-04 16:23:18.605+0000: 21445: info : virObjectRef:388 : OBJECT_REF: obj=0x7ff9bc016070
2018-01-04 16:23:18.605+0000: 21445: info : qemuMonitorIOProcess:439 : QEMU_MONITOR_IO_PROCESS: mon=0x7ff9bc016070 buf={"id": "libvirt-1412", "error": {"class": "GenericError", "desc": "Device 'drive-virtio-disk2' could not be initialized"}}
 len=124
2018-01-04 16:23:18.606+0000: 21445: info : qemuMonitorJSONIOProcessLine:213 : QEMU_MONITOR_RECV_REPLY: mon=0x7ff9bc016070 reply={"id": "libvirt-1412", "error": {"class": "GenericError", "desc": "Device 'drive-virtio-disk2' could not be initialized"}}
2018-01-04 16:23:18.606+0000: 21445: info : virEventPollUpdateHandle:152 : EVENT_POLL_UPDATE_HANDLE: watch=24 events=13
2018-01-04 16:23:18.606+0000: 21445: info : virObjectUnref:350 : OBJECT_UNREF: obj=0x7ff9bc016070
2018-01-04 16:23:18.606+0000: 21445: info : virEventPollRunOnce:640 : EVENT_POLL_RUN: nhandles=15 timeout=-1
2018-01-04 16:23:18.606+0000: 21456: info : virEventPollUpdateHandle:152 : EVENT_POLL_UPDATE_HANDLE: watch=24 events=13
2018-01-04 16:23:18.606+0000: 21456: error : qemuMonitorJSONCheckError:389 : internal error: unable to execute QEMU command '__com.redhat_drive_add': Device 'drive-virtio-disk2' could not be initialized
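The QMP reply captured in the trace above is plain JSON. A minimal sketch of the check libvirt performs on such a reply (qemuMonitorJSONCheckError looks for an "error" object rather than a "return" object) can be reproduced with the standard library:

```python
import json

# QMP error reply, copied verbatim from the libvirtd log above.
reply = ('{"id": "libvirt-1412", "error": {"class": "GenericError", '
         '"desc": "Device \'drive-virtio-disk2\' could not be initialized"}}')

msg = json.loads(reply)
err_class = err_desc = None
# A failed QMP command carries an "error" object instead of "return".
if "error" in msg:
    err_class = msg["error"]["class"]
    err_desc = msg["error"]["desc"]
```

The "desc" field is exactly the text libvirt embeds in the "internal error: unable to execute QEMU command ..." message logged at 16:23:18.606.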


Engine:
2018-01-04 18:23:19,240+02 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand] (EE-ManagedThreadFactory-engine-Thread-33440) [b04e05a0-092e-4add-8312-b8f27e9f3f8a] Failed in 'HotPlugDiskVDS' 
method
2018-01-04 18:23:19,281+02 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-33440) [b04e05a0-092e-4add-8312-b8f27e9f3f8a] EVENT_ID: VDS_BROKER_C
OMMAND_FAILURE(10,802), VDSM host_mixed_1 command HotPlugDiskVDS failed: internal error: unable to execute QEMU command '__com.redhat_drive_add': Device 'drive-virtio-disk2' could not be initialized
2018-01-04 18:23:19,282+02 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand] (EE-ManagedThreadFactory-engine-Thread-33440) [b04e05a0-092e-4add-8312-b8f27e9f3f8a] Command 'org.ovirt.engine.c
ore.vdsbroker.vdsbroker.HotPlugDiskVDSCommand' return value 'StatusOnlyReturn [status=Status [code=45, message=internal error: unable to execute QEMU command '__com.redhat_drive_add': Device 'drive-virtio-disk2'
 could not be initialized]]'
2018-01-04 18:23:19,282+02 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand] (EE-ManagedThreadFactory-engine-Thread-33440) [b04e05a0-092e-4add-8312-b8f27e9f3f8a] HostName = host_mixed_1
2018-01-04 18:23:19,282+02 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand] (EE-ManagedThreadFactory-engine-Thread-33440) [b04e05a0-092e-4add-8312-b8f27e9f3f8a] Command 'HotPlugDiskVDSComm
and(HostName = host_mixed_1, HotPlugDiskVDSParameters:{hostId='120a4b26-b8e5-412c-8f72-086259005be6', vmId='3139a307-f95e-4979-b70a-2be214fb8efc', diskId='39fea2f0-08ed-462d-aac2-a41d6a28e971', addressMap='null'
})' execution failed: VDSGenericException: VDSErrorException: Failed to HotPlugDiskVDS, error = internal error: unable to execute QEMU command '__com.redhat_drive_add': Device 'drive-virtio-disk2' could not be i
nitialized, code = 45
2018-01-04 18:23:19,282+02 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand] (EE-ManagedThreadFactory-engine-Thread-33440) [b04e05a0-092e-4add-8312-b8f27e9f3f8a] FINISH, HotPlugDiskVDSComma
nd, log id: 5323f2ea
2018-01-04 18:23:19,283+02 ERROR [org.ovirt.engine.core.bll.storage.disk.HotPlugDiskToVmCommand] (EE-ManagedThreadFactory-engine-Thread-33440) [b04e05a0-092e-4add-8312-b8f27e9f3f8a] Command 'org.ovirt.engine.cor
e.bll.storage.disk.HotPlugDiskToVmCommand' failed: EngineException: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException: VDSGenericException: VDSErrorException: Failed to HotPlugDiskVDS, error = internal
 error: unable to execute QEMU command '__com.redhat_drive_add': Device 'drive-virtio-disk2' could not be initialized, code = 45 (Failed with error FailedToPlugDisk and code 45)
2018-01-04 18:23:19,334+02 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-33440) [b04e05a0-092e-4add-8312-b8f27e9f3f8a] EVENT_ID: USER_FAILED_
HOTPLUG_DISK(2,001), Failed to plug disk 100G_hot_plug to VM golden_env_mixed_virtio_1_0 (User: admin@internal-authz).
2018-01-04 18:23:19,335+02 INFO  [org.ovirt.engine.core.bll.storage.disk.HotPlugDiskToVmCommand] (EE-ManagedThreadFactory-engine-Thread-33440) [b04e05a0-092e-4add-8312-b8f27e9f3f8a] Lock freed to object 'EngineL
ock:{exclusiveLocks='[39fea2f0-08ed-462d-aac2-a41d6a28e971=DISK]', sharedLocks='[3139a307-f95e-4979-b70a-2be214fb8efc=VM]'}'
2018-01-04 18:23:19,335+02 INFO  [org.ovirt.engine.core.bll.storage.disk.HotPlugDiskToVmCommand] (EE-ManagedThreadFactory-engine-Thread-33440) [b04e05a0-092e-4add-8312-b8f27e9f3f8a] Running command: HotPlugDiskT
oVmCommand internal: false. Entities affected :  ID: 3139a307-f95e-4979-b70a-2be214fb8efc Type: VMAction group CONFIGURE_VM_STORAGE with role type USER
2018-01-04 18:23:19,389+02 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand] (EE-ManagedThreadFactory-engine-Thread-33440) [b04e05a0-092e-4add-8312-b8f27e9f3f8a] START, HotPlugDiskVDSCommand(HostName = host_mixed_1, HotPlugDiskVDSParameters:{hostId='120a4b26-b8e5-412c-8f72-086259005be6', vmId='3139a307-f95e-4979-b70a-2be214fb8efc', diskId='46a44352-f8e8-4e82-91dc-9aab69eb7474', addressMap='[bus=0, controller=0, unit=2, type=drive, target=0]'}), log id: 35bba9b8
2018-01-04 18:23:21,582+02 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand] (EE-ManagedThreadFactory-engine-Thread-33440) [b04e05a0-092e-4add-8312-b8f27e9f3f8a] Failed in 'HotPlugDiskVDS' method
2018-01-04 18:23:21,609+02 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-33440) [b04e05a0-092e-4add-8312-b8f27e9f3f8a] EVENT_ID: VDS_BROKER_COMMAND_FAILURE(10,802), VDSM host_mixed_1 command HotPlugDiskVDS failed: internal error: unable to execute QEMU command '__com.redhat_drive_add': Device 'drive-scsi0-0-0-2' could not be initialized
2018-01-04 18:23:21,609+02 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand] (EE-ManagedThreadFactory-engine-Thread-33440) [b04e05a0-092e-4add-8312-b8f27e9f3f8a] Command 'org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand' return value 'StatusOnlyReturn [status=Status [code=45, message=internal error: unable to execute QEMU command '__com.redhat_drive_add': Device 'drive-scsi0-0-0-2' could not be initialized]]'
2018-01-04 18:23:21,609+02 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand] (EE-ManagedThreadFactory-engine-Thread-33440) [b04e05a0-092e-4add-8312-b8f27e9f3f8a] HostName = host_mixed_1
2018-01-04 18:23:21,610+02 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand] (EE-ManagedThreadFactory-engine-Thread-33440) [b04e05a0-092e-4add-8312-b8f27e9f3f8a] Command 'HotPlugDiskVDSCommand(HostName = host_mixed_1, HotPlugDiskVDSParameters:{hostId='120a4b26-b8e5-412c-8f72-086259005be6', vmId='3139a307-f95e-4979-b70a-2be214fb8efc', diskId='46a44352-f8e8-4e82-91dc-9aab69eb7474', addressMap='[bus=0, controller=0, unit=2, type=drive, target=0]'})' execution failed: VDSGenericException: VDSErrorException: Failed to HotPlugDiskVDS, error = internal error: unable to execute QEMU command '__com.redhat_drive_add': Device 'drive-scsi0-0-0-2' could not be initialized, code = 45


Version-Release number of selected component (if applicable):
ovirt-engine-4.2.0.2-0.1.el7.noarch
vdsm -> 4.20.9.3-1
libvirt -> 3.9.0-6
qemu-guest-agent-2.8.0-2

How reproducible:
Happened once so far


Steps to Reproduce:
1. Create a VM with an OS disk (from a template) and run it.
2. Create a 10G IDE disk.
3. Attach the disk to the running VM.

Actual results:
Hot plugging the IDE disk to the running VM fails with: internal error: unable to execute QEMU command '__com.redhat_drive_add': Device 'drive-virtio-disk2' could not be initialized


Expected results:


Additional info:

The issue did not occur when the VM was down, so it looks like a hot plug issue.

CPU usage on the host holding the VM is at 98%, consumed by qemu:
Tasks: 218 total,   2 running, 216 sleeping,   0 stopped,   0 zombie
%Cpu(s): 96.0 us,  2.3 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.0 hi,  1.7 si,  0.0 st
KiB Mem :  3879240 total,   942880 free,  1042044 used,  1894316 buff/cache
KiB Swap:  2097148 total,  2097148 free,        0 used.  2447804 avail Mem 

  PID USER      PR  NI    VIRT    RES    SHR S %CPU %MEM     TIME+ COMMAND                                                                                                                                         
32630 qemu      20   0 1743528 617124  14536 S 94.7 15.9  20:13.53 qemu-kvm         

strace on the qemu process shows many polling events:
recvmsg(32, {msg_name(0)=NULL, msg_iov(1)=[{"e", 1}], msg_controllen=0, msg_flags=MSG_CMSG_CLOEXEC}, MSG_CMSG_CLOEXEC) = 1
ppoll([{fd=6, events=POLLIN}, {fd=8, events=POLLIN}, {fd=10, events=POLLIN}, {fd=11, events=POLLIN}, {fd=18, events=POLLIN}, {fd=20, events=POLLIN}, {fd=23, events=POLLIN}, {fd=24, events=POLLIN}, {fd=25, events=POLLIN}, {fd=32, events=POLLIN}, {fd=34, events=POLLIN}, {fd=35, events=POLLIN}, {fd=40, events=POLLIN}, {fd=41, events=POLLIN}, {fd=42, events=POLLIN}, {fd=47, events=POLLIN}, {fd=48, events=POLLIN}, {fd=49, events=POLLIN}, {fd=50, events=POLLIN}, {fd=51, events=POLLIN}, {fd=52, events=POLLIN}, {fd=53, events=POLLIN}, {fd=54, events=POLLIN}, {fd=55, events=POLLIN}, {fd=56, events=POLLIN}, {fd=57, events=POLLIN}, {fd=58, events=POLLIN}, {fd=59, events=POLLIN}, {fd=60, events=POLLIN}, {fd=61, events=POLLIN}, {fd=62, events=POLLIN}, {fd=63, events=POLLIN}, ...], 52, {1, 402457334}, NULL, 8) = 1 ([{fd=32, revents=POLLIN}], left {1, 402414793})
recvmsg(32, {msg_name(0)=NULL, msg_iov(1)=[{"\"", 1}], msg_controllen=0, msg_flags=MSG_CMSG_CLOEXEC}, MSG_CMSG_CLOEXEC) = 1
ppoll([{fd=6, events=POLLIN}, {fd=8, events=POLLIN}, {fd=10, events=POLLIN}, {fd=11, events=POLLIN}, {fd=18, events=POLLIN}, {fd=20, events=POLLIN}, {fd=23, events=POLLIN}, {fd=24, events=POLLIN}, {fd=25, events=POLLIN}, {fd=32, events=POLLIN}, {fd=34, events=POLLIN}, {fd=35, events=POLLIN}, {fd=40, events=POLLIN}, {fd=41, events=POLLIN}, {fd=42, events=POLLIN}, {fd=47, events=POLLIN}, {fd=48, events=POLLIN}, {fd=49, events=POLLIN}, {fd=50, events=POLLIN}, {fd=51, events=POLLIN}, {fd=52, events=POLLIN}, {fd=53, events=POLLIN}, {fd=54, events=POLLIN}, {fd=55, events=POLLIN}, {fd=56, events=POLLIN}, {fd=57, events=POLLIN}, {fd=58, events=POLLIN}, {fd=59, events=POLLIN}, {fd=60, events=POLLIN}, {fd=61, events=POLLIN}, {fd=62, events=POLLIN}, {fd=63, events=POLLIN}, ...], 52, {1, 358697838}, NULL, 8) = 1 ([{fd=32, revents=POLLIN}], left {1, 358665337})
recvmsg(32, {msg_name(0)=NULL, msg_iov(1)=[{":", 1}], msg_controllen=0, msg_flags=MSG_CMSG_CLOEXEC}, MSG_CMSG_CLOEXEC) = 1
ppoll([{fd=6, events=POLLIN}, {fd=8, events=POLLIN}, {fd=10, events=POLLIN}, {fd=11, events=POLLIN}, {fd=18, events=POLLIN}, {fd=20, events=POLLIN}, {fd=23, events=POLLIN}, {fd=24, events=POLLIN}, {fd=25, events=POLLIN}, {fd=32, events=POLLIN}, {fd=34, events=POLLIN}, {fd=35, events=POLLIN}, {fd=40, events=POLLIN}, {fd=41, events=POLLIN}, {fd=42, events=POLLIN}, {fd=47, events=POLLIN}, {fd=48, events=POLLIN}, {fd=49, events=POLLIN}, {fd=50, events=POLLIN}, {fd=51, events=POLLIN}, {fd=52, events=POLLIN}, {fd=53, events=POLLIN}, {fd=54, events=POLLIN}, {fd=55, events=POLLIN}, {fd=56, events=POLLIN}, {fd=57, events=POLLIN}, {fd=58, events=POLLIN}, {fd=59, events=POLLIN}, {fd=60, events=POLLIN}, {fd=61, events=POLLIN}, {fd=62, events=POLLIN}, {fd=63, events=POLLIN}, ...], 52, {1, 37193021}, NULL, 8^Cstrace: Process 32630 detached
 <detached ...>

Comment 1 Avihai 2018-01-04 17:00:09 UTC
Regression flag set as this issue did not occur in 4.1. ( 4.1.7.5-0.1)

Comment 2 Allon Mureinik 2018-01-04 22:53:20 UTC
(In reply to Avihai from comment #1)
> Regression flag set as this issue did not occur in 4.1. ( 4.1.7.5-0.1)
With what kvm/libvirt versions?

Comment 3 Allon Mureinik 2018-01-04 22:54:40 UTC
Daniel, isn't this a dup of something that's already on your radar?

Comment 4 Red Hat Bugzilla Rules Engine 2018-01-04 22:54:43 UTC
This bug report has Keywords: Regression or TestBlocker.
Since no regressions or test blockers are allowed between releases, it is also being identified as a blocker for this release. Please resolve ASAP.

Comment 5 Allon Mureinik 2018-01-04 22:58:04 UTC
Avihai, the log you shared shows that this disk has a VirtIO interface, not an IDE one:

b70a-2be214fb8efc') Hotplug disk xml: <?xml version='1.0' encoding='UTF-8'?>
<disk address="" device="disk" snapshot="no" type="block">
    <source dev="/rhev/data-center/mnt/blockSD/dfa58e5a-bbae-4adb-9823-6b4b8ddad7d2/images/39fea2f0-08ed-462d-aac2-a41d6a28e971/c398a006-dcf6-4f81-9d30-2fb507697fa1" />
    <target bus="virtio" dev="vdc" /> <!-- Here! -->
    <serial>39fea2f0-08ed-462d-aac2-a41d6a28e971</serial>
    <driver cache="none" error_policy="stop" io="native" name="qemu" type="qcow2" />
</disk>

Is this the wrong log, or is the title of the bug wrong?
Or am I missing something?
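The interface can be read straight off the hotplug XML quoted above; a quick sketch of checking it with the standard library (the <source> and <driver> elements are omitted here for brevity):

```python
import xml.etree.ElementTree as ET

# Disk XML from the vdsm hotplug log quoted above (abridged).
disk_xml = """<disk address="" device="disk" snapshot="no" type="block">
    <target bus="virtio" dev="vdc" />
    <serial>39fea2f0-08ed-462d-aac2-a41d6a28e971</serial>
</disk>"""

disk = ET.fromstring(disk_xml)
bus = disk.find("target").get("bus")  # the bus attribute names the interface
```

An IDE hot plug would show bus="ide" and a dev name like "hdc" instead of "vdc".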

Comment 6 Yaniv Kaul 2018-01-07 08:36:26 UTC
I didn't even know we could hot-plug IDE. Shouldn't we block this?

Comment 7 Daniel Erez 2018-01-07 08:53:02 UTC
(In reply to Yaniv Kaul from comment #6)
> I didn't even know we could hot-plug IDE. Shouldn't we block this?

Hot-plug for IDE is indeed not supported and is blocked.
According to the logs [1], the disk is attached with a virtio interface.
So it seems like a duplicate of bug 1469235?

[1]
<disk address="" device="disk" snapshot="no" type="block">
    <source dev="/rhev/data-center/mnt/blockSD/dfa58e5a-bbae-4adb-9823-6b4b8ddad7d2/images/39fea2f0-08ed-462d-aac2-a41d6a28e971/c398a006-dcf6-4f81-9d30-2fb507697fa1" />
    <target bus="virtio" dev="vdc" />
    <serial>39fea2f0-08ed-462d-aac2-a41d6a28e971</serial>
    <driver cache="none" error_policy="stop" io="native" name="qemu" type="qcow2" />

Comment 8 Avihai 2018-01-07 11:35:00 UTC
I rechecked the logs: the IDE disk was added last in the scenario, so I assumed the issue was with it. I was wrong.

Additional tests show that this is not related to IDE; simply hot plugging another disk (a 4th disk) makes it fail.

In short, it does indeed look like a dup of bug 1469235.

Additional test details:
- Start with an existing VM with an OS disk, up and running.

1) One preallocated 100G disk alone: hot plug/unplug works.
2) IDE disk + one preallocated 100G disk: hot plug/unplug works.
3) IDE disk + preallocated 100G disk + thin 100G disk: hot plug fails.
4) Non-IDE disk + preallocated 100G disk + thin 100G disk: hot plug fails.

Comment 9 Daniel Erez 2018-01-07 13:55:07 UTC
OK, Thanks! Marking as a duplicate of bug 1469235 then.

*** This bug has been marked as a duplicate of bug 1469235 ***

