Bug 844408 - after failed hotplug qemu keeps the file descriptor open
Summary: after failed hotplug qemu keeps the file descriptor open
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: libvirt
Version: 6.3
Hardware: x86_64
OS: Linux
Priority: urgent
Severity: urgent
Target Milestone: rc
Target Release: ---
Assignee: Jiri Denemark
QA Contact: Virtualization Bugs
URL:
Whiteboard: storage
Depends On:
Blocks: 859376
 
Reported: 2012-07-30 14:00 UTC by Haim
Modified: 2014-01-13 00:53 UTC (History)
CC List: 25 users

Fixed In Version: libvirt-0.10.2-1.el6
Doc Type: Bug Fix
Doc Text:
Disk hot plug is a two-part action: the qemuMonitorAddDrive() call is followed by the qemuMonitorAddDevice() call. When the first part succeeded but the second one failed, libvirt failed to roll back the first part and the device remained in use even though the disk hot plug failed. With this update, the rollback for the drive addition is properly performed in the described scenario and disk hot plug now works as expected.
Clone Of:
Environment:
Last Closed: 2013-02-21 07:20:29 UTC
Target Upstream Version:
Embargoed:


Attachments
vdsm log (1.51 MB, application/x-gzip), 2012-07-30 14:03 UTC, Haim
failed attach vdsm, libvirt, qemu logs. (1.41 MB, application/x-gzip), 2012-08-02 09:27 UTC, Eduardo Warszawski
libvirtd.log with filter qemu of debug log (75.30 KB, text/plain), 2012-09-26 02:43 UTC, weizhang


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHSA-2013:0276 0 normal SHIPPED_LIVE Moderate: libvirt security, bug fix, and enhancement update 2013-02-20 21:18:26 UTC

Description Haim 2012-07-30 14:00:20 UTC
Description of problem:

Activate (hot-plug) is not working after a disk unplug, due to a failure in vdsm (prepareImage) to run lvchange:

Thread-13746::DEBUG::2012-07-30 20:04:11,943::BindingXMLRPC::864::vds::(wrapper) client [10.35.97.14]::call vmHotplugDisk with ({'vmId': '3f6cb042-2be6-4c81-8429-56e6d803f438', 'drive': {'iface': 'virtio', 'format': 'raw', 'optional': 'false', 'poolID': 'b8423a1f-8889-469a-b2fa-39ab78ac3a57', 'volumeID': '5c99c73c-0b11-4213-9ff5-20817d201f1e', 'imageID': '74512746-e10f-4416-a384-baa29d92cde5', 'readonly': 'false', 'domainID': '4f38a996-dbeb-4981-885b-742b46a4714f', 'address': {'bus': '0x00', ' slot': '0x05', ' domain': '0x0000', ' type': 'pci', ' function': '0x0'}, 'device': 'disk', 'shared': 'false', 'propagateErrors': 'off', 'type': 'disk'}},) {} flowID [50a78f52]
Thread-13746::DEBUG::2012-07-30 20:04:11,945::task::588::TaskManager.Task::(_updateState) Task=`1813ef09-bbb3-4dad-af69-6e716ade65c4`::moving from state init -> state preparing
Thread-13746::INFO::2012-07-30 20:04:11,946::logUtils::37::dispatcher::(wrapper) Run and protect: prepareImage(sdUUID='4f38a996-dbeb-4981-885b-742b46a4714f', spUUID='b8423a1f-8889-469a-b2fa-39ab78ac3a57', imgUUID='74512746-e10f-4416-a384-baa29d92cde5', volUUID='5c99c73c-0b11-4213-9ff5-20817d201f1e')
Thread-13746::DEBUG::2012-07-30 20:04:11,947::resourceManager::175::ResourceManager.Request::(__init__) ResName=`Storage.4f38a996-dbeb-4981-885b-742b46a4714f`ReqID=`fec7ac3f-a876-435f-b3a2-bf0cc63a5f88`::Request was made in '/usr/share/vdsm/storage/resourceManager.py' line '485' at 'registerResource'
Thread-13746::DEBUG::2012-07-30 20:04:11,948::resourceManager::486::ResourceManager::(registerResource) Trying to register resource 'Storage.4f38a996-dbeb-4981-885b-742b46a4714f' for lock type 'shared'
Thread-13746::DEBUG::2012-07-30 20:04:11,948::resourceManager::528::ResourceManager::(registerResource) Resource 'Storage.4f38a996-dbeb-4981-885b-742b46a4714f' is free. Now locking as 'shared' (1 active user)
Thread-13746::DEBUG::2012-07-30 20:04:11,949::resourceManager::212::ResourceManager.Request::(grant) ResName=`Storage.4f38a996-dbeb-4981-885b-742b46a4714f`ReqID=`fec7ac3f-a876-435f-b3a2-bf0cc63a5f88`::Granted request
Thread-13746::DEBUG::2012-07-30 20:04:11,950::task::817::TaskManager.Task::(resourceAcquired) Task=`1813ef09-bbb3-4dad-af69-6e716ade65c4`::_resourcesAcquired: Storage.4f38a996-dbeb-4981-885b-742b46a4714f (shared)
Thread-13746::DEBUG::2012-07-30 20:04:11,951::task::978::TaskManager.Task::(_decref) Task=`1813ef09-bbb3-4dad-af69-6e716ade65c4`::ref 1 aborting False
Thread-13746::DEBUG::2012-07-30 20:04:11,952::lvm::393::OperationMutex::(_reloadlvs) Operation 'lvm reload operation' got the operation mutex
Thread-13746::DEBUG::2012-07-30 20:04:11,953::__init__::1164::Storage.Misc.excCmd::(_log) '/usr/bin/sudo -n /sbin/lvm lvs --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \\"a%1meita1343565|3514f0c5d6620000c|3514f0c5d6620026f|3514f0c5d66200270|3514f0c5d66200271|3514f0c5d66200272|3514f0c5d66200273%\\", \\"r%.*%\\" ] }  global {  locking_type=1  prioritise_write_locks=1  wait_for_locks=1 }  backup {  retain_min = 50  retain_days = 0 } " --noheadings --units b --nosuffix --separator | -o uuid,name,vg_name,attr,size,seg_start_pe,devices,tags 4f38a996-dbeb-4981-885b-742b46a4714f' (cwd None)
Thread-13746::DEBUG::2012-07-30 20:04:12,201::__init__::1164::Storage.Misc.excCmd::(_log) SUCCESS: <err> = '  /dev/mapper/1meita1343565: read failed after 0 of 4096 at 12884836352: Input/output error\n  /dev/mapper/1meita1343565: read failed after 0 of 4096 at 12884893696: Input/output error\n  /dev/mapper/1meita1343565: read failed after 0 of 4096 at 0: Input/output error\n  WARNING: Error counts reached a limit of 3. Device /dev/mapper/1meita1343565 was disabled\n'; <rc> = 0
Thread-13746::DEBUG::2012-07-30 20:04:12,213::lvm::422::OperationMutex::(_reloadlvs) Operation 'lvm reload operation' released the operation mutex
Thread-13746::DEBUG::2012-07-30 20:04:12,214::__init__::1164::Storage.Misc.excCmd::(_log) '/bin/dd iflag=direct skip=5 bs=512 if=/dev/4f38a996-dbeb-4981-885b-742b46a4714f/metadata count=1' (cwd None)
Thread-13746::DEBUG::2012-07-30 20:04:12,224::__init__::1164::Storage.Misc.excCmd::(_log) SUCCESS: <err> = '1+0 records in\n1+0 records out\n512 bytes (512 B) copied, 0.00046586 s, 1.1 MB/s\n'; <rc> = 0
Thread-13746::DEBUG::2012-07-30 20:04:12,224::misc::334::Storage.Misc::(validateDDBytes) err: ['1+0 records in', '1+0 records out', '512 bytes (512 B) copied, 0.00046586 s, 1.1 MB/s'], size: 512
Thread-13746::INFO::2012-07-30 20:04:12,225::image::319::Storage.Image::(getChain) sdUUID=4f38a996-dbeb-4981-885b-742b46a4714f imgUUID=74512746-e10f-4416-a384-baa29d92cde5 chain=[<storage.blockVolume.BlockVolume object at 0x7f5cb837da10>] 
Thread-13746::INFO::2012-07-30 20:04:12,226::logUtils::39::dispatcher::(wrapper) Run and protect: prepareImage, Return response: {'path': '/rhev/data-center/b8423a1f-8889-469a-b2fa-39ab78ac3a57/4f38a996-dbeb-4981-885b-742b46a4714f/images/74512746-e10f-4416-a384-baa29d92cde5/5c99c73c-0b11-4213-9ff5-20817d201f1e', 'chain': [{'path': '/rhev/data-center/b8423a1f-8889-469a-b2fa-39ab78ac3a57/4f38a996-dbeb-4981-885b-742b46a4714f/images/74512746-e10f-4416-a384-baa29d92cde5/5c99c73c-0b11-4213-9ff5-20817d201f1e', 'domainID': '4f38a996-dbeb-4981-885b-742b46a4714f', 'volumeID': '5c99c73c-0b11-4213-9ff5-20817d201f1e', 'imageID': '74512746-e10f-4416-a384-baa29d92cde5'}]}
Thread-13746::DEBUG::2012-07-30 20:04:12,226::task::1172::TaskManager.Task::(prepare) Task=`1813ef09-bbb3-4dad-af69-6e716ade65c4`::finished: {'path': '/rhev/data-center/b8423a1f-8889-469a-b2fa-39ab78ac3a57/4f38a996-dbeb-4981-885b-742b46a4714f/images/74512746-e10f-4416-a384-baa29d92cde5/5c99c73c-0b11-4213-9ff5-20817d201f1e', 'chain': [{'path': '/rhev/data-center/b8423a1f-8889-469a-b2fa-39ab78ac3a57/4f38a996-dbeb-4981-885b-742b46a4714f/images/74512746-e10f-4416-a384-baa29d92cde5/5c99c73c-0b11-4213-9ff5-20817d201f1e', 'domainID': '4f38a996-dbeb-4981-885b-742b46a4714f', 'volumeID': '5c99c73c-0b11-4213-9ff5-20817d201f1e', 'imageID': '74512746-e10f-4416-a384-baa29d92cde5'}]}
Thread-13746::DEBUG::2012-07-30 20:04:12,227::task::588::TaskManager.Task::(_updateState) Task=`1813ef09-bbb3-4dad-af69-6e716ade65c4`::moving from state preparing -> state finished
Thread-13746::DEBUG::2012-07-30 20:04:12,227::resourceManager::809::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {'Storage.4f38a996-dbeb-4981-885b-742b46a4714f': < ResourceRef 'Storage.4f38a996-dbeb-4981-885b-742b46a4714f', isValid: 'True' obj: 'None'>}
Thread-13746::DEBUG::2012-07-30 20:04:12,228::resourceManager::844::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-13746::DEBUG::2012-07-30 20:04:12,228::resourceManager::538::ResourceManager::(releaseResource) Trying to release resource 'Storage.4f38a996-dbeb-4981-885b-742b46a4714f'
Thread-13746::DEBUG::2012-07-30 20:04:12,229::resourceManager::553::ResourceManager::(releaseResource) Released resource 'Storage.4f38a996-dbeb-4981-885b-742b46a4714f' (0 active users)
Thread-13746::DEBUG::2012-07-30 20:04:12,229::resourceManager::558::ResourceManager::(releaseResource) Resource 'Storage.4f38a996-dbeb-4981-885b-742b46a4714f' is free, finding out if anyone is waiting for it.
Thread-13746::DEBUG::2012-07-30 20:04:12,230::resourceManager::565::ResourceManager::(releaseResource) No one is waiting for resource 'Storage.4f38a996-dbeb-4981-885b-742b46a4714f', Clearing records.
Thread-13746::DEBUG::2012-07-30 20:04:12,230::task::978::TaskManager.Task::(_decref) Task=`1813ef09-bbb3-4dad-af69-6e716ade65c4`::ref 0 aborting False
Thread-13746::INFO::2012-07-30 20:04:12,231::clientIF::227::vds::(prepareVolumePath) prepared volume path: /rhev/data-center/b8423a1f-8889-469a-b2fa-39ab78ac3a57/4f38a996-dbeb-4981-885b-742b46a4714f/images/74512746-e10f-4416-a384-baa29d92cde5/5c99c73c-0b11-4213-9ff5-20817d201f1e
Thread-13746::DEBUG::2012-07-30 20:04:12,232::task::588::TaskManager.Task::(_updateState) Task=`79f8dab6-c04a-4242-b31b-9f7edabc460a`::moving from state init -> state preparing
Thread-13746::INFO::2012-07-30 20:04:12,232::logUtils::37::dispatcher::(wrapper) Run and protect: getVolumeSize(sdUUID='4f38a996-dbeb-4981-885b-742b46a4714f', spUUID='b8423a1f-8889-469a-b2fa-39ab78ac3a57', imgUUID='74512746-e10f-4416-a384-baa29d92cde5', volUUID='5c99c73c-0b11-4213-9ff5-20817d201f1e', options=None)
Thread-13746::DEBUG::2012-07-30 20:04:12,233::resourceManager::175::ResourceManager.Request::(__init__) ResName=`Storage.4f38a996-dbeb-4981-885b-742b46a4714f`ReqID=`22aa013c-3de3-4dae-8152-5f54f3fde22c`::Request was made in '/usr/share/vdsm/storage/resourceManager.py' line '485' at 'registerResource'
Thread-13746::DEBUG::2012-07-30 20:04:12,233::resourceManager::486::ResourceManager::(registerResource) Trying to register resource 'Storage.4f38a996-dbeb-4981-885b-742b46a4714f' for lock type 'shared'
Thread-13746::DEBUG::2012-07-30 20:04:12,234::resourceManager::528::ResourceManager::(registerResource) Resource 'Storage.4f38a996-dbeb-4981-885b-742b46a4714f' is free. Now locking as 'shared' (1 active user)
Thread-13746::DEBUG::2012-07-30 20:04:12,234::resourceManager::212::ResourceManager.Request::(grant) ResName=`Storage.4f38a996-dbeb-4981-885b-742b46a4714f`ReqID=`22aa013c-3de3-4dae-8152-5f54f3fde22c`::Granted request
Thread-13746::DEBUG::2012-07-30 20:04:12,235::task::817::TaskManager.Task::(resourceAcquired) Task=`79f8dab6-c04a-4242-b31b-9f7edabc460a`::_resourcesAcquired: Storage.4f38a996-dbeb-4981-885b-742b46a4714f (shared)
Thread-13746::DEBUG::2012-07-30 20:04:12,235::task::978::TaskManager.Task::(_decref) Task=`79f8dab6-c04a-4242-b31b-9f7edabc460a`::ref 1 aborting False
Thread-13746::INFO::2012-07-30 20:04:12,236::logUtils::39::dispatcher::(wrapper) Run and protect: getVolumeSize, Return response: {'truesize': '5368709120', 'apparentsize': '5368709120'}
Thread-13746::DEBUG::2012-07-30 20:04:12,236::task::1172::TaskManager.Task::(prepare) Task=`79f8dab6-c04a-4242-b31b-9f7edabc460a`::finished: {'truesize': '5368709120', 'apparentsize': '5368709120'}
Thread-13746::DEBUG::2012-07-30 20:04:12,237::task::588::TaskManager.Task::(_updateState) Task=`79f8dab6-c04a-4242-b31b-9f7edabc460a`::moving from state preparing -> state finished
Thread-13746::DEBUG::2012-07-30 20:04:12,237::resourceManager::809::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {'Storage.4f38a996-dbeb-4981-885b-742b46a4714f': < ResourceRef 'Storage.4f38a996-dbeb-4981-885b-742b46a4714f', isValid: 'True' obj: 'None'>}
Thread-13746::DEBUG::2012-07-30 20:04:12,238::resourceManager::844::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-13746::DEBUG::2012-07-30 20:04:12,238::resourceManager::538::ResourceManager::(releaseResource) Trying to release resource 'Storage.4f38a996-dbeb-4981-885b-742b46a4714f'
Thread-13746::DEBUG::2012-07-30 20:04:12,239::resourceManager::553::ResourceManager::(releaseResource) Released resource 'Storage.4f38a996-dbeb-4981-885b-742b46a4714f' (0 active users)
Thread-13746::DEBUG::2012-07-30 20:04:12,239::resourceManager::558::ResourceManager::(releaseResource) Resource 'Storage.4f38a996-dbeb-4981-885b-742b46a4714f' is free, finding out if anyone is waiting for it.
Thread-13746::DEBUG::2012-07-30 20:04:12,240::resourceManager::565::ResourceManager::(releaseResource) No one is waiting for resource 'Storage.4f38a996-dbeb-4981-885b-742b46a4714f', Clearing records.
Thread-13746::DEBUG::2012-07-30 20:04:12,240::task::978::TaskManager.Task::(_decref) Task=`79f8dab6-c04a-4242-b31b-9f7edabc460a`::ref 0 aborting False
Thread-13746::DEBUG::2012-07-30 20:04:12,242::libvirtvm::1472::vm.Vm::(hotplugDisk) vmId=`3f6cb042-2be6-4c81-8429-56e6d803f438`::Hotplug disk xml: <disk device="disk" snapshot="no" type="block">
Thread-13746::ERROR::2012-07-30 20:04:12,530::libvirtvm::1477::vm.Vm::(hotplugDisk) vmId=`3f6cb042-2be6-4c81-8429-56e6d803f438`::Hotplug failed
Thread-13746::DEBUG::2012-07-30 20:04:12,531::task::588::TaskManager.Task::(_updateState) Task=`8a565abc-6792-42f1-b326-d2f4d2ce03d1`::moving from state init -> state preparing
Thread-13746::INFO::2012-07-30 20:04:12,532::logUtils::37::dispatcher::(wrapper) Run and protect: teardownImage(sdUUID='4f38a996-dbeb-4981-885b-742b46a4714f', spUUID='b8423a1f-8889-469a-b2fa-39ab78ac3a57', imgUUID='74512746-e10f-4416-a384-baa29d92cde5', volUUID=None)
Thread-13746::DEBUG::2012-07-30 20:04:12,532::resourceManager::175::ResourceManager.Request::(__init__) ResName=`Storage.4f38a996-dbeb-4981-885b-742b46a4714f`ReqID=`bb770ba2-4633-4e76-9638-814e2115a150`::Request was made in '/usr/share/vdsm/storage/resourceManager.py' line '485' at 'registerResource'
Thread-13746::DEBUG::2012-07-30 20:04:12,533::resourceManager::486::ResourceManager::(registerResource) Trying to register resource 'Storage.4f38a996-dbeb-4981-885b-742b46a4714f' for lock type 'shared'
Thread-13746::DEBUG::2012-07-30 20:04:12,533::resourceManager::528::ResourceManager::(registerResource) Resource 'Storage.4f38a996-dbeb-4981-885b-742b46a4714f' is free. Now locking as 'shared' (1 active user)
Thread-13746::DEBUG::2012-07-30 20:04:12,534::resourceManager::212::ResourceManager.Request::(grant) ResName=`Storage.4f38a996-dbeb-4981-885b-742b46a4714f`ReqID=`bb770ba2-4633-4e76-9638-814e2115a150`::Granted request
Thread-13746::DEBUG::2012-07-30 20:04:12,534::task::817::TaskManager.Task::(resourceAcquired) Task=`8a565abc-6792-42f1-b326-d2f4d2ce03d1`::_resourcesAcquired: Storage.4f38a996-dbeb-4981-885b-742b46a4714f (shared)
Thread-13746::DEBUG::2012-07-30 20:04:12,535::task::978::TaskManager.Task::(_decref) Task=`8a565abc-6792-42f1-b326-d2f4d2ce03d1`::ref 1 aborting False
Thread-13746::DEBUG::2012-07-30 20:04:12,535::lvm::393::OperationMutex::(_reloadlvs) Operation 'lvm reload operation' got the operation mutex
Thread-13746::DEBUG::2012-07-30 20:04:12,536::__init__::1164::Storage.Misc.excCmd::(_log) '/usr/bin/sudo -n /sbin/lvm lvs --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \\"a%1meita1343565|3514f0c5d6620000c|3514f0c5d6620026f|3514f0c5d66200270|3514f0c5d66200271|3514f0c5d66200272|3514f0c5d66200273%\\", \\"r%.*%\\" ] }  global {  locking_type=1  prioritise_write_locks=1  wait_for_locks=1 }  backup {  retain_min = 50  retain_days = 0 } " --noheadings --units b --nosuffix --separator | -o uuid,name,vg_name,attr,size,seg_start_pe,devices,tags 4f38a996-dbeb-4981-885b-742b46a4714f' (cwd None)
Thread-13746::DEBUG::2012-07-30 20:04:12,793::__init__::1164::Storage.Misc.excCmd::(_log) SUCCESS: <err> = '  /dev/mapper/1meita1343565: read failed after 0 of 4096 at 12884836352: Input/output error\n  /dev/mapper/1meita1343565: read failed after 0 of 4096 at 12884893696: Input/output error\n  /dev/mapper/1meita1343565: read failed after 0 of 4096 at 0: Input/output error\n  WARNING: Error counts reached a limit of 3. Device /dev/mapper/1meita1343565 was disabled\n'; <rc> = 0
Thread-13746::DEBUG::2012-07-30 20:04:12,804::lvm::422::OperationMutex::(_reloadlvs) Operation 'lvm reload operation' released the operation mutex
Thread-13746::DEBUG::2012-07-30 20:04:12,805::__init__::1164::Storage.Misc.excCmd::(_log) '/bin/dd iflag=direct skip=5 bs=512 if=/dev/4f38a996-dbeb-4981-885b-742b46a4714f/metadata count=1' (cwd None)
Thread-13746::DEBUG::2012-07-30 20:04:12,815::__init__::1164::Storage.Misc.excCmd::(_log) SUCCESS: <err> = '1+0 records in\n1+0 records out\n512 bytes (512 B) copied, 0.000513591 s, 997 kB/s\n'; <rc> = 0
Thread-13746::DEBUG::2012-07-30 20:04:12,816::misc::334::Storage.Misc::(validateDDBytes) err: ['1+0 records in', '1+0 records out', '512 bytes (512 B) copied, 0.000513591 s, 997 kB/s'], size: 512
Thread-13746::DEBUG::2012-07-30 20:04:12,817::__init__::1164::Storage.Misc.excCmd::(_log) '/bin/dd iflag=direct skip=5 bs=512 if=/dev/4f38a996-dbeb-4981-885b-742b46a4714f/metadata count=1' (cwd None)
Thread-13746::DEBUG::2012-07-30 20:04:12,826::__init__::1164::Storage.Misc.excCmd::(_log) SUCCESS: <err> = '1+0 records in\n1+0 records out\n512 bytes (512 B) copied, 0.000451279 s, 1.1 MB/s\n'; <rc> = 0
Thread-13746::DEBUG::2012-07-30 20:04:12,826::misc::334::Storage.Misc::(validateDDBytes) err: ['1+0 records in', '1+0 records out', '512 bytes (512 B) copied, 0.000451279 s, 1.1 MB/s'], size: 512
Thread-13746::INFO::2012-07-30 20:04:12,827::image::319::Storage.Image::(getChain) sdUUID=4f38a996-dbeb-4981-885b-742b46a4714f imgUUID=74512746-e10f-4416-a384-baa29d92cde5 chain=[<storage.blockVolume.BlockVolume object at 0x2aa5750>] 
Thread-13746::DEBUG::2012-07-30 20:04:12,828::__init__::1164::Storage.Misc.excCmd::(_log) '/usr/bin/sudo -n /sbin/lvm lvchange --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \\"a%1meita1343565|3514f0c5d6620000c|3514f0c5d6620026f|3514f0c5d66200270|3514f0c5d66200271|3514f0c5d66200272|3514f0c5d66200273%\\", \\"r%.*%\\" ] }  global {  locking_type=1  prioritise_write_locks=1  wait_for_locks=1 }  backup {  retain_min = 50  retain_days = 0 } " --autobackup n --available n 4f38a996-dbeb-4981-885b-742b46a4714f/5c99c73c-0b11-4213-9ff5-20817d201f1e' (cwd None)
Thread-13746::DEBUG::2012-07-30 20:04:17,914::__init__::1164::Storage.Misc.excCmd::(_log) FAILED: <err> = '  /dev/mapper/1meita1343565: read failed after 0 of 4096 at 12884836352: Input/output error\n  /dev/mapper/1meita1343565: read failed after 0 of 4096 at 12884893696: Input/output error\n  /dev/mapper/1meita1343565: read failed after 0 of 4096 at 0: Input/output error\n  WARNING: Error counts reached a limit of 3. Device /dev/mapper/1meita1343565 was disabled\n  device-mapper: remove ioctl on  failed: Device or resource busy\n  device-mapper: remove ioctl on  failed: Device or resource busy\n  device-mapper: remove ioctl on  failed: Device or resource busy\n  device-mapper: remove ioctl on  failed: Device or resource busy\n  device-mapper: remove ioctl on  failed: Device or resource busy\n  device-mapper: remove ioctl on  failed: Device or resource busy\n  device-mapper: remove ioctl on  failed: Device or resource busy\n  device-mapper: remove ioctl on  failed: Device or resource busy\n  device-mapper: remove ioctl on  failed: Device or resource busy\n  device-mapper: remove ioctl on  failed: Device or resource busy\n  device-mapper: remove ioctl on  failed: Device or resource busy\n  device-mapper: remove ioctl on  failed: Device or resource busy\n  device-mapper: remove ioctl on  failed: Device or resource busy\n  device-mapper: remove ioctl on  failed: Device or resource busy\n  device-mapper: remove ioctl on  failed: Device or resource busy\n  device-mapper: remove ioctl on  failed: Device or resource busy\n  device-mapper: remove ioctl on  failed: Device or resource busy\n  device-mapper: remove ioctl on  failed: Device or resource busy\n  device-mapper: remove ioctl on  failed: Device or resource busy\n  device-mapper: remove ioctl on  failed: Device or resource busy\n  device-mapper: remove ioctl on  failed: Device or resource busy\n  device-mapper: remove ioctl on  failed: Device or resource busy\n  device-mapper: remove ioctl on  failed: Device or resource busy\n  device-mapper: remove ioctl on  failed: Device or resource busy\n  device-mapper: remove ioctl on  failed: Device or resource busy\n  Unable to deactivate 4f38a996--dbeb--4981--885b--742b46a4714f-5c99c73c--0b11--4213--9ff5--20817d201f1e (253:41)\n'; <rc> = 5
Thread-13746::DEBUG::2012-07-30 20:04:17,920::lvm::478::OperationMutex::(_invalidatelvs) Operation 'lvm invalidate operation' got the operation mutex
Thread-13746::DEBUG::2012-07-30 20:04:17,920::lvm::490::OperationMutex::(_invalidatelvs) Operation 'lvm invalidate operation' released the operation mutex
Thread-13746::ERROR::2012-07-30 20:04:17,921::task::853::TaskManager.Task::(_setError) Task=`8a565abc-6792-42f1-b326-d2f4d2ce03d1`::Unexpected error
Thread-13746::DEBUG::2012-07-30 20:04:17,922::task::872::TaskManager.Task::(_run) Task=`8a565abc-6792-42f1-b326-d2f4d2ce03d1`::Task._run: 8a565abc-6792-42f1-b326-d2f4d2ce03d1 ('4f38a996-dbeb-4981-885b-742b46a4714f', 'b8423a1f-8889-469a-b2fa-39ab78ac3a57', '74512746-e10f-4416-a384-baa29d92cde5') {} failed - stopping task
Thread-13746::DEBUG::2012-07-30 20:04:17,922::task::1199::TaskManager.Task::(stop) Task=`8a565abc-6792-42f1-b326-d2f4d2ce03d1`::stopping in state preparing (force False)
Thread-13746::DEBUG::2012-07-30 20:04:17,923::task::978::TaskManager.Task::(_decref) Task=`8a565abc-6792-42f1-b326-d2f4d2ce03d1`::ref 1 aborting True
Thread-13746::INFO::2012-07-30 20:04:17,923::task::1157::TaskManager.Task::(prepare) Task=`8a565abc-6792-42f1-b326-d2f4d2ce03d1`::aborting: Task is aborted: 'Cannot deactivate Logical Volume' - code 552
Thread-13746::DEBUG::2012-07-30 20:04:17,924::task::1162::TaskManager.Task::(prepare) Task=`8a565abc-6792-42f1-b326-d2f4d2ce03d1`::Prepare: aborted: Cannot deactivate Logical Volume
Thread-13746::DEBUG::2012-07-30 20:04:17,924::task::978::TaskManager.Task::(_decref) Task=`8a565abc-6792-42f1-b326-d2f4d2ce03d1`::ref 0 aborting True
Thread-13746::DEBUG::2012-07-30 20:04:17,925::task::913::TaskManager.Task::(_doAbort) Task=`8a565abc-6792-42f1-b326-d2f4d2ce03d1`::Task._doAbort: force False
Thread-13746::DEBUG::2012-07-30 20:04:17,925::resourceManager::844::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-13746::DEBUG::2012-07-30 20:04:17,926::task::588::TaskManager.Task::(_updateState) Task=`8a565abc-6792-42f1-b326-d2f4d2ce03d1`::moving from state preparing -> state aborting
Thread-13746::DEBUG::2012-07-30 20:04:17,926::task::537::TaskManager.Task::(__state_aborting) Task=`8a565abc-6792-42f1-b326-d2f4d2ce03d1`::_aborting: recover policy none
Thread-13746::DEBUG::2012-07-30 20:04:17,927::task::588::TaskManager.Task::(_updateState) Task=`8a565abc-6792-42f1-b326-d2f4d2ce03d1`::moving from state aborting -> state failed
Thread-13746::DEBUG::2012-07-30 20:04:17,927::resourceManager::809::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {'Storage.4f38a996-dbeb-4981-885b-742b46a4714f': < ResourceRef 'Storage.4f38a996-dbeb-4981-885b-742b46a4714f', isValid: 'True' obj: 'None'>}
Thread-13746::DEBUG::2012-07-30 20:04:17,928::resourceManager::844::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-13746::DEBUG::2012-07-30 20:04:17,928::resourceManager::538::ResourceManager::(releaseResource) Trying to release resource 'Storage.4f38a996-dbeb-4981-885b-742b46a4714f'
Thread-13746::DEBUG::2012-07-30 20:04:17,929::resourceManager::553::ResourceManager::(releaseResource) Released resource 'Storage.4f38a996-dbeb-4981-885b-742b46a4714f' (0 active users)
Thread-13746::DEBUG::2012-07-30 20:04:17,929::resourceManager::558::ResourceManager::(releaseResource) Resource 'Storage.4f38a996-dbeb-4981-885b-742b46a4714f' is free, finding out if anyone is waiting for it.
Thread-13746::DEBUG::2012-07-30 20:04:17,930::resourceManager::565::ResourceManager::(releaseResource) No one is waiting for resource 'Storage.4f38a996-dbeb-4981-885b-742b46a4714f', Clearing records.
Thread-13746::ERROR::2012-07-30 20:04:17,930::dispatcher::66::Storage.Dispatcher.Protect::(run) {'status': {'message': 'Cannot deactivate Logical Volume: (\'General Storage Exception: ("5 [] [\\\'  /dev/mapper/1meita1343565: read failed after 0 of 4096 at 12884836352: Input/output error\\\', \\\'  /dev/mapper/1meita1343565: read failed after 0 of 4096 at 12884893696: Input/output error\\\', \\\'  /dev/mapper/1meita1343565: read failed after 0 of 4096 at 0: Input/output error\\\', \\\'  WARNING: Error counts reached a limit of 3. Device /dev/mapper/1meita1343565 was disabled\\\', \\\'  device-mapper: remove ioctl on  failed: Device or resource busy\\\', \\\'  device-mapper: remove ioctl on  failed: Device or resource busy\\\', \\\'  device-mapper: remove ioctl on  failed: Device or resource busy\\\', \\\'  device-mapper: remove ioctl on  failed: Device or resource busy\\\', \\\'  device-mapper: remove ioctl on  failed: Device or resource busy\\\', \\\'  device-mapper: remove ioctl on  failed: Device or resource busy\\\', \\\'  device-mapper: remove ioctl on  failed: Device or resource busy\\\', \\\'  device-mapper: remove ioctl on  failed: Device or resource busy\\\', \\\'  device-mapper: remove ioctl on  failed: Device or resource busy\\\', \\\'  device-mapper: remove ioctl on  failed: Device or resource busy\\\', \\\'  device-mapper: remove ioctl on  failed: Device or resource busy\\\', \\\'  device-mapper: remove ioctl on  failed: Device or resource busy\\\', \\\'  device-mapper: remove ioctl on  failed: Device or resource busy\\\', \\\'  device-mapper: remove ioctl on  failed: Device or resource busy\\\', \\\'  device-mapper: remove ioctl on  failed: Device or resource busy\\\', \\\'  device-mapper: remove ioctl on  failed: Device or resource busy\\\', \\\'  device-mapper: remove ioctl on  failed: Device or resource busy\\\', \\\'  device-mapper: remove ioctl on  failed: Device or resource busy\\\', \\\'  device-mapper: remove ioctl on  failed: Device or resource busy\\\', \\\'  device-mapper: remove ioctl on  failed: Device or resource busy\\\', \\\'  device-mapper: remove ioctl on  failed: Device or resource busy\\\', \\\'  device-mapper: remove ioctl on  failed: Device or resource busy\\\', \\\'  device-mapper: remove ioctl on  failed: Device or resource busy\\\', \\\'  device-mapper: remove ioctl on  failed: Device or resource busy\\\', \\\'  device-mapper: remove ioctl on  failed: Device or resource busy\\\', \\\'  Unable to deactivate 4f38a996--dbeb--4981--885b--742b46a4714f-5c99c73c--0b11--4213--9ff5--20817d201f1e (253:41)\\\']\\\\n4f38a996-dbeb-4981-885b-742b46a4714f/[\\\'5c99c73c-0b11-4213-9ff5-20817d201f1e\\\']",)\',)', 'code': 552}}
Thread-13746::DEBUG::2012-07-30 20:04:17,931::BindingXMLRPC::870::vds::(wrapper) return vmHotplugDisk with {'status': {'message': "internal error unable to execute QEMU command 'device_add': Device 'virtio-blk-pci' could not be initialized", 'code': 45}}


failing lvchange command:


'/usr/bin/sudo -n /sbin/lvm lvchange --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \\"a%1meita1343565|3514f0c5d6620000c|3514f0c5d6620026f|3514f0c5d66200270|3514f0c5d66200271|3514f0c5d66200272|3514f0c5d66200273%\\", \\"r%.*%\\" ] }  global {  locking_type=1  prioritise_write_locks=1  wait_for_locks=1 }  backup {  retain_min = 50  retain_days = 0 } " --autobackup n --available n 4f38a996-dbeb-4981-885b-742b46a4714f/5c99c73c-0b11-4213-9ff5-20817d201f1e' (cwd None)

stat of the lv:

  LV                                   VG                                   Attr     LSize Pool Origin Data%  Move Log Copy%  Convert
  5c99c73c-0b11-4213-9ff5-20817d201f1e 4f38a996-dbeb-4981-885b-742b46a4714f -wi-ao-- 5.00g  

manual run of the lvchange command, independently of vdsm:

[root@nott-vds2 ~]# lvchange -a n /dev/4f38a996-dbeb-4981-885b-742b46a4714f/5c99c73c-0b11-4213-9ff5-20817d201f1e
  device-mapper: remove ioctl on  failed: Device or resource busy
  device-mapper: remove ioctl on  failed: Device or resource busy
  device-mapper: remove ioctl on  failed: Device or resource busy
  device-mapper: remove ioctl on  failed: Device or resource busy
  device-mapper: remove ioctl on  failed: Device or resource busy
  device-mapper: remove ioctl on  failed: Device or resource busy
  device-mapper: remove ioctl on  failed: Device or resource busy
  device-mapper: remove ioctl on  failed: Device or resource busy
  device-mapper: remove ioctl on  failed: Device or resource busy
  device-mapper: remove ioctl on  failed: Device or resource busy
  device-mapper: remove ioctl on  failed: Device or resource busy
  device-mapper: remove ioctl on  failed: Device or resource busy
  device-mapper: remove ioctl on  failed: Device or resource busy
  device-mapper: remove ioctl on  failed: Device or resource busy
  device-mapper: remove ioctl on  failed: Device or resource busy
  device-mapper: remove ioctl on  failed: Device or resource busy
  device-mapper: remove ioctl on  failed: Device or resource busy
  device-mapper: remove ioctl on  failed: Device or resource busy
  device-mapper: remove ioctl on  failed: Device or resource busy
  device-mapper: remove ioctl on  failed: Device or resource busy
  device-mapper: remove ioctl on  failed: Device or resource busy
  device-mapper: remove ioctl on  failed: Device or resource busy
  device-mapper: remove ioctl on  failed: Device or resource busy
  device-mapper: remove ioctl on  failed: Device or resource busy
  device-mapper: remove ioctl on  failed: Device or resource busy
  Unable to deactivate 4f38a996--dbeb--4981--885b--742b46a4714f-5c99c73c--0b11--4213--9ff5--20817d201f1e (253:41)

I suspect that hot-unplug didn't work properly and never released the LV (after an unplug its attributes should be without 'o', which means open).

Comment 1 Haim 2012-07-30 14:03:41 UTC
Created attachment 601283 [details]
vdsm log

Comment 5 Ayal Baron 2012-08-01 21:24:37 UTC
Haim, can you check which process is holding the device? Could be that qemu just didn't close the FD or something.
Which version of qemu-kvm are you using?

Comment 6 Haim 2012-08-02 07:29:55 UTC
The disk is locked by the qemu process:

[root@nott-vds2 ~]# ls -l /rhev/data-center/44de127a-01b2-4295-8959-4733109c0f93/ce59f7e8-d3c6-49a5-a4f4-d55586863278/images/bf46d4bc-4588-40ac-a08c-f8f710c85718/e44b57d4-1dc2-4fc4-8ece-c3eedbf9b92e
lrwxrwxrwx. 1 vdsm kvm 78 Aug  2 13:35 /rhev/data-center/44de127a-01b2-4295-8959-4733109c0f93/ce59f7e8-d3c6-49a5-a4f4-d55586863278/images/bf46d4bc-4588-40ac-a08c-f8f710c85718/e44b57d4-1dc2-4fc4-8ece-c3eedbf9b92e -> /dev/ce59f7e8-d3c6-49a5-a4f4-d55586863278/e44b57d4-1dc2-4fc4-8ece-c3eedbf9b92e
[root@nott-vds2 ~]# lvs /dev/ce59f7e8-d3c6-49a5-a4f4-d55586863278/e44b57d4-1dc2-4fc4-8ece-c3eedbf9b92e 
  LV                                   VG                                   Attr     LSize Pool Origin Data%  Move Log Copy%  Convert
  e44b57d4-1dc2-4fc4-8ece-c3eedbf9b92e ce59f7e8-d3c6-49a5-a4f4-d55586863278 -wi-ao-- 1.00g                                           
[root@nott-vds2 ~]# lvchange -a n lvs /dev/ce59f7e8-d3c6-49a5-a4f4-d55586863278/e44b57d4-1dc2-4fc4-8ece-c3eedbf9b92e
  Volume group "lvs" not found
  Skipping volume group lvs
  device-mapper: remove ioctl on  failed: Device or resource busy
  device-mapper: remove ioctl on  failed: Device or resource busy
  device-mapper: remove ioctl on  failed: Device or resource busy
  device-mapper: remove ioctl on  failed: Device or resource busy
  device-mapper: remove ioctl on  failed: Device or resource busy
  device-mapper: remove ioctl on  failed: Device or resource busy
  device-mapper: remove ioctl on  failed: Device or resource busy
  device-mapper: remove ioctl on  failed: Device or resource busy
  device-mapper: remove ioctl on  failed: Device or resource busy
  device-mapper: remove ioctl on  failed: Device or resource busy
  device-mapper: remove ioctl on  failed: Device or resource busy
  device-mapper: remove ioctl on  failed: Device or resource busy
  device-mapper: remove ioctl on  failed: Device or resource busy
  device-mapper: remove ioctl on  failed: Device or resource busy
  device-mapper: remove ioctl on  failed: Device or resource busy
  device-mapper: remove ioctl on  failed: Device or resource busy
  device-mapper: remove ioctl on  failed: Device or resource busy
  device-mapper: remove ioctl on  failed: Device or resource busy
  device-mapper: remove ioctl on  failed: Device or resource busy
  device-mapper: remove ioctl on  failed: Device or resource busy
  device-mapper: remove ioctl on  failed: Device or resource busy
  device-mapper: remove ioctl on  failed: Device or resource busy
  device-mapper: remove ioctl on  failed: Device or resource busy
  device-mapper: remove ioctl on  failed: Device or resource busy
  device-mapper: remove ioctl on  failed: Device or resource busy
  Unable to deactivate ce59f7e8--d3c6--49a5--a4f4--d55586863278-e44b57d4--1dc2--4fc4--8ece--c3eedbf9b92e (253:14)
[root@nott-vds2 ~]# lsof | /dev/ce59f7e8-d3c6-49a5-a4f4-d55586863278/e44b57d4-1dc2-4fc4-8ece-c3eedbf9b92e
-bash: /dev/ce59f7e8-d3c6-49a5-a4f4-d55586863278/e44b57d4-1dc2-4fc4-8ece-c3eedbf9b92e: Permission denied
[root@nott-vds2 ~]# lsof  /dev/ce59f7e8-d3c6-49a5-a4f4-d55586863278/e44b57d4-1dc2-4fc4-8ece-c3eedbf9b92e
COMMAND    PID USER   FD   TYPE DEVICE   SIZE/OFF   NODE NAME
qemu-kvm 17755 qemu   10u   BLK 253,14 0x40000000 993119 /dev/ce59f7e8-d3c6-49a5-a4f4-d55586863278/../dm-14

Comment 7 Haim 2012-08-02 07:39:43 UTC
Additional info: the libvirt XML lacks the disk, indicating that it was indeed unplugged (or at least, that libvirt thinks it was unplugged), but the qemu process shows it is still plugged:

[root@nott-vds2 ~]# virsh -r dumpxml 1
<domain type='kvm' id='1'>
  <name>new_vm</name>
  <uuid>d7810094-bf60-4651-9388-5c3f2a9eae4f</uuid>
  <memory unit='KiB'>524288</memory>
  <currentMemory unit='KiB'>524288</currentMemory>
  <vcpu placement='static'>1</vcpu>
  <cputune>
    <shares>1020</shares>
  </cputune>
  <sysinfo type='smbios'>
    <system>
      <entry name='manufacturer'>Red Hat</entry>
      <entry name='product'>RHEV Hypervisor</entry>
      <entry name='version'>6Server-6.3.0.3.el6</entry>
      <entry name='serial'>38373035-3536-4247-3830-333334344130_78:E7:D1:E4:8C:58</entry>
      <entry name='uuid'>d7810094-bf60-4651-9388-5c3f2a9eae4f</entry>
    </system>
  </sysinfo>
  <os>
    <type arch='x86_64' machine='rhel6.3.0'>hvm</type>
    <boot dev='hd'/>
    <smbios mode='sysinfo'/>
  </os>
  <features>
    <acpi/>
  </features>
  <cpu mode='custom' match='exact'>
    <model fallback='allow'>Conroe</model>
    <topology sockets='1' cores='1' threads='1'/>
  </cpu>
  <clock offset='variable' adjustment='0'>
    <timer name='rtc' tickpolicy='catchup'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <devices>
    <emulator>/usr/libexec/qemu-kvm</emulator>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source startupPolicy='optional'/>
      <target dev='hdc' bus='ide'/>
      <readonly/>
      <serial></serial>
      <alias name='ide0-1-0'/>
      <address type='drive' controller='0' bus='1' target='0' unit='0'/>
    </disk>
    <controller type='usb' index='0'>
      <alias name='usb0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
    </controller>
    <controller type='ide' index='0'>
      <alias name='ide0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <alias name='virtio-serial0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </controller>
    <interface type='bridge'>
      <mac address='00:1a:4a:23:61:5d'/>
      <source bridge='rhevm'/>
      <target dev='vnet0'/>
      <model type='virtio'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
    <console type='pty' tty='/dev/pts/1'>
      <source path='/dev/pts/1'/>
      <target type='virtio' port='0'/>
      <alias name='console0'/>
    </console>
    <channel type='unix'>
      <source mode='bind' path='/var/lib/libvirt/qemu/channels/new_vm.com.redhat.rhevm.vdsm'/>
      <target type='virtio' name='com.redhat.rhevm.vdsm'/>
      <alias name='channel0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <channel type='spicevmc'>
      <target type='virtio' name='com.redhat.spice.0'/>
      <alias name='channel1'/>
      <address type='virtio-serial' controller='0' bus='0' port='2'/>
    </channel>
    <input type='mouse' bus='ps2'/>
    <graphics type='spice' port='5900' tlsPort='5901' autoport='yes' keymap='en-us' passwdValidTo='1970-01-01T00:00:01'>
      <listen type='network' address='10.35.115.11' network='vdsm-rhevm'/>
      <channel name='main' mode='secure'/>
      <channel name='display' mode='secure'/>
      <channel name='inputs' mode='secure'/>
      <channel name='cursor' mode='secure'/>
      <channel name='playback' mode='secure'/>
      <channel name='record' mode='secure'/>
    </graphics>
    <video>
      <model type='qxl' vram='65536' heads='1'/>
      <alias name='video0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </video>
    <memballoon model='virtio'>
      <alias name='balloon0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </memballoon>
  </devices>
  <seclabel type='dynamic' model='selinux' relabel='yes'>
    <label>system_u:system_r:svirt_t:s0:c403,c698</label>
    <imagelabel>system_u:object_r:svirt_image_t:s0:c403,c698</imagelabel>
  </seclabel>
</domain>

[root@nott-vds2 ~]# virsh -r dumpxml 1 | less
[root@nott-vds2 ~]# pgrep qemu 
17755
[root@nott-vds2 ~]# ps -ww `pgrep qemu `
  PID TTY      STAT   TIME COMMAND
17755 ?        Sl     0:38 /usr/libexec/qemu-kvm -S -M rhel6.3.0 -cpu Conroe -enable-kvm -m 512 -smp 1,sockets=1,cores=1,threads=1 -name new_vm -uuid d7810094-bf60-4651-9388-5c3f2a9eae4f -smbios type=1,manufacturer=Red Hat,product=RHEV Hypervisor,version=6Server-6.3.0.3.el6,serial=38373035-3536-4247-3830-333334344130_78:E7:D1:E4:8C:58,uuid=d7810094-bf60-4651-9388-5c3f2a9eae4f -nodefconfig -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/new_vm.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=2012-08-02T10:35:20,driftfix=slew -no-shutdown -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x4 -drive if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw,serial= -device ide-drive,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -drive file=/rhev/data-center/44de127a-01b2-4295-8959-4733109c0f93/ce59f7e8-d3c6-49a5-a4f4-d55586863278/images/bf46d4bc-4588-40ac-a08c-f8f710c85718/e44b57d4-1dc2-4fc4-8ece-c3eedbf9b92e,if=none,id=drive-virtio-disk0,format=qcow2,serial=bf46d4bc-4588-40ac-a08c-f8f710c85718,cache=none,werror=stop,rerror=stop,aio=native -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -netdev tap,fd=26,id=hostnet0,vhost=on,vhostfd=27 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:23:61:5d,bus=pci.0,addr=0x3 -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/new_vm.com.redhat.rhevm.vdsm,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm -chardev spicevmc,id=charchannel1,name=vdagent -device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=com.redhat.spice.0 -chardev pty,id=charconsole0 -device virtconsole,chardev=charconsole0,id=console0 -spice port=5900,tls-port=5901,addr=10.35.115.11,x509-dir=/etc/pki/vdsm/libvirt-spice,tls-channel=main,tls-channel=display,tls-channel=inputs,tls-channel=cursor,tls-channel=playback,tls-channel=record -k en-us -vga qxl -global qxl-vga.vram_size=67108864 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x

Comment 8 Eduardo Warszawski 2012-08-02 09:17:30 UTC
The hot-plug failed with:
libvirtError: internal error unable to execute QEMU command 'device_add': Device 'virtio-blk-pci' could not be initialized

Engine tries to teardownImage and fails:
lvchange -a n fails because the device remains open.

Thread-13746::INFO::2012-07-30 20:04:12,532::logUtils::37::dispatcher::(wrapper) Run and protect: teardownImage(sdUUID='4f38a996-dbeb-4981-885b-742b46a4714f', spUUID='b8423a1f-8889-469a-b2fa-39ab78ac3a57', imgUUID='74512746-e10f-4416-a38
4-baa29d92cde5', volUUID=None)

Thread-13746::ERROR::2012-07-30 20:04:17,921::task::853::TaskManager.Task::(_setError) Task=`8a565abc-6792-42f1-b326-d2f4d2ce03d1`::Unexpected error
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/task.py", line 861, in _run
    return fn(*args, **kargs)
  File "/usr/share/vdsm/logUtils.py", line 38, in wrapper
    res = f(*args, **kwargs)
  File "/usr/share/vdsm/storage/hsm.py", line 2778, in teardownImage
    img.teardown(sdUUID, imgUUID, volUUID)
  File "/usr/share/vdsm/storage/image.py", line 358, in teardown
    volUUIDs=[vol.volUUID for vol in chain])
  File "/usr/share/vdsm/storage/blockSD.py", line 847, in deactivateVolumes
    lvm.deactivateLVs(self.sdUUID, volUUIDs)
  File "/usr/share/vdsm/storage/lvm.py", line 1034, in deactivateLVs
    _setLVAvailability(vgName, toDeactivate, "n")
  File "/usr/share/vdsm/storage/lvm.py", line 719, in _setLVAvailability
    raise error(str(e))
CannotDeactivateLogicalVolume: 
<snip>
 Device or resource busy\n  Unable to deactivate 4f38a996--dbeb--4981--885b--742b46a4714f-5c99c73c--0b11--4213--9ff5--20817d201f1e (253:41)\n'; <rc> = 5

Comment 9 Eduardo Warszawski 2012-08-02 09:27:51 UTC
Created attachment 601910 [details]
failed attach vdsm, libvirt, qemu logs.

Comment 10 Eduardo Warszawski 2012-08-02 09:33:17 UTC
Reproduced: hotplug fails then teardownImage fails too.

From vdsm log:
Thread-1549::DEBUG::2012-08-02 13:36:41,612::libvirtvm::1472::vm.Vm::(hotplugDisk) vmId=`d7810094-bf60-4651-9388-5c3f2a9eae4f`::Hotplug disk xml: <disk device="disk" snapshot="no" type="block">
        <address  domain="0x0000"  function="0x0"  slot="0x05"  type="pci" bus="0x00"/>
        <source dev="/rhev/data-center/44de127a-01b2-4295-8959-4733109c0f93/ce59f7e8-d3c6-49a5-a4f4-d55586863278/images/bf46d4bc-4588-40ac-a08c-f8f710c85718/e44b57d4-1dc2-4fc4-8ece-c3eedbf9b92e"/>
        <target bus="virtio" dev="vdb"/>
        <serial>bf46d4bc-4588-40ac-a08c-f8f710c85718</serial>
        <driver cache="none" error_policy="stop" io="native" name="qemu" type="qcow2"/>
</disk>

Thread-1549::ERROR::2012-08-02 13:36:41,869::libvirtvm::1477::vm.Vm::(hotplugDisk) vmId=`d7810094-bf60-4651-9388-5c3f2a9eae4f`::Hotplug failed
Traceback (most recent call last):
  File "/usr/share/vdsm/libvirtvm.py", line 1475, in hotplugDisk
    self._dom.attachDevice(driveXml)
  File "/usr/share/vdsm/libvirtvm.py", line 491, in f
    ret = attr(*args, **kwargs)
  File "/usr/lib64/python2.6/site-packages/vdsm/libvirtconnection.py", line 82, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib64/python2.6/site-packages/libvirt.py", line 400, in attachDevice
    if ret == -1: raise libvirtError ('virDomainAttachDevice() failed', dom=self)
libvirtError: internal error unable to execute QEMU command 'device_add': Device 'virtio-blk-pci' could not be initialized


Thread-1549::INFO::2012-08-02 13:36:41,874::logUtils::37::dispatcher::(wrapper) Run and protect: teardownImage(sdUUID='ce59f7e8-d3c6-49a5-a4f4-d55586863278', spUUID='44de127a-01b2-4295-8959-4733109c0f93', imgUUID='bf46d4bc-4588-40ac-a08c-f8f710c85718', volUUID=None)

Thread-1549::ERROR::2012-08-02 13:36:47,181::task::853::TaskManager.Task::(_setError) Task=`71a55d03-c328-4276-a611-bd393f325ef7`::Unexpected error
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/task.py", line 861, in _run
    return fn(*args, **kargs)
  File "/usr/share/vdsm/logUtils.py", line 38, in wrapper
    res = f(*args, **kwargs)
  File "/usr/share/vdsm/storage/hsm.py", line 2778, in teardownImage
    img.teardown(sdUUID, imgUUID, volUUID)
  File "/usr/share/vdsm/storage/image.py", line 358, in teardown
    volUUIDs=[vol.volUUID for vol in chain])
  File "/usr/share/vdsm/storage/blockSD.py", line 847, in deactivateVolumes
    lvm.deactivateLVs(self.sdUUID, volUUIDs)
  File "/usr/share/vdsm/storage/lvm.py", line 1034, in deactivateLVs
    _setLVAvailability(vgName, toDeactivate, "n")
  File "/usr/share/vdsm/storage/lvm.py", line 719, in _setLVAvailability
    raise error(str(e))
CannotDeactivateLogicalVolume:
<snip>
device-mapper: remove ioctl on  failed: Device or resource busy\n  Unable to deactivate ce59f7e8--d3c6--49a5--a4f4--d55586863278-e44b57d4--1dc2--4fc4--8ece--c3eedbf9b92e (253:14)\n'; <rc> = 5

From libvirt log:
2012-08-02 10:36:41.612+0000: 17908: debug : virDomainAttachDevice:9185 : dom=0x7fea1402c200, (VM: name=new_vm, uuid=d7810094-bf60-4651-9388-5c3f2a9eae4f), xml=<disk device="disk" snapshot="no" type="block">
        <address  domain="0x0000"  function="0x0"  slot="0x05"  type="pci" bus="0x00"/>
        <source dev="/rhev/data-center/44de127a-01b2-4295-8959-4733109c0f93/ce59f7e8-d3c6-49a5-a4f4-d55586863278/images/bf46d4bc-4588-40ac-a08c-f8f710c85718/e44b57d4-1dc2-4fc4-8ece-c3eedbf9b92e"/>
        <target bus="virtio" dev="vdb"/>
        <serial>bf46d4bc-4588-40ac-a08c-f8f710c85718</serial>
        <driver cache="none" error_policy="stop" io="native" name="qemu" type="qcow2"/>
</disk>

2012-08-02 10:36:41.869+0000: 17908: error : virNetClientProgramDispatchError:174 : internal error unable to execute QEMU command 'device_add': Device 'virtio-blk-pci' could not be initialized

Comment 11 Ayal Baron 2012-08-06 10:23:34 UTC
This only happens on VMs without an OS, so this is not a beta blocker.

Comment 12 Eduardo Warszawski 2012-08-16 08:41:31 UTC
QE verified that the hotplug always fails for VMs without an OS.
Passing to qemu.

Comment 14 Chao Yang 2012-08-22 08:41:36 UTC
Can reproduce this issue with libvirt + qemu-kvm-rhev:
libvirt-0.9.10-21.el6.x86_64
qemu-kvm-rhev-0.12.1.2-2.304.el6.x86_64

Steps:
1. start a VM (without an OS installed) with a disk; dumpxml of the test domain is shown below.
virsh # start test
Domain test started
2. hot unplug this disk by chayang-hotunplug.xml
virsh # detach-device test /root/chayang-hotunplug.xml 
Device detached successfully
3. hot plug this disk by chayang-hotplug.xml
virsh # attach-device test /root/chayang-hotplug.xml
error: Failed to attach device from /root/chayang-hotplug.xml
error: internal error unable to execute QEMU command 'device_add': Device 'virtio-blk-pci' could not be initialized

xml used to start test domain:
-----------------------------
virsh # dumpxml test
...
<disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='none'/>
      <source file='/home/test-1.raw'/>
      <target dev='vda' bus='virtio'/>
      <alias name='virtio-disk0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </disk>
...

xml used to hotunplug and hotplug disk:
--------------------------------------
# cat /root/chayang-hotunplug.xml
<disk device="disk" snapshot="no" type="block">
        <address bus="0x00" domain="0x0000" function="0x0" slot="0x06" type="pci"/>
        <source dev="/home/test-1.raw"/>
        <target bus="virtio" dev="vda"/>
        <serial></serial>
        <driver cache="none" error_policy="stop" io="threads" name="qemu" type="raw"/>
</disk>

# diff chayang-hotunplug.xml chayang-hotplug.xml 
4c4
<         <target bus="virtio" dev="vda"/>
---
>         <target bus="virtio" dev="vdb"/>


Additional info:
-------------
Changing slot="0x06" to slot="0x07" in chayang-hotplug.xml makes this failure disappear.
# diff chayang-hotunplug.xml chayang-hotplug.xml 
2c2
<         <address bus="0x00" domain="0x0000" function="0x0" slot="0x06" type="pci"/>
---
>         <address bus="0x00" domain="0x0000" function="0x0" slot="0x07" type="pci"/>
4c4
<         <target bus="virtio" dev="vda"/>
---
>         <target bus="virtio" dev="vdb"/>

Comment 15 Chao Yang 2012-08-22 08:51:39 UTC
This time, trying to reproduce with qemu-kvm-rhev directly.
qemu-kvm-rhev-0.12.1.2-2.304.el6.x86_64

Steps:
1. start a VM with 2 disks attached (no OS installed)
CLI:
# /usr/libexec/qemu-kvm -M rhel6.3.0 -enable-kvm -m 1024 -smp 1 -name test -monitor stdio -boot menu=on -drive file=removable.raw,if=none,id=drive-virtio-disk0,format=raw,cache=none -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0 -net none -vnc :10 -drive file=removable-1.raw,if=none,id=drive-virtio-disk1,format=raw,cache=none -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x7,drive=drive-virtio-disk1,id=virtio-disk1

2. hot unplug the 2nd disk by:
(qemu) device_del virtio-disk1
(qemu)  __com.redhat_drive_del drive-virtio-disk1

3. hot plug this disk again:
(qemu) __com.redhat_drive_add file=removable-1.raw,id=drive-virtio-disk2,format=raw,cache=none
(qemu) device_add virtio-blk-pci,scsi=off,bus=pci.0,addr=0x7,drive=drive-virtio-disk2,id=virtio-disk2
PCI: devfn 56 not available for virtio-blk-pci, in use by virtio-blk-pci
Device 'virtio-blk-pci' could not be initialized
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
4. retry steps 1-3 with a disk that has an OS installed (RHEL 6.3), using the same CLI

Actual Result:
after step 4, this failure does not happen.
(qemu) __com.redhat_drive_add file=removable-1.raw,id=drive-virtio-disk2,format=raw,cache=none
(qemu) device_add virtio-blk-pci,scsi=off,bus=pci.0,addr=0x7,drive=drive-virtio-disk2,id=virtio-disk2


So this issue:
1. only happens, when no OS is installed, if the hot-plugged disk uses the same addr as the one that was hot-unplugged
2. does not happen, when no OS is installed, if the hot-plugged disk uses a *different* addr
3. does not happen when an OS is installed, whether the addr is the same as the unplugged one's or different

Comment 16 juzhang 2012-08-22 09:21:26 UTC
(In reply to comment #15)
> This time just try to reproduce with qemu-kvm-rhev directly.
> qemu-kvm-rhev-0.12.1.2-2.304.el6.x86_64
> 
> Steps:
> 1. start a VM with 2 disks attached(no os installed)
> CLI:
> # /usr/libexec/qemu-kvm -M rhel6.3.0 -enable-kvm -m 1024 -smp 1 -name test
> -monitor stdio -boot menu=on -drive
> file=removable.raw,if=none,id=drive-virtio-disk0,format=raw,cache=none
> -device
> virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,
> id=virtio-disk0 -net none -vnc :10 -drive
> file=removable-1.raw,if=none,id=drive-virtio-disk1,format=raw,cache=none
> -device
> virtio-blk-pci,scsi=off,bus=pci.0,addr=0x7,drive=drive-virtio-disk1,
> id=virtio-disk1
> 
> 2. hot unplug the 2nd disk by:
> (qemu) device_del virtio-disk1
> (qemu)  __com.redhat_drive_del drive-virtio-disk1
> 
> 3. hot plug again this disk:
> (qemu) __com.redhat_drive_add
> file=removable-1.raw,id=drive-virtio-disk2,format=raw,cache=none
> (qemu) device_add
> virtio-blk-pci,scsi=off,bus=pci.0,addr=0x7,drive=drive-virtio-disk2,
> id=virtio-disk2
> PCI: devfn 56 not available for virtio-blk-pci, in use by virtio-blk-pci
> Device 'virtio-blk-pci' could not be initialized
It seems the key reason is that qemu cannot delete the block device cleanly without an OS installation.

Hi Markus,

Do we support device hotplug when no OS is installed?

Comment 17 Markus Armbruster 2012-08-22 12:10:34 UTC
Unless I'm missing something, this is NOTABUG.

PCI devices cannot be hot unplugged without guest cooperation.  I don't think a guest without an OS can cooperate.

Even when the guest cooperates, unplug is not instantaneous. device_del merely initiates the unplug.  It completes some unpredictable time later.  Your recipe fails unless the unplug initiated in step 2 completes before step 3.  Race condition.

Libvirt should encapsulate this problematic behavior of device_del. Bug 807023.

You can't reuse the same PCI slot until unplug completes.  If unplug doesn't complete, you can't reuse it, period.

You can force qemu-kvm to give up the image even when unplug doesn't work: use __com.redhat_drive_del.  Looks like a bad disk failure to the guest.  But it lets you reuse the same image safely with a different device, or a different guest.
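
To make that ordering concrete, here is a minimal Python sketch against a hypothetical monitor wrapper (device_del, list_devices and the timeout are illustrative, not a real API): device_del only initiates the unplug, so the slot must not be reused until the guest has actually completed it.

import time

def unplug_and_wait(monitor, device_id, timeout=30.0):
    """Initiate an unplug and wait until the guest actually completes it."""
    monitor.device_del(device_id)          # only *initiates* the unplug
    deadline = time.time() + timeout
    while time.time() < deadline:
        # Hypothetical query; in practice you would check 'info pci' or the
        # domain XML to see whether the device is really gone.
        if device_id not in monitor.list_devices():
            return True                    # unplug completed; slot is reusable
        time.sleep(0.5)
    return False                           # guest never cooperated (e.g. no OS)

Reusing the same slot before this returns True is exactly the "PCI: devfn 56 not available" failure seen in comment 15.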

Comment 21 Markus Armbruster 2012-09-07 09:33:36 UTC
Let me try to summarize what this bug is about now.

Context: when unplug of a block device completes, its backend is destroyed automatically.  This is a misfeature, but it's part of the API.

Bug: when plug of a block device fails, its backend isn't destroyed automatically.  This bug claims it should be, for consistency with unplug.

Is this correct?

Comment 22 Ayal Baron 2012-09-07 20:38:08 UTC
(In reply to comment #21)
> Let me try to summarize what this bug is about now.
> 
> Context: when unplug of a block device completes, its backend is destroyed
> automatically.  This is a misfeature, but it's part of the API.
> 
> Bug: when plug of a block device fails, its backend isn't destroyed
> automatically.  This bug claims it should be, for consistency with unplug.
> 
> Is this correct?

Actually I don't care about unplug,
I want to plug a device into a VM.
It fails.
I want to be able to plug it into a different VM (but can't because qemu doesn't release the resource even though it's not using it and never did).

Comment 23 Markus Armbruster 2012-09-10 11:32:39 UTC
TL;DR: This is not a qemu-kvm bug, but it might be a libvirt bug.

Attaching a block device takes two steps: first step creates the backend with QMP command __com.redhat_drive_add, second step creates the frontend with device_add.

Each of the two QMP commands either creates what it's supposed to create and succeeds, or doesn't create anything and fails.

The commands are clearly visible in log_libvirtd.log attached in comment #9.  Relevant part starts at 2012-08-02 10:36:41.613+0000. First step creates backend "drive-virtio-disk1" successfully.  Second step tries to create frontend "virtio-disk1", but fails.  The failure does *not* destroy the backend.  This is expected qemu-kvm behavior.

Perhaps libvirt should clean up the backend created in the first step if the second step fails.  If you think it should, please change the bug's component to libvirt.  Else, let's close it NOTABUG.

Possible workaround: clean up the backend manually with __com.redhat_drive_del.
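
A minimal sketch of that cleanup, in libvirt's internal terms (not the actual libvirt code; the function name is hypothetical and it assumes the internal qemu monitor API, so it is not standalone):

    /* Sketch only: two-step disk hotplug with rollback of the backend when
     * the frontend cannot be created. */
    static int
    attachDiskWithRollback(qemuMonitorPtr mon,
                           const char *drivestr,  /* backend (drive) definition */
                           const char *devstr)    /* frontend (device) definition */
    {
        if (qemuMonitorAddDrive(mon, drivestr) < 0)
            return -1;                        /* nothing to roll back */

        if (qemuMonitorAddDevice(mon, devstr) < 0) {
            /* device_add failed: delete the backend created above so the
             * image is not left in use by a device that never appeared */
            if (qemuMonitorDriveDel(mon, drivestr) < 0)
                VIR_WARN("Unable to remove drive %s after failed device_add",
                         drivestr);
            return -1;
        }

        return 0;
    }

The upstream fix (quoted later in comment 42) follows the same pattern, additionally saving and restoring the original error so that the device_add failure, rather than any drive_del failure, is what gets reported to the caller.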

Comment 24 Dave Allan 2012-09-18 01:41:04 UTC
Jiri, can you weigh in here?

Comment 25 Jiri Denemark 2012-09-19 09:47:22 UTC
Yeah, I think libvirt should really undo the first part when the second part fails for any reason.

Comment 26 Dave Allan 2012-09-19 14:41:40 UTC
Ok, I've changed component to libvirt.

Comment 28 Jiri Denemark 2012-09-20 20:37:19 UTC
Patch sent upstream for review:
https://www.redhat.com/archives/libvir-list/2012-September/msg01498.html

Comment 29 Jiri Denemark 2012-09-21 10:39:09 UTC
Fixed upstream by v0.10.2-rc2-5-g8125113:

commit 8125113cdb61bb4352af8e80e66573282be9cf83
Author: Jiri Denemark <jdenemar>
Date:   Thu Sep 20 22:28:35 2012 +0200

    qemu: Fix failure path in disk hotplug
    
    Disk hotplug is a two phase action: qemuMonitorAddDrive followed by
    qemuMonitorAddDevice. When the first part succeeds but the second one
    fails, we need to rollback the drive addition.

Comment 31 weizhang 2012-09-25 03:40:22 UTC
Test on 
kernel-2.6.32-309.el6.x86_64
qemu-kvm-0.12.1.2-2.314.el6.x86_64
libvirt-0.10.2-1.el6.x86_64

# virsh attach-disk demo1 /var/lib/libvirt/images/disk.img vdb
Disk attached successfully

# virsh detach-disk demo1 vdb
Disk detached successfully

# virsh attach-disk demo1 /var/lib/libvirt/images/disk.img vdb
error: Failed to attach disk
error: internal error unable to execute QEMU command '__com.redhat_drive_add': Duplicate ID 'drive-virtio-disk1' for drive

It still reports an error.

Comment 32 Jiri Denemark 2012-09-25 07:19:38 UTC
You were actually testing bug 807023, which is related but still present.

This bugzilla deals with a bug in the failure path: the final hotplug phase fails, but libvirt does not correctly undo the part of the hotplug process that succeeded. You can see the hotplug failure in comment 10: unable to execute QEMU command 'device_add': Device 'virtio-blk-pci' could not be initialized. However, I'm not sure how to easily achieve that; perhaps the people who filed this bugzilla could help you with the steps to verify it.

Comment 33 weizhang 2012-09-25 08:27:01 UTC
Retest on
kernel-2.6.32-309.el6.x86_64
qemu-kvm-0.12.1.2-2.314.el6.x86_64
libvirt-0.10.2-1.el6.x86_64

with steps
1. Start a guest with an additional disk and without an OS installed in it
..
   <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='none'/>
      <source file='/var/lib/libvirt/images/disk.img'/>
      <target dev='vda' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </disk>
..
2. start guest
3. Do detach-device for that disk
# cat disk.xml
    <disk type='file' device='disk'>
       <driver name='qemu' type='raw' cache='none'/>
       <source file='/var/lib/libvirt/images/disk.img'/>
       <target dev='vda' bus='virtio'/>
     </disk>
# virsh detach-device demo1 disk2.xml 
Device detached successfully

4. Attach-device with xml
    <disk type='file' device='disk'>
       <driver name='qemu' type='raw' cache='none'/>
       <source file='/var/lib/libvirt/images/disk.img'/>
       <target dev='vdb' bus='virtio'/>
     </disk>

**note** the difference is dev='vdb' instead of dev='vda'

# virsh attach-device demo1 disk2.xml 
error: Failed to attach device from disk2.xml
error: internal error unable to execute QEMU command 'device_add': Device 'virtio-blk-pci' could not be initialized

do it again
# virsh attach-device demo1 disk2.xml 
error: Failed to attach device from disk2.xml
error: internal error unable to execute QEMU command '__com.redhat_drive_add': Duplicate ID 'drive-virtio-disk1' for drive

Hi Jiri, is that expected? I don't see any difference from the old version.

Comment 34 Jiri Denemark 2012-09-25 09:23:01 UTC
Could you set up libvirtd to print debug logs from qemu, redo the test, and attach the generated debug logs?

Comment 35 weizhang 2012-09-26 02:43:22 UTC
Created attachment 617362 [details]
libvirtd.log with filter qemu of debug log

Comment 37 Jiri Denemark 2012-10-08 07:39:00 UTC
Oh, I guess I previously misread the steps from comment #33. Bug 807023 is
still playing a key role there. The problem is that once the disk is
successfully attached (no matter if it's been attached before starting the
domain or hotplugged when it's running), unplugging the disk is a risky
business. Qemu monitor command to detach a disk is asynchronous (similarly to
shutdown request) and reports success after sending a detach request to a
guest OS. If there's no OS or the OS just ignores this detach request, the
disk will not be detached although libvirt will think the operation succeeded.
Thus attaching a new disk will fail. The target dev (vdb vs. vda) doesn't
really matter since it's just an ordering hint. And because the disk will
always be the only one presented to the domain, libvirt will use the same
drive-virtio-disk1 alias for it.

In other words, what you need to do is to hotplug a new disk to a domain and
somehow make the second part of this hotplug operation (device_add) fail.
However, I have no idea how to achieve that (except for hacking qemu).

Comment 39 Ayal Baron 2012-10-08 22:30:45 UTC
(In reply to comment #37)
> Oh, I guess I previously misread the steps from comment #33. Bug 807023 is
> still playing a key role there. The problem is that once the disk is
> successfully attached (no matter if it's been attached before starting the
> domain or hotplugged when it's running), unplugging the disk is a risky
> business. Qemu monitor command to detach a disk is asynchronous (similarly to
> shutdown request) and reports success after sending a detach request to a
> guest OS. If there's no OS or the OS just ignores this detach request, the
> disk will not be detached although libvirt will think the operation
> succeeded.
> Thus attaching a new disk will fail. The target dev (vdb vs. vda) doesn't
> really matter since it's just an ordering hint. And because the disk will
> always be the only one presented to the domain, libvirt will use the same
> drive-virtio-disk1 alias for it.
> 
> In other words, what you need to do is to hotplug a new disk to a domain and
> somehow make the second part of this hotplug operation (device_add) fail.
> However, I have no idea how to achieve that (except for hacking qemu).

Hi Jiri,
What about running a VM without virtio drivers installed and then trying to attach a virtio disk? Would that fail in the 'right' place?

Comment 40 Jiri Denemark 2012-10-09 07:48:43 UTC
Ayal, unfortunately attaching a virtio disk succeeds even without virtio
drivers. It looks like support for PCI hotplug is enough (and necessary) for
virtio disk hotplug support. The disk can even be unplugged and plugged in
again with no issues.

Anyway, one would think that since this bug was requested to be fixed in
z-stream, it's a serious one, which means it should also be easy to hit it...

I must admit the bug was so clear and fix so straightforward that I didn't
really try to reproduce it.

Comment 41 Ayal Baron 2012-10-09 22:42:15 UTC
(In reply to comment #40)
> Ayal, unfortunately attaching a virtio disk succeeds even without virtio
> drivers. It looks like support for PCI hotplug is enough (and necessary) for
> virtio disk hotplug support. The disk can even be unplugged and plugged in
> again with no issues.
> 
> Anyway, one would think that since this bug was requested to be fixed in
> z-stream, it's a serious one, which means it should also be easy to hit it...

I agree, I was just trying to help while Haim wasn't available...

> 
> I must admit the bug was so clear and fix so straightforward that I didn't
> really try to reproduce it.

Haim?

Comment 42 weizhang 2012-10-10 09:07:19 UTC
After talking with Jiri, we found that it is better to verify this bug through code inspection. Download libvirt-0.10.2-1.el6.src.rpm, install it, and the following code can be found:

    if (qemuCapsGet(priv->caps, QEMU_CAPS_DEVICE)) {
        ret = qemuMonitorAddDrive(priv->mon, drivestr);
        if (ret == 0) {
            ret = qemuMonitorAddDevice(priv->mon, devstr);
            if (ret < 0) {
                virErrorPtr orig_err = virSaveLastError();
                if (qemuMonitorDriveDel(priv->mon, drivestr) < 0) {
                    VIR_WARN("Unable to remove drive %s (%s) after failed "
                             "qemuMonitorAddDevice",
                             drivestr, devstr);
                }
                if (orig_err) {
                    virSetError(orig_err);
                    virFreeError(orig_err);
                }
            }
        }
    } else {

in src/qemu/qemu_hotplug.c at line 250.
So verification passes.

Comment 44 errata-xmlrpc 2013-02-21 07:20:29 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHSA-2013-0276.html

