Description of problem:

Running engine 3.3 (sha 3b106b7c0102f3125e5099b99bec1dddc2f6cf27) with vdsm-4.11.0-121.git082925a.el6.x86_64, sending a REST query to remove a storage domain fails. vdsm.log shows:

Thread-343::DEBUG::2013-07-11 11:00:12,475::lvm::310::Storage.Misc.excCmd::(cmd) FAILED: <err> = '  device-mapper: remove ioctl on  failed: Device or resource busy
  [the line above is repeated many times]
  Unable to deactivate c7bf8bca--3a0c--4cd7--9f25--f0b35f4535fc-ids (253:49)
  Unable to deactivate logical volume "ids"'; <rc> = 5
Thread-343::DEBUG::2013-07-11 11:00:12,489::lvm::476::OperationMutex::(_invalidatepvs) Operation 'lvm invalidate operation' got the operation mutex
Thread-343::DEBUG::2013-07-11 11:00:12,489::lvm::479::OperationMutex::(_invalidatepvs) Operation 'lvm invalidate operation' released the operation mutex
Thread-343::DEBUG::2013-07-11 11:00:12,490::lvm::488::OperationMutex::(_invalidatevgs) Operation 'lvm invalidate operation' got the operation mutex
Thread-343::DEBUG::2013-07-11 11:00:12,490::lvm::490::OperationMutex::(_invalidatevgs) Operation 'lvm invalidate operation' released the operation mutex
Thread-343::ERROR::2013-07-11 11:00:12,490::task::850::TaskManager.Task::(_setError) Task=`62354b77-0af1-479c-833c-b74893c8a318`::Unexpected error
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/task.py", line 857, in _run
    return fn(*args, **kargs)
  File "/usr/share/vdsm/logUtils.py", line 45, in wrapper
    res = f(*args, **kwargs)
  File "/usr/share/vdsm/storage/hsm.py", line 2621, in formatStorageDomain
    self._recycle(sd)
  File "/usr/share/vdsm/storage/hsm.py", line 2567, in _recycle
    dom.format(dom.sdUUID)
  File "/usr/share/vdsm/storage/blockSD.py", line 888, in format
    lvm.removeVG(sdUUID)
  File "/usr/share/vdsm/storage/lvm.py", line 897, in removeVG
    raise se.VolumeGroupRemoveError("VG %s remove failed." % vgName)
VolumeGroupRemoveError: Volume Group remove error: ('VG c7bf8bca-3a0c-4cd7-9f25-f0b35f4535fc remove failed.',)

Although the log says the device is busy, I suspect this is a bug: the same command flow succeeds on vdsm 3.2 using the same engine rpms.
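For diagnosing this, the usual first step is to check which device-mapper devices still have a nonzero open count, e.g. via `dmsetup info -c --noheadings --separator : -o name,open`. A small parsing helper along those lines (hypothetical, not part of vdsm; the function name and the testability hook are my own) might look like:

```python
import subprocess


def open_dm_devices(dmsetup_output=None):
    """Return {device_name: open_count} for dm devices that are held open.

    If dmsetup_output is None, run
    `dmsetup info -c --noheadings --separator : -o name,open`
    (requires root); otherwise parse the given string, which lets the
    helper be exercised offline against captured output.
    """
    if dmsetup_output is None:
        dmsetup_output = subprocess.check_output(
            ["dmsetup", "info", "-c", "--noheadings",
             "--separator", ":", "-o", "name,open"],
            universal_newlines=True)
    held = {}
    for line in dmsetup_output.splitlines():
        line = line.strip()
        if not line:
            continue
        # Each line is "name:open_count"; dm device names themselves
        # contain no colon, so rsplit on the last one is safe.
        name, count = line.rsplit(":", 1)
        if int(count) > 0:
            held[name] = int(count)
    return held
```

On the failing host, the `...-ids` device from the log above would be expected to show up with a nonzero open count while the remaining domain LVs do not.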
The VG removal is failing because the "ids" LV is still open. Is sanlock not deactivated on this domain?
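If the "ids" LV is held open by a live sanlock lockspace, retrying `lvchange -an` alone can never succeed; the lockspace has to be torn down first (`sanlock client status` shows whether one is still active). A sketch of a retry wrapper that separates a transient busy state from a persistently held device (a hypothetical helper, not vdsm's actual code; the command runner is injected so the logic is testable):

```python
import time


def deactivate_with_retry(run_lvchange, retries=3, delay=1.0):
    """Attempt LV deactivation a few times via the injected callable.

    run_lvchange() must return the exit code of `lvchange -an <lv>`.
    Returns the attempt number that succeeded; raises RuntimeError if
    the device stays busy, which suggests a real holder (such as an
    undismounted sanlock lockspace) rather than a transient race.
    """
    for attempt in range(1, retries + 1):
        if run_lvchange() == 0:
            return attempt
        if attempt < retries:
            time.sleep(delay)
    raise RuntimeError(
        "LV still busy after %d attempts; check `sanlock client status` "
        "for a lockspace that was never removed" % retries)
```

A real invocation would pass something like `lambda: subprocess.call(["lvchange", "-an", lv_path])`.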