Created attachment 1019024 [details]
logs

Description of problem:
Creating or extending a block domain fails when "dirty" LUNs are used as targets. This behavior is similar to bz #1185865, but happens regardless of which protocol is used (reproduced with both XML and JSON). The function _initpvs fails with "PhysDevInitializationError".

engine log:

2015-04-26 16:03:44,871 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVGVDSCommand] (ajp--127.0.0.1-8702-5) [6a090930] Failed in 'CreateVGVDS' method
2015-04-26 16:03:44,872 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVGVDSCommand] (ajp--127.0.0.1-8702-5) [6a090930] Command 'org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVGVDSCommand' return value 'OneUuidReturnForXmlRpc [mStatus=StatusForXmlRpc [mCode=601, mMessage=Failed to initialize physical device: ("['/dev/mapper/360060160f4a03000fa65675991dbe311', '/dev/mapper/360060160f4a03000fe65675991dbe311', '/dev/mapper/360060160f4a030007beed85291dbe311', '/dev/mapper/360060160f4a03000fc65675991dbe311', '/dev/mapper/360060160f4a03000fb65675991dbe311', '/dev/mapper/360060160f4a030007ceed85291dbe311']",)]]'
2015-04-26 16:03:44,878 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVGVDSCommand] (ajp--127.0.0.1-8702-5) [6a090930] HostName = fury66.tlv.redhat.com
2015-04-26 16:03:44,880 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVGVDSCommand] (ajp--127.0.0.1-8702-5) [6a090930] Command 'CreateVGVDSCommand(HostName = fury66.tlv.redhat.com, HostId = ebbc3458-d9cd-45a8-bb8d-3e2ebe0b4d6f, storageDomainId=9c5b47ed-3fd4-4022-987e-3d33985b751b, deviceList=[360060160f4a03000fa65675991dbe311, 360060160f4a03000fe65675991dbe311, 360060160f4a030007beed85291dbe311, 360060160f4a03000fc65675991dbe311, 360060160f4a03000fb65675991dbe311, 360060160f4a030007ceed85291dbe311], force=true)' execution failed: VDSGenericException: VDSErrorException: Failed to CreateVGVDS, error = Failed to initialize physical device: ("['/dev/mapper/360060160f4a03000fa65675991dbe311', '/dev/mapper/360060160f4a03000fe65675991dbe311', '/dev/mapper/360060160f4a030007beed85291dbe311', '/dev/mapper/360060160f4a03000fc65675991dbe311', '/dev/mapper/360060160f4a03000fb65675991dbe311', '/dev/mapper/360060160f4a030007ceed85291dbe311']",), code = 601
2015-04-26 16:03:44,888 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVGVDSCommand] (ajp--127.0.0.1-8702-5) [6a090930] FINISH, CreateVGVDSCommand, log id: 557da39b

vdsm:

Thread-1798::DEBUG::2015-04-26 16:07:39,125::lvm::301::Storage.Misc.excCmd::(cmd) FAILED: <err> = ' WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!\n Device /dev/mapper/360060160f4a03000fe65675991dbe311 not found (or ignored by filtering).\n'; <rc> = 5
Thread-1798::DEBUG::2015-04-26 16:07:39,125::lvm::492::Storage.OperationMutex::(_invalidatepvs) Operation 'lvm invalidate operation' got the operation mutex
Thread-1798::DEBUG::2015-04-26 16:07:39,126::lvm::495::Storage.OperationMutex::(_invalidatepvs) Operation 'lvm invalidate operation' released the operation mutex
Thread-1798::ERROR::2015-04-26 16:07:39,126::lvm::737::Storage.LVM::(_initpvs) pvcreate failed with rc=5
Thread-1798::ERROR::2015-04-26 16:07:39,126::lvm::738::Storage.LVM::(_initpvs) [' Physical volume "/dev/mapper/360060160f4a03000fa65675991dbe311" successfully created', ' Physical volume "/dev/mapper/360060160f4a030007beed85291dbe311" successfully created', ' Physical volume "/dev/mapper/360060160f4a03000fc65675991dbe311" successfully created', ' Physical volume "/dev/mapper/360060160f4a03000fb65675991dbe311" successfully created'], [' WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!', ' Device /dev/mapper/360060160f4a03000fe65675991dbe311 not found (or ignored by filtering).']
Thread-1798::ERROR::2015-04-26 16:07:39,126::task::863::Storage.TaskManager.Task::(_setError) Task=`767b5c1d-7c56-45cb-bf99-4b69c54c237a`::Unexpected error
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/task.py", line 870, in _run
    return fn(*args, **kargs)
  File "/usr/share/vdsm/logUtils.py", line 49, in wrapper
    res = f(*args, **kwargs)
  File "/usr/share/vdsm/storage/hsm.py", line 2142, in createVG
    (force.capitalize() == "True")))
  File "/usr/share/vdsm/storage/lvm.py", line 920, in createVG
    _initpvs(pvs, metadataSize, force)
  File "/usr/share/vdsm/storage/lvm.py", line 739, in _initpvs
    raise se.PhysDevInitializationError(str(devices))
PhysDevInitializationError: Failed to initialize physical device: ("['/dev/mapper/360060160f4a03000fa65675991dbe311',
Version-Release number of selected component (if applicable):
3.6.0 master

How reproducible:
100%

Steps to Reproduce:
1. Create a block domain using "dirty" LUNs

Actual results:
Operation fails

Expected results:
Operation should be successful

Additional info:
I could not reproduce. Ori, can you provide the exact steps needed?
Yes. The steps are: create a block domain from dirty LUNs. I shall provide you with my environment for reproduction.
pvcreate will fail if a partition exists on the LUN, even with the force flag.

If you run pvcreate with -vvv for verbose output and a partition is found, the following warning will be logged: 'Skipping: Partition table signature found'

pvcreate -ffvvv /dev/mapper/3600a09803753795a64244531644f7846
........
/dev/mapper/3600a09803753795a64244531644f7846: Skipping: Partition table signature found [none:(nil)]
........
Device /dev/mapper/3600a09803753795a64244531644f7846 not found (or ignored by filtering).

In order to be able to create the PV, the partition table needs to be deleted. This can be done by zeroing the first blocks:

dd if=/dev/zero of=/dev/mapper/3600a09803753795a64244531644f7846 bs=1M count=1

I don't think that this operation should be done by the application, as it can be destructive to user data.

I suggest documenting this situation with an explanation of how to fix it manually.
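The check-and-wipe procedure above can be sketched in a few lines of Python. This is only an illustrative sketch, not VDSM code: the helper names (has_partition_signature, zero_first_blocks) are hypothetical, and it only detects the classic MBR boot signature (bytes 0x55 0xAA at offsets 510-511), not GPT or other metadata. Zeroing a device destroys any data on it, which is exactly why the comment argues the application should not do this automatically.

```python
import os

MBR_SIGNATURE = b"\x55\xaa"  # MBR partition-table signature at byte offsets 510-511


def has_partition_signature(path):
    """Return True if the device/file carries an MBR partition-table signature."""
    with open(path, "rb") as f:
        f.seek(510)
        return f.read(2) == MBR_SIGNATURE


def zero_first_blocks(path, length=1024 * 1024):
    """Zero the first `length` bytes, like `dd if=/dev/zero of=... bs=1M count=1`.

    DESTRUCTIVE: this erases the partition table and anything else stored
    in the first blocks of the device.
    """
    with open(path, "r+b") as f:
        f.write(b"\x00" * length)
```

Running has_partition_signature before and after zero_first_blocks mirrors what an administrator would verify manually before re-running pvcreate.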
Allon, Yaniv, your thoughts?
(In reply to Fred Rolland from comment #4)
> I don't think that this operation should be done by the application, as it
> can be destructive to user data.
>
> I suggest to document this situation with explanation on how to fix manually.

Agreed. If LVM isn't solving this problem, neither should we.

Andrew - what's the process for adding a limitation note to the product?
Hi Allon,

Thank you for the needinfo request.

Based on the explanation, this looks less like a known issue that will be fixed soon and more like a note that users must be aware of when they use LVM.

Now, the best thing to do would be to add a note to the chapter on storage telling users that, if they are using LVM, they need to perform the step in comment #4.

What do you think?

Kind regards,

Andrew
(In reply to Andrew Dahms from comment #7)
> Hi Allon,
>
> Thank you for the needinfo request.
>
> Based on the explanation, this looks like it is not a known issue that will
> be fixed soon, and more a note that users must be aware of when they use LVM.

I agree with this assessment - this is an LVM limitation that will probably never be fixed.

> Now, the best thing to do would be to add a note to the chapter on storage
> to tell users that if they are using LVM, they would need to perform the
> step in comment #4.

(In the case the LUN has an old partition table on it, which should be an edge case of an edge case.)

> What do you think?

Agreed. Do we need a RHEV-docs bug to track this, or can we use this oVirt bug?
Hi Allon,

Thank you for the needinfo request, and my apologies for the delay in getting back to you.

I have created BZ#1343043 to cover this issue.

Kind regards,

Andrew
*** Bug 1524308 has been marked as a duplicate of this bug. ***