Bug 1215427 - Creating Block Domain using a "dirty" LUN that contains a partition table fails upon PV creation, "pvcreate failed with rc=5"
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: oVirt
Classification: Retired
Component: vdsm
Version: 3.5
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: 3.5.5
Assignee: Fred Rolland
QA Contact: Aharon Canan
URL:
Whiteboard: storage
Duplicates: 1524308 (view as bug list)
Depends On:
Blocks: 1185865 1218657
 
Reported: 2015-04-26 13:43 UTC by Ori Gofen
Modified: 2019-04-28 13:04 UTC (History)
15 users

Fixed In Version:
Clone Of:
Clones: 1218657 (view as bug list)
Environment:
Last Closed: 2015-08-26 14:44:01 UTC
oVirt Team: Storage
Embargoed:


Attachments (Terms of Use)
logs (1.31 MB, application/x-gzip)
2015-04-26 13:43 UTC, Ori Gofen
no flags Details


Links
System ID Private Priority Status Summary Last Updated
Red Hat Bugzilla 1542377 0 unspecified CLOSED Didn't create VG on the dirty iSCSI (or FC) storage while deploying HE based otopi 2021-02-22 00:41:40 UTC
Red Hat Knowledge Base (Solution) 3371241 0 None None None 2018-03-06 01:04:31 UTC

Internal Links: 1542377

Description Ori Gofen 2015-04-26 13:43:25 UTC
Created attachment 1019024 [details]
logs

Description of problem:

Creating or extending a block domain using "dirty" LUNs as targets fails.
This behavior is similar to bz #1185865, but happens regardless of which protocol is used (reproduced with both XML-RPC and JSON-RPC).

The function _initpvs fails with "PhysDevInitializationError".

engine log:
2015-04-26 16:03:44,871 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVGVDSCommand] (ajp--127.0.0.1-8702-5) [6a090930] Failed in 'CreateVGVDS' method
2015-04-26 16:03:44,872 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVGVDSCommand] (ajp--127.0.0.1-8702-5) [6a090930] Command 'org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVGVDSCommand' return value 'OneUuidReturnForXmlRpc [mStatus=StatusForXmlRpc [mCode=601, mMessage=Failed to initialize physical device: ("['/dev/mapper/360060160f4a03000fa65675991dbe311', '/dev/mapper/360060160f4a03000fe65675991dbe311', '/dev/mapper/360060160f4a030007beed85291dbe311', '/dev/mapper/360060160f4a03000fc65675991dbe311', '/dev/mapper/360060160f4a03000fb65675991dbe311', '/dev/mapper/360060160f4a030007ceed85291dbe311']",)]]'
2015-04-26 16:03:44,878 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVGVDSCommand] (ajp--127.0.0.1-8702-5) [6a090930] HostName = fury66.tlv.redhat.com
2015-04-26 16:03:44,880 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVGVDSCommand] (ajp--127.0.0.1-8702-5) [6a090930] Command 'CreateVGVDSCommand(HostName = fury66.tlv.redhat.com, HostId = ebbc3458-d9cd-45a8-bb8d-3e2ebe0b4d6f, storageDomainId=9c5b47ed-3fd4-4022-987e-3d33985b751b, deviceList=[360060160f4a03000fa65675991dbe311, 360060160f4a03000fe65675991dbe311, 360060160f4a030007beed85291dbe311, 360060160f4a03000fc65675991dbe311, 360060160f4a03000fb65675991dbe311, 360060160f4a030007ceed85291dbe311], force=true)' execution failed: VDSGenericException: VDSErrorException: Failed to CreateVGVDS, error = Failed to initialize physical device: ("['/dev/mapper/360060160f4a03000fa65675991dbe311', '/dev/mapper/360060160f4a03000fe65675991dbe311', '/dev/mapper/360060160f4a030007beed85291dbe311', '/dev/mapper/360060160f4a03000fc65675991dbe311', '/dev/mapper/360060160f4a03000fb65675991dbe311', '/dev/mapper/360060160f4a030007ceed85291dbe311']",), code = 601
2015-04-26 16:03:44,888 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVGVDSCommand] (ajp--127.0.0.1-8702-5) [6a090930] FINISH, CreateVGVDSCommand, log id: 557da39b

vdsm:
Thread-1798::DEBUG::2015-04-26 16:07:39,125::lvm::301::Storage.Misc.excCmd::(cmd) FAILED: <err> = '  WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!\n  Device /dev/mapper/360060160f4a03000fe65675991dbe311 not found (or ignored by filtering).\n'; <rc> = 5
Thread-1798::DEBUG::2015-04-26 16:07:39,125::lvm::492::Storage.OperationMutex::(_invalidatepvs) Operation 'lvm invalidate operation' got the operation mutex
Thread-1798::DEBUG::2015-04-26 16:07:39,126::lvm::495::Storage.OperationMutex::(_invalidatepvs) Operation 'lvm invalidate operation' released the operation mutex
Thread-1798::ERROR::2015-04-26 16:07:39,126::lvm::737::Storage.LVM::(_initpvs) pvcreate failed with rc=5
Thread-1798::ERROR::2015-04-26 16:07:39,126::lvm::738::Storage.LVM::(_initpvs) ['  Physical volume "/dev/mapper/360060160f4a03000fa65675991dbe311" successfully created', '  Physical volume "/dev/mapper/360060160f4a030007beed85291dbe311" successfully created', '  Physical volume "/dev/mapper/360060160f4a03000fc65675991dbe311" successfully created', '  Physical volume "/dev/mapper/360060160f4a03000fb65675991dbe311" successfully created'], ['  WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!', '  Device /dev/mapper/360060160f4a03000fe65675991dbe311 not found (or ignored by filtering).']
Thread-1798::ERROR::2015-04-26 16:07:39,126::task::863::Storage.TaskManager.Task::(_setError) Task=`767b5c1d-7c56-45cb-bf99-4b69c54c237a`::Unexpected error
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/task.py", line 870, in _run
    return fn(*args, **kargs)
  File "/usr/share/vdsm/logUtils.py", line 49, in wrapper
    res = f(*args, **kwargs)
  File "/usr/share/vdsm/storage/hsm.py", line 2142, in createVG
    (force.capitalize() == "True")))
  File "/usr/share/vdsm/storage/lvm.py", line 920, in createVG
    _initpvs(pvs, metadataSize, force)
  File "/usr/share/vdsm/storage/lvm.py", line 739, in _initpvs
    raise se.PhysDevInitializationError(str(devices))
PhysDevInitializationError: Failed to initialize physical device: ("['/dev/mapper/360060160f4a03000fa65675991dbe311',

Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:

Comment 1 Ori Gofen 2015-04-27 08:44:11 UTC
Version-Release number of selected component (if applicable):
3.6.0 master

How reproducible:
100%

Steps to Reproduce:
1. Create a block domain using "dirty" LUNs

Actual results:
operation fails

Expected results:
operation should be successful

Additional info:
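The "dirty" precondition in step 1 can be simulated without a real LUN: a LUN is "dirty" in this sense when its first sector carries a partition-table signature. A minimal sketch using a scratch file in place of a /dev/mapper multipath device (the path and size here are illustrative only, not from this bug):

```shell
# Scratch file standing in for the LUN; the real target would be a
# /dev/mapper/... multipath device (illustrative path).
truncate -s 4M /tmp/fake_lun.img

# Plant the MBR boot signature 0x55 0xAA at byte offset 510 -- the
# "partition table signature" that later makes pvcreate skip the device.
# (Octal escapes \125 \252 == 0x55 0xAA, for printf portability.)
printf '\125\252' | dd of=/tmp/fake_lun.img bs=1 seek=510 conv=notrunc status=none

# wipefs now reports a dos partition-table signature on the file.
wipefs /tmp/fake_lun.img
```

A LUN prepared like this (or one that previously held partitions) should trigger the "pvcreate failed with rc=5" error from the description when used for block domain creation.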

Comment 2 Fred Rolland 2015-08-20 10:38:12 UTC
I could not reproduce.
Ori, can you provide the exact steps needed?

Comment 3 Ori Gofen 2015-08-20 13:37:26 UTC
Yes, the steps are: create a block domain from dirty LUNs.
I will provide you with my environment for reproduction.

Comment 4 Fred Rolland 2015-08-26 11:44:40 UTC
pvcreate will fail if a partition table exists on the LUN, even with the force flag.

If you run pvcreate with -vvv for verbose output and a partition table is found, the following warning is logged: 'Skipping: Partition table signature found'

pvcreate -ffvvv /dev/mapper/3600a09803753795a64244531644f7846
........
 /dev/mapper/3600a09803753795a64244531644f7846: Skipping: Partition table signature found [none:(nil)]
........
Device /dev/mapper/3600a09803753795a64244531644f7846 not found (or ignored by filtering).

To be able to create the PV, the partition table needs to be deleted.
This can be done by zeroing the first blocks:
  dd if=/dev/zero of=/dev/mapper/3600a09803753795a64244531644f7846 bs=1M count=1


I don't think that this operation should be done by the application, as it can be destructive to user data.

I suggest to document this situation with explanation on how to fix manually.
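The manual fix above can be sketched end to end on a scratch file standing in for the LUN (illustrative path; on a real /dev/mapper device, double-check the target first, since zeroing destroys whatever the LUN holds):

```shell
# Scratch file standing in for the dirty LUN (illustrative path).
truncate -s 4M /tmp/dirty_lun.img
# Give it the MBR signature (octal \125\252 == 0x55 0xAA) so it is "dirty".
printf '\125\252' | dd of=/tmp/dirty_lun.img bs=1 seek=510 conv=notrunc status=none

# Before: wipefs shows the dos partition-table signature pvcreate trips over.
wipefs /tmp/dirty_lun.img

# The fix: zero the first blocks, exactly as suggested above.
dd if=/dev/zero of=/tmp/dirty_lun.img bs=1M count=1 conv=notrunc status=none

# After: no signatures left; pvcreate would no longer skip the device.
wipefs /tmp/dirty_lun.img
```

Note that `wipefs -a <device>` (util-linux) is a more targeted alternative that erases just the signatures, though the dd approach above is what this comment recommends.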

Comment 5 Tal Nisan 2015-08-26 13:46:40 UTC
Allon, Yaniv, your thoughts?

Comment 6 Allon Mureinik 2015-08-26 14:44:01 UTC
(In reply to Fred Rolland from comment #4)
> I don't think that this operation should be done by the application, as it
> can be destructive to user data.
> 
> I suggest to document this situation with explanation on how to fix manually.
Agreed.
If LVM isn't solving this problem, neither should we.

Andrew - what is the process for adding a limitation note to the product?

Comment 7 Andrew Dahms 2015-08-27 07:18:07 UTC
Hi Allon,

Thank you for the needinfo request.

Based on the explanation, this looks like it is not a known issue that will be fixed soon, and more a note that users must be aware of when they use LVM.

Now, the best thing to do would be to add a note to the chapter on storage to tell users that if they are using LVM, they would need to perform the step in comment #4.

What do you think?

Kind regards,

Andrew

Comment 8 Allon Mureinik 2015-08-27 10:18:02 UTC
(In reply to Andrew Dahms from comment #7)
> Hi Allon,
> 
> Thank you for the needinfo request.
> 
> Based on the explanation, this looks like it is not a known issue that will
> be fixed soon, and more a note that users must be aware of when they use LVM.
I agree with this assessment - This is an LVM limitation that will probably never be fixed. 

> Now, the best thing to do would be to add a note to the chapter on storage
> to tell users that if they are using LVM, they would need to perform the
> step in comment #4.
(In the case where the LUN has an old partition table on it, which should be an edge case of an edge case.)

> What do you think?
Agreed.
Do we need a RHEV-docs bug to track this, or can we use this oVirt bug?

Comment 9 Andrew Dahms 2016-06-06 11:59:05 UTC
Hi Allon,

Thank you for the needinfo request, and my apologies for the delay in getting back to you.

I have created BZ#1343043 to cover this issue.

Kind regards,

Andrew

Comment 10 Allon Mureinik 2017-12-14 10:35:19 UTC
*** Bug 1524308 has been marked as a duplicate of this bug. ***

