Bug 1421941

Summary: pvcreate command in clone operation fails when using LVM2
Product: Red Hat Enterprise Linux 7
Component: lvm2
Sub component: LVM Metadata / lvmetad
Version: 7.3
Hardware: x86_64
OS: Linux
Status: CLOSED WORKSFORME
Severity: unspecified
Priority: unspecified
Target Milestone: rc
Reporter: Abdul Khumani <abdul.khumani>
Assignee: LVM and device-mapper development team <lvm-team>
QA Contact: cluster-qe <cluster-qe>
CC: abdul.khumani, agk, heinzm, jbrassow, msnitzer, prajnoha, prockai, teigland, zkabelac
Type: Bug
Last Closed: 2017-08-25 13:06:29 UTC
Attachments:
- Logs of failed pvcreate command (attachment 1250130)
- Output of 'pvcreate -vvvv -ff' command (attachment 1255626)

Description Abdul Khumani 2017-02-14 06:44:46 UTC
Created attachment 1250130 [details]
Logs of failed pvcreate command

Description of problem:
The pvcreate operation during a LUN clone fails with SnapDrive for UNIX on RHEL 7.3.

Version-Release number of selected component (if applicable):
Red Hat Enterprise Linux Server release 7.3

How reproducible:
Perform a 'snapdrive snap connect' operation (clone from snapshot) in a RHEL 7.3 environment with lvm2 and multipath enabled.


Steps to Reproduce:
1. Install SnapDrive for UNIX on RHEL 7.3.
2. Enable the lvm2 and multipath services.
3. Edit the snapdrive configuration in the snapdrive.conf file to enable multipath.
4. Perform a 'snapdrive snap connect' operation to create a clone (a LUN clone from a snapshot).

Actual results:
The pvcreate operation, which runs during the clone to initialize the disk after importing it, fails.

Expected results:
The clone operation should succeed without any issues.

Additional info:
SnapDrive for UNIX is a host-based storage and data management solution for UNIX environments. On RHEL 7, 7.1, and 7.2, the 'snapdrive snap connect' operation worked after editing the lvm.conf file to set use_lvmetad to '0' and stopping the 'lvm2-lvmetad.service' service. A bug regarding this issue was filed in Red Hat Bugzilla (bug# 1244153) and was closed on RHEL 7.3. The clone operation on RHEL 7.3 fails even after applying this workaround.
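For reference, that earlier workaround amounts to the following commands (a sketch; the sed expression assumes a stock /etc/lvm/lvm.conf in which use_lvmetad is set on a single uncommented line):

  # disable the lvmetad metadata cache in lvm.conf, then stop the daemon
  sed -i 's/use_lvmetad = 1/use_lvmetad = 0/' /etc/lvm/lvm.conf
  systemctl stop lvm2-lvmetad.service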

Logs of the failed operation follow:

pvcreate -ff -y /dev/mapper/3600a09804d543942782b494539436443 2>/dev/null
  WARNING: Not using lvmetad because duplicate PVs were found.
  WARNING: Use multipath or vgimportclone to resolve duplicate PVs?
  WARNING: After duplicates are resolved, run "pvscan --cache" to enable lvmetad.
  WARNING: PV f5SP6p-nu1H-t2MB-SR3Y-f1tl-Jpeq-ZSpbJA on /dev/mapper/3600a09804d543942782b494539436443 was already found on /dev/mapper/3600a09804d543942782b494539436442.
  WARNING: PV f5SP6p-nu1H-t2MB-SR3Y-f1tl-Jpeq-ZSpbJA prefers device /dev/mapper/3600a09804d543942782b494539436442 because device is used by LV.
  Device /dev/mapper/3600a09804d543942782b494539436443 not found (or ignored by filtering).
 /usr/sbin/lvmdiskscan
  WARNING: Not using lvmetad because duplicate PVs were found.
  WARNING: Use multipath or vgimportclone to resolve duplicate PVs?
  WARNING: After duplicates are resolved, run "pvscan --cache" to enable lvmetad.
  /dev/loop0                                    [      16.00 MiB]
  /dev/rhel/root                                [      40.60 GiB]
  /dev/sda1                                     [       1.00 GiB]
  /dev/rhel/swap                                [       6.82 GiB]
  /dev/sda2                                     [      67.25 GiB] LVM physical volume
  /dev/mapper/360a980003246695a7124442f5a6d7036 [     100.00 MiB] LVM physical volume
  /dev/mapper/3600a09804d543942612b46595a317652 [     100.00 MiB] LVM physical volume
  /dev/mapper/3600a09804d543942612b46595a317878 [      10.00 MiB] LVM physical volume
  /dev/absan_SdDg/absan_SdHv                    [       8.00 MiB]
  /dev/mapper/3600a09804d543942782b494539436358 [      10.00 MiB] LVM physical volume
  /dev/abhisan_SdDg/abhisan_SdHv                [       8.00 MiB]
  /dev/mapper/3600a09804d543942612b46595a317939 [      10.00 MiB] LVM physical volume
  /dev/absan2_SdDg/absan2_SdHv                  [       8.00 MiB]
  /dev/mapper/3600a09804d543942612b46595a317942 [      10.00 MiB] LVM physical volume
  /dev/absan2-1_SdDg_1/absan2_SdHv_1            [       8.00 MiB]
  /dev/mapper/3600a09804d543942782b494539436442 [     100.00 MiB] LVM physical volume
  /dev/newtest_SdDg/newtest_SdHv                [      96.00 MiB]
  WARNING: PV f5SP6p-nu1H-t2MB-SR3Y-f1tl-Jpeq-ZSpbJA on /dev/mapper/3600a09804d543942782b494539436443 was already found on /dev/mapper/3600a09804d543942782b494539436442.
  /dev/mapper/3600a09804d543942782b494539436443 [     100.00 MiB]
  /dev/aqk_SdDg/aqk_SdHv                        [      96.00 MiB]
  /dev/RHEL73_RRT_SdDg/RHEL73_RRT_SdHv          [      96.00 MiB]
  /dev/rhel/home                                [      19.82 GiB]
  9 disks
  4 partitions
  0 LVM physical volume whole disks
  8 LVM physical volumes
pvcreate -ff -y /dev/mapper/3600a09804d543942782b494539436443 2>/dev/null
  WARNING: Not using lvmetad because duplicate PVs were found.
  WARNING: Use multipath or vgimportclone to resolve duplicate PVs?
  WARNING: After duplicates are resolved, run "pvscan --cache" to enable lvmetad.
  WARNING: PV f5SP6p-nu1H-t2MB-SR3Y-f1tl-Jpeq-ZSpbJA on /dev/mapper/3600a09804d543942782b494539436443 was already found on /dev/mapper/3600a09804d543942782b494539436442.
  WARNING: PV f5SP6p-nu1H-t2MB-SR3Y-f1tl-Jpeq-ZSpbJA prefers device /dev/mapper/3600a09804d543942782b494539436442 because device is used by LV.
  Device /dev/mapper/3600a09804d543942782b494539436443 not found (or ignored by filtering).
 [19266 9e6f0700]i,2,6,Operation::addErrorReport: (1) DiskGroup:/dev/mapper/newtest-1_SdDg_0 6 1952:fabricateDiskGroup failed for DG /dev/mapper/newtest-1_SdDg_0 : pvcreate failed for device /dev/mapper/3600a09804d543942782b494539436443

Comment 2 Zdenek Kabelac 2017-02-14 16:04:18 UTC
Please attach full '-vvvv' output.

But also make sure you are working on a system with properly configured devices - i.e. that no duplicate devices (misconfigured multipath?) are present in your system.

The easy way could be to set up the lvm.conf global/filter option and accept only a valid subset of devices.
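For illustration, such a filter could accept only the cloned device and reject everything else. A minimal sketch, assuming the device name from the logs above and the devices section of /etc/lvm/lvm.conf:

  devices {
      # accept the cloned multipath device, reject all other block devices
      global_filter = [ "a|/dev/mapper/3600a09804d543942782b494539436443|", "r|.*|" ]
  }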

Comment 3 David Teigland 2017-02-14 17:04:18 UTC
lvm has recently become much more careful in handling duplicate PVs, and you are seeing the effects of those improvements here.  lvm provides a number of ways to resolve duplicate PVs, but pvcreate -ff has not (so far) been one of them.  We can look into adding pvcreate -ff to the set of tools for handling duplicate PVs, but at present the possible solutions you can investigate include (sketched after this list):

- using a filter with pvcreate to select the device you want (i.e. comment 2)
- using vgimportclone
- using 'pvremove -ff' on the duplicate PV, then pvcreate
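
For example, the last two options might look like this (a sketch using the duplicate device from the logs above; the volume group name 'newvg' passed to vgimportclone is a hypothetical placeholder):

  # import the cloned PV under fresh PV/VG UUIDs
  vgimportclone --basevgname newvg /dev/mapper/3600a09804d543942782b494539436443

  # or: wipe the duplicate PV label, then re-initialize the device
  pvremove -ff -y /dev/mapper/3600a09804d543942782b494539436443
  pvcreate -ff -y /dev/mapper/3600a09804d543942782b494539436443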

Comment 4 Abdul Khumani 2017-02-20 10:39:53 UTC
Created attachment 1255626 [details]
output of pvcreate -vvvv -ff command

Added the log file for the 'pvcreate -vvvv -ff' command.

Comment 5 David Teigland 2017-02-20 16:39:47 UTC
Thanks, the behavior is as expected.  Do any of the three methods in comment 3 work for you?

Comment 6 Abdul Khumani 2017-02-22 06:48:45 UTC
Yes, using 'pvremove -ff' on the duplicate PV, then doing pvcreate, worked for me. Thanks for your help and support.