Bug 1421941 - pvcreate command in clone operation fails when using LVM2
Summary: pvcreate command in clone operation fails when using LVM2
Keywords:
Status: CLOSED WORKSFORME
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: lvm2
Version: 7.3
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: LVM and device-mapper development team
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2017-02-14 06:44 UTC by Abdul Khumani
Modified: 2021-09-03 12:41 UTC
CC List: 9 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-08-25 13:06:29 UTC
Target Upstream Version:
Embargoed:


Attachments
Logs of failed pvcreate command (3.62 KB, text/plain), 2017-02-14 06:44 UTC, Abdul Khumani
output of pvcreate -vvvv -ff command (125.28 KB, text/plain), 2017-02-20 10:39 UTC, Abdul Khumani


Links
Red Hat Bugzilla 1244153 (CLOSED): Clone operation fails when use_lvmetad is enabled. Last updated 2021-09-03 12:36:51 UTC.

Description Abdul Khumani 2017-02-14 06:44:46 UTC
Created attachment 1250130
Logs of failed pvcreate command

Description of problem:
The pvcreate operation during a LUN clone fails with SnapDrive for UNIX on RHEL 7.3.

Version-Release number of selected component (if applicable):
Red Hat Enterprise Linux Server release 7.3

How reproducible:
Perform the 'snapdrive snap connect' operation (clone from snapshot) in a RHEL 7.3 environment with lvm2 and multipath enabled.


Steps to Reproduce:
1. Install SnapDrive for UNIX on RHEL 7.3.
2. Enable the lvm2 and multipath services.
3. Edit the snapdrive.conf file to enable multipath in the SnapDrive configuration.
4. Perform the 'snapdrive snap connect' operation to create the clone (a LUN clone from a snapshot); see the sketch after this list.
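
A minimal sketch of steps 2 through 4, assuming stock RHEL 7 unit names; the snapdrive.conf option name and the 'snap connect' arguments depend on the SnapDrive version and the storage layout, so they are left as placeholders:

  # step 2: enable the multipath and lvmetad services (standard RHEL 7 units)
  systemctl enable multipathd && systemctl start multipathd
  systemctl enable lvm2-lvmetad && systemctl start lvm2-lvmetad
  # step 3: turn on the multipath option in snapdrive.conf (name varies; see the NetApp docs)
  # step 4: connect a clone from a snapshot
  snapdrive snap connect <arguments for the LUN and snapshot>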

Actual results:
The pvcreate operation, which runs during the clone to initialize the disk after importing it, fails.

Expected results:
Clone operation should succeed without any issues.

Additional info:
SnapDrive for UNIX is a host-based storage and data management solution for UNIX environments. On earlier releases (RHEL 7.0, 7.1, and 7.2), the 'snapdrive snap connect' operation works after editing the lvm.conf file to set use_lvmetad to '0' and stopping the 'lvm2-lvmetad.service' service.
A bug for this issue was filed in Red Hat Bugzilla (bug #1244153) and has been closed as of RHEL 7.3. The clone operation on RHEL 7.3 fails even after applying this workaround.
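
For reference, a minimal sketch of that earlier workaround, assuming the stock RHEL 7 paths and unit names (stopping the socket unit as well is an addition here, so that socket activation does not restart the daemon):

  # in /etc/lvm/lvm.conf, global section:
  #   use_lvmetad = 0
  systemctl stop lvm2-lvmetad.service lvm2-lvmetad.socket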

Logs from the failed operation follow:

pvcreate -ff -y /dev/mapper/3600a09804d543942782b494539436443 2>/dev/null
  WARNING: Not using lvmetad because duplicate PVs were found.
  WARNING: Use multipath or vgimportclone to resolve duplicate PVs?
  WARNING: After duplicates are resolved, run "pvscan --cache" to enable lvmetad.
  WARNING: PV f5SP6p-nu1H-t2MB-SR3Y-f1tl-Jpeq-ZSpbJA on /dev/mapper/3600a09804d543942782b494539436443 was already found on /dev/mapper/3600a09804d543942782b494539436442.
  WARNING: PV f5SP6p-nu1H-t2MB-SR3Y-f1tl-Jpeq-ZSpbJA prefers device /dev/mapper/3600a09804d543942782b494539436442 because device is used by LV.
  Device /dev/mapper/3600a09804d543942782b494539436443 not found (or ignored by filtering).
 /usr/sbin/lvmdiskscan
  WARNING: Not using lvmetad because duplicate PVs were found.
  WARNING: Use multipath or vgimportclone to resolve duplicate PVs?
  WARNING: After duplicates are resolved, run "pvscan --cache" to enable lvmetad.
  /dev/loop0                                    [      16.00 MiB]
  /dev/rhel/root                                [      40.60 GiB]
  /dev/sda1                                     [       1.00 GiB]
  /dev/rhel/swap                                [       6.82 GiB]
  /dev/sda2                                     [      67.25 GiB] LVM physical volume
  /dev/mapper/360a980003246695a7124442f5a6d7036 [     100.00 MiB] LVM physical volume
  /dev/mapper/3600a09804d543942612b46595a317652 [     100.00 MiB] LVM physical volume
  /dev/mapper/3600a09804d543942612b46595a317878 [      10.00 MiB] LVM physical volume
  /dev/absan_SdDg/absan_SdHv                    [       8.00 MiB]
  /dev/mapper/3600a09804d543942782b494539436358 [      10.00 MiB] LVM physical volume
  /dev/abhisan_SdDg/abhisan_SdHv                [       8.00 MiB]
  /dev/mapper/3600a09804d543942612b46595a317939 [      10.00 MiB] LVM physical volume
  /dev/absan2_SdDg/absan2_SdHv                  [       8.00 MiB]
  /dev/mapper/3600a09804d543942612b46595a317942 [      10.00 MiB] LVM physical volume
  /dev/absan2-1_SdDg_1/absan2_SdHv_1            [       8.00 MiB]
  /dev/mapper/3600a09804d543942782b494539436442 [     100.00 MiB] LVM physical volume
  /dev/newtest_SdDg/newtest_SdHv                [      96.00 MiB]
  WARNING: PV f5SP6p-nu1H-t2MB-SR3Y-f1tl-Jpeq-ZSpbJA on /dev/mapper/3600a09804d543942782b494539436443 was already found on /dev/mapper/3600a09804d543942782b494539436442.
  /dev/mapper/3600a09804d543942782b494539436443 [     100.00 MiB]
  /dev/aqk_SdDg/aqk_SdHv                        [      96.00 MiB]
  /dev/RHEL73_RRT_SdDg/RHEL73_RRT_SdHv          [      96.00 MiB]
  /dev/rhel/home                                [      19.82 GiB]
  9 disks
  4 partitions
  0 LVM physical volume whole disks
  8 LVM physical volumes
pvcreate -ff -y /dev/mapper/3600a09804d543942782b494539436443 2>/dev/null
  WARNING: Not using lvmetad because duplicate PVs were found.
  WARNING: Use multipath or vgimportclone to resolve duplicate PVs?
  WARNING: After duplicates are resolved, run "pvscan --cache" to enable lvmetad.
  WARNING: PV f5SP6p-nu1H-t2MB-SR3Y-f1tl-Jpeq-ZSpbJA on /dev/mapper/3600a09804d543942782b494539436443 was already found on /dev/mapper/3600a09804d543942782b494539436442.
  WARNING: PV f5SP6p-nu1H-t2MB-SR3Y-f1tl-Jpeq-ZSpbJA prefers device /dev/mapper/3600a09804d543942782b494539436442 because device is used by LV.
  Device /dev/mapper/3600a09804d543942782b494539436443 not found (or ignored by filtering).
 [19266 9e6f0700]i,2,6,Operation::addErrorReport: (1) DiskGroup:/dev/mapper/newtest-1_SdDg_0 6 1952:fabricateDiskGroup failed for DG /dev/mapper/newtest-1_SdDg_0 : pvcreate failed for device /dev/mapper/3600a09804d543942782b494539436443

Comment 2 Zdenek Kabelac 2017-02-14 16:04:18 UTC
Please attach the full '-vvvv' output.

Also make sure you are working on a system with properly configured devices, i.e. no duplicate devices (misconfigured multipath?) present on the system.

An easy approach could be to set the global_filter option (devices section of lvm.conf) to accept only the valid subset of devices.
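
For illustration, a sketch of such a filter in /etc/lvm/lvm.conf that accepts only multipath devices; the regexes are examples, not the reporter's actual configuration:

  devices {
      # accept multipath devices, reject everything else
      global_filter = [ "a|^/dev/mapper/|", "r|.*|" ]
  }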

Comment 3 David Teigland 2017-02-14 17:04:18 UTC
lvm has recently become much more careful in handling duplicate PVs, and you are seeing the effects of those improvements here. lvm provides a number of ways to resolve duplicate PVs, but pvcreate -ff has not (so far) been one of them. We can look into adding pvcreate -ff to the set of tools for handling duplicate PVs, but at present the possible solutions you can investigate include the following (sketched below):

- using a filter with pvcreate to select the device you want (i.e. comment 2)
- using vgimportclone
- using 'pvremove -ff' on the duplicate PV, then pvcreate
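
Minimal sketches of the first two options, using the duplicate device path from the logs above; the filter regex and the new VG name are illustrative assumptions, not values from this report. The third option is sketched after comment 6, where the reporter confirms it worked.

  # option 1: per-command filter override that accepts only the intended device
  pvcreate --config 'devices { filter=["a|^/dev/mapper/3600a09804d543942782b494539436443$|","r|.*|"] }' \
      -ff -y /dev/mapper/3600a09804d543942782b494539436443
  # option 2: give the cloned PV/VG a fresh identity (newtest_clone is a hypothetical name)
  vgimportclone --basevgname newtest_clone /dev/mapper/3600a09804d543942782b494539436443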

Comment 4 Abdul Khumani 2017-02-20 10:39:53 UTC
Created attachment 1255626
output of pvcreate -vvvv -ff command

Added the log file for the 'pvcreate -vvvv -ff' command.

Comment 5 David Teigland 2017-02-20 16:39:47 UTC
Thanks, the behavior is as expected.  Do any of the three methods in comment 3 work for you?

Comment 6 Abdul Khumani 2017-02-22 06:48:45 UTC
Yes, using 'pvremove -ff' on the duplicate PV and then running pvcreate worked for me. Thanks for your help and support.
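
For completeness, a sketch of that sequence with the device path from the logs above, plus a verification step; the exact flags the reporter used are not shown in the thread:

  # wipe the duplicate PV signature, then recreate the PV
  pvremove -ff -y /dev/mapper/3600a09804d543942782b494539436443
  pvcreate -y /dev/mapper/3600a09804d543942782b494539436443
  # confirm the device now shows up as a fresh physical volume
  pvs /dev/mapper/3600a09804d543942782b494539436443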

