Bug 1064374 - pvremove of a PV on thin-volume fails thinking pool device is a PV too (with duplicate UUID as the real PV)
Summary: pvremove of a PV on thin-volume fails thinking pool device is a PV too (with duplicate UUID as the real PV)
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: lvm2
Version: 7.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: LVM and device-mapper development team
QA Contact: Cluster QE
URL:
Whiteboard:
Depends On: 1032635
Blocks:
 
Reported: 2014-02-12 13:49 UTC by Marian Csontos
Modified: 2021-09-08 20:24 UTC
CC List: 14 users

Fixed In Version: lvm2-2.02.105-12.el7
Doc Type: Bug Fix
Doc Text:
Clone Of: 1032635
Environment:
Last Closed: 2014-06-13 11:25:23 UTC
Target Upstream Version:
Embargoed:


Attachments: none

Description Marian Csontos 2014-02-12 13:49:25 UTC
LVM itself should filter out its internal devices, just as we often ask other tools to do.

In this case the workaround is to overwrite the start of the thin LV used as a PV with zeros (see the sketch below).

This still applies to the upstream/el7 version.
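
A minimal sketch of that workaround, assuming the device names from the reproducer below (a PV on thin LV vg/lv backed by pool vg/pool); wiping 1 MiB is an illustrative size, the point is only to destroy the PV label at the start of the LV:

    # destroys any data at the start of vg/lv - zero out the PV label
    dd if=/dev/zero of=/dev/vg/lv bs=1M count=1 conv=fsync
    # or remove only the detected signatures instead of raw zeroing
    wipefs -a /dev/vg/lv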

+++ This bug was initially created as a clone of Bug #1032635 +++

Description of problem:
After creating a PV on the first thin LV in the pool, removing the PV fails because the PV signature is also visible on the pool device itself, which our tools then treat as a PV as well.

This seems to be caused by recent cache removals.

Version-Release number of selected component (if applicable):
lvm2-2.02.105-0.151.el6.x86_64 - upstream lvm2.


How reproducible:
100%

Steps to Reproduce:
1. Create a thin pool vg/pool.
2. Create a thin LV vg/lv in the pool (a concrete command sequence is sketched below).
3. pvcreate /dev/vg/lv
4. pvremove /dev/vg/lv
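
For illustration only, a concrete command sequence for steps 1-4 above (the VG name vg and the sizes are placeholders; the VG is assumed to already exist on a spare disk):

    lvcreate --thinpool pool -L 1G vg      # step 1: thin pool vg/pool
    lvcreate -V 500M -T vg/pool -n lv      # step 2: thin LV vg/lv in the pool
    pvcreate /dev/vg/lv                    # step 3
    pvremove /dev/vg/lv                    # step 4: fails as shown below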

Actual results:
`pvremove /dev/vg/lv` fails with the following messages:

     Found duplicate PV 3rYjdZ2W5aTWcI9j1xaQfDK2fIRjC5e0: using /dev/vg/lv not /dev/mapper/vg-pool
     Found duplicate PV 3rYjdZ2W5aTWcI9j1xaQfDK2fIRjC5e0: using /dev/mapper/vg-pool not /dev/vg/lv
     Internal error: Physical Volume /dev/vg/lv has a label, but is neither in a VG nor orphan.  

Expected results:
pvremove should pass

Additional info:

--- Additional comment from Petr Rockai on 2013-11-20 10:34:53 EST ---

I reckon that this is something that filters should be taking care of. Thin pools shouldn't be scanned for PV labels.
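
For context, this is the sort of filter a user could add by hand today to keep the pool device from being scanned for PV labels; an illustrative lvm.conf fragment using the device name from the reproducer:

    # devices section of /etc/lvm/lvm.conf: reject the internal pool device,
    # accept everything else
    filter = [ "r|^/dev/mapper/vg-pool$|", "a|.*|" ]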

--- Additional comment from Marian Csontos on 2013-11-20 10:42:27 EST ---

We are talking here about LVM consuming LVM devices, not some 3rd-party tool. I really do not think a user should have to check and modify filters after creating a pool, and I am sure it is just my misunderstanding of what you wanted to say.

Comment 2 Corey Marthaler 2014-03-07 23:28:26 UTC
This is affecting our tests.


3.10.0-97.el7.x86_64
lvm2-2.02.105-7.el7    BUILT: Wed Feb 26 09:29:34 CST 2014
lvm2-libs-2.02.105-7.el7    BUILT: Wed Feb 26 09:29:34 CST 2014
lvm2-cluster-2.02.105-7.el7    BUILT: Wed Feb 26 09:29:34 CST 2014
device-mapper-1.02.84-7.el7    BUILT: Wed Feb 26 09:29:34 CST 2014
device-mapper-libs-1.02.84-7.el7    BUILT: Wed Feb 26 09:29:34 CST 2014
device-mapper-event-1.02.84-7.el7    BUILT: Wed Feb 26 09:29:34 CST 2014
device-mapper-event-libs-1.02.84-7.el7    BUILT: Wed Feb 26 09:29:34 CST 2014
device-mapper-persistent-data-0.2.8-4.el7    BUILT: Fri Jan 24 14:28:55 CST 2014
cmirror-2.02.105-7.el7    BUILT: Wed Feb 26 09:29:34 CST 2014


SCENARIO - [stacked_snaps]
Stack snapshots on top of existing snapshots
Setting up base level origin/snapshot
lvcreate --thinpool POOL --zero y -L 1G snapper_thinp
Sanity checking pool device metadata
(thin_check /dev/mapper/snapper_thinp-POOL_tmeta)
examining superblock
examining devices tree
examining mapping tree
lvcreate --virtualsize 1G -T snapper_thinp/POOL -n origin
lvcreate -V 1G -T snapper_thinp/POOL -n other1
lvcreate -V 1G -T snapper_thinp/POOL -n other2
lvcreate --virtualsize 1G -T snapper_thinp/POOL -n other3
lvcreate -V 1G -T snapper_thinp/POOL -n other4
lvcreate -V 1G -T snapper_thinp/POOL -n other5
lvcreate -K -s /dev/snapper_thinp/origin -n snap_level1
Creating stacked level PV/VG
pvcreate /dev/snapper_thinp/origin
vgcreate snapper_thinp_stack /dev/snapper_thinp/origin
  Found duplicate PV MT141wVf5orT4VNQv6dRDfqkVaRiidrV: using /dev/snapper_thinp/origin not /dev/mapper/snapper_thinp-POOL
  Found duplicate PV MT141wVf5orT4VNQv6dRDfqkVaRiidrV: using /dev/mapper/snapper_thinp-POOL not /dev/snapper_thinp/origin
  Found duplicate PV MT141wVf5orT4VNQv6dRDfqkVaRiidrV: using /dev/snapper_thinp/origin not /dev/mapper/snapper_thinp-POOL
Creating stacked level origin/snapshot
lvcreate -L 100M snapper_thinp_stack -n origin
  Found duplicate PV MT141wVf5orT4VNQv6dRDfqkVaRiidrV: using /dev/snapper_thinp/origin not /dev/mapper/snapper_thinp-POOL
lvcreate -s /dev/snapper_thinp_stack/origin -n snap_level2 -L 50M
  Found duplicate PV MT141wVf5orT4VNQv6dRDfqkVaRiidrV: using /dev/snapper_thinp/origin not /dev/mapper/snapper_thinp-POOL
Removing stacked level origin/snapshot
lvremove -f /dev/snapper_thinp_stack/snap_level2
  Found duplicate PV MT141wVf5orT4VNQv6dRDfqkVaRiidrV: using /dev/snapper_thinp/origin not /dev/mapper/snapper_thinp-POOL
lvremove -f /dev/snapper_thinp_stack/origin
  Found duplicate PV MT141wVf5orT4VNQv6dRDfqkVaRiidrV: using /dev/snapper_thinp/origin not /dev/mapper/snapper_thinp-POOL
Removing stacked level VG/PV
vgremove snapper_thinp_stack
  Found duplicate PV MT141wVf5orT4VNQv6dRDfqkVaRiidrV: using /dev/snapper_thinp/origin not /dev/mapper/snapper_thinp-POOL
pvremove /dev/snapper_thinp/origin
  Found duplicate PV MT141wVf5orT4VNQv6dRDfqkVaRiidrV: using /dev/mapper/snapper_thinp-POOL not /dev/snapper_thinp/origin
  Internal error: Physical Volume /dev/snapper_thinp/origin has a label, but is neither in a VG nor orphan.
unable to remove PV on level1 origin

Comment 4 Marian Csontos 2014-03-10 08:26:50 UTC
A possible workaround is to create a small zeroed "header" LV in the pool first and keep it there, so that no process can create something that looks like a "valid" header at the start of the pool (sketched below).
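
A rough sketch of that workaround as described, with placeholder names in the style of the original reproducer (the 4M size is arbitrary; the LV only has to claim the start of the pool's data space):

    # create the "header" LV first so it owns the first chunks of the pool,
    # zero it, and keep it around
    lvcreate -V 4M -T vg/pool -n header
    dd if=/dev/zero of=/dev/vg/header bs=1M count=4 conv=fsync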

Comment 5 Zdenek Kabelac 2014-03-11 14:41:13 UTC
OK - we will try to add a fix based on the UUID.
Private devices will have a longer UUID (with private suffixes).
Such devices will be excluded from being used as valid lvm2 devices.

Comment 6 Zdenek Kabelac 2014-03-12 13:40:01 UTC
The problem is solved upstream by these patches:

https://www.redhat.com/archives/lvm-devel/2014-March/msg00075.html
https://www.redhat.com/archives/lvm-devel/2014-March/msg00076.html
https://www.redhat.com/archives/lvm-devel/2014-March/msg00079.html

With these patches a -pool suffix is added to the UUID even for a 'public' pool LV,
so device_is_usable() is able to detect the longer UUID with the suffix and easily skip such a volume.

The patches also fix another problem with the default activation of thin volumes, where -aay now uses local exclusive activation.
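
An illustrative way to inspect the device-mapper UUIDs the fix keys off (names as in the reproducer; actual UUID values will differ):

    # with the fix, the pool device's DM UUID carries a -pool suffix,
    # which device_is_usable() uses to skip it
    dmsetup info -c -o name,uuid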

Comment 8 Corey Marthaler 2014-03-12 21:11:25 UTC
Fix verified in the latest rpms.

3.10.0-84.el7.x86_64
lvm2-2.02.105-12.el7    BUILT: Wed Mar 12 10:49:52 CDT 2014
lvm2-libs-2.02.105-12.el7    BUILT: Wed Mar 12 10:49:52 CDT 2014
lvm2-cluster-2.02.105-12.el7    BUILT: Wed Mar 12 10:49:52 CDT 2014
device-mapper-1.02.84-12.el7    BUILT: Wed Mar 12 10:49:52 CDT 2014
device-mapper-libs-1.02.84-12.el7    BUILT: Wed Mar 12 10:49:52 CDT 2014
device-mapper-event-1.02.84-12.el7    BUILT: Wed Mar 12 10:49:52 CDT 2014
device-mapper-event-libs-1.02.84-12.el7    BUILT: Wed Mar 12 10:49:52 CDT 2014
device-mapper-persistent-data-0.2.8-4.el7    BUILT: Fri Jan 24 14:28:55 CST 2014
cmirror-2.02.105-12.el7    BUILT: Wed Mar 12 10:49:52 CDT 2014


SCENARIO - [stacked_snaps]
Stack snapshots on top of existing snapshots
Setting up base level origin/snapshot
lvcreate --thinpool POOL --zero n -L 1G snapper_thinp
Sanity checking pool device metadata
(thin_check /dev/mapper/snapper_thinp-POOL_tmeta)
examining superblock
examining devices tree
examining mapping tree
lvcreate --virtualsize 1G -T snapper_thinp/POOL -n origin
lvcreate -V 1G -T snapper_thinp/POOL -n other1
lvcreate --virtualsize 1G -T snapper_thinp/POOL -n other2
lvcreate --virtualsize 1G -T snapper_thinp/POOL -n other3
lvcreate --virtualsize 1G -T snapper_thinp/POOL -n other4
lvcreate -V 1G -T snapper_thinp/POOL -n other5
lvcreate -K -s /dev/snapper_thinp/origin -n snap_level1
Creating stacked level PV/VG
Creating stacked level origin/snapshot
lvcreate -L 100M snapper_thinp_stack -n origin
lvcreate -s /dev/snapper_thinp_stack/origin -n snap_level2 -L 50M
Removing stacked level origin/snapshot
Removing stacked level VG/PV
Removing base level VG/PV
Removing snap volume snapper_thinp/snap_level1
lvremove -f /dev/snapper_thinp/snap_level1
Removing thin origin and other virtual thin volumes
Removing thinpool snapper_thinp/POOL

Comment 9 Ludek Smid 2014-06-13 11:25:23 UTC
This request was resolved in Red Hat Enterprise Linux 7.0.

Contact your manager or support representative in case you have further questions about the request.

