Bug 1167431 - "Assertion failed: can't _pv_write non-orphan PV" when removing vg containing thin pool stacked on cache volume
Summary: "Assertion failed: can't _pv_write non-orphan PV" when removing vg containing...
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: lvm2
Version: 7.1
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Zdenek Kabelac
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2014-11-24 18:17 UTC by Corey Marthaler
Modified: 2021-09-03 12:48 UTC (History)
8 users

Fixed In Version: lvm2-2.02.125-1.el7
Doc Type: Bug Fix
Doc Text:
A bug in the removal of a cached LV caused the vgremove command to fail. The failing code has been fixed.
Clone Of:
Environment:
Last Closed: 2015-11-19 12:45:57 UTC
Target Upstream Version:
Embargoed:


Attachments
-vvvv of the vgremove (73.27 KB, text/plain)
2014-11-24 18:18 UTC, Corey Marthaler


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2015:2147 0 normal SHIPPED_LIVE lvm2 bug fix and enhancement update 2015-11-19 11:11:07 UTC

Description Corey Marthaler 2014-11-24 18:17:20 UTC
Description of problem:
This may be another version of bug 1161347.



SCENARIO - [attempt_to_create_pool_from_already_cached_vol]

*** Cache info for this scenario ***
*  origin (slow):  /dev/sdc1
*  pool (fast):    /dev/sde1
************************************

Attempt to stack a cache pool on top of a cache volume
Create origin (slow) volume
lvcreate -L 4G -n stack1 cache_sanity /dev/sdc1

Create cache data and cache metadata (fast) volumes
lvcreate -L 2G -n pool cache_sanity /dev/sde1
lvcreate -L 8M -n pool_meta cache_sanity /dev/sde1

Create cache pool volume by combining the cache data and cache metadata (fast) volumes
lvconvert --yes --type cache-pool --cachemode writethrough --poolmetadata cache_sanity/pool_meta cache_sanity/pool
  WARNING: Converting logical volume cache_sanity/pool and cache_sanity/pool_meta to pool's data and metadata volumes.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
Create cached volume by combining the cache pool (fast) and origin (slow) volumes
lvconvert --yes --type cache --cachepool cache_sanity/pool cache_sanity/stack1

lvcreate -L 8M -n meta2 cache_sanity /dev/sde1
Attempt to create a new cache pool volume by combining an existing cached volume and the new cache metadata volume
lvconvert --yes --type cache-pool --poolmetadata cache_sanity/meta2 cache_sanity/stack1
  Cached LV cache_sanity/stack1 could be only converted into a thin pool volume.

[root@host-116 ~]# lvconvert --yes --type thin-pool --poolmetadata cache_sanity/meta2 cache_sanity/stack1
  WARNING: Converting logical volume cache_sanity/stack1 and cache_sanity/meta2 to pool's data and metadata volumes.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
  Converted cache_sanity/stack1 to thin pool.
[root@host-116 ~]# lvs -a -o +devices
  LV                   Attr       LSize   Pool   Origin               Data%  Meta% Cpy%Sync Devices
  [lvol0_pmspare]      ewi-------   8.00m                                                   /dev/sdb1(0)
  [pool]               Cwi---C---   2.00g                             0.02   5.27  0.00     pool_cdata(0)
  [pool_cdata]         Cwi-ao----   2.00g                                                   /dev/sde1(0)
  [pool_cmeta]         ewi-ao----   8.00m                                                   /dev/sde1(512)
  stack1               twi-a-tz--   4.00g                             0.00   0.63           stack1_tdata(0)
  [stack1_tdata]       Cwi-aoC---   4.00g [pool] [stack1_tdata_corig] 0.02   5.27  0.00     stack1_tdata_corig(0)
  [stack1_tdata_corig] owi-aoC---   4.00g                                                   /dev/sdc1(0)
  [stack1_tmeta]       ewi-ao----   8.00m                                                   /dev/sde1(514)

[root@host-116 ~]# vgremove -f cache_sanity
  Logical volume "stack1" successfully removed
  Assertion failed: can't _pv_write non-orphan PV (in VG )
  Failed to remove physical volume "/dev/sdb1" from volume group "cache_sanity"
  Volume group "cache_sanity" not properly removed


#format_text/format-text.c:992         Renaming /etc/lvm/backup/cache_sanity.tmp to /etc/lvm/backup/cache_sanity
#metadata/lv_manip.c:5551   Logical volume "stack1" successfully removed
#misc/lvm-flock.c:200       Locking /run/lock/lvm/P_orphans WB
#libdm-common.c:903         Preparing SELinux context for /run/lock/lvm/P_orphans to system_u:object_r:lvm_lock_t:s0.
#misc/lvm-flock.c:101         _do_flock /run/lock/lvm/P_orphans:aux WB
#misc/lvm-flock.c:101         _do_flock /run/lock/lvm/P_orphans WB
#misc/lvm-flock.c:48         _undo_flock /run/lock/lvm/P_orphans:aux
#libdm-common.c:906         Resetting SELinux context to default value.
#cache/lvmcache.c:439         Metadata cache has no info for vgname: "#orphans"
#metadata/metadata.c:584     Removing physical volume "/dev/sdb1" from volume group "cache_sanity"
#device/dev-io.c:313       /dev/sdb1: size is 31455207 sectors
#metadata/metadata.c:4088   Assertion failed: can't _pv_write non-orphan PV (in VG )
#metadata/metadata.c:598   Failed to remove physical volume "/dev/sdb1" from volume group "cache_sanity"
#metadata/metadata.c:584     Removing physical volume "/dev/sdc1" from volume group "cache_sanity"
#device/dev-io.c:313       /dev/sdc1: size is 31455207 sectors
#format_text/format-text.c:1342         Creating metadata area on /dev/sdc1 at sector 8 size 2040 sectors
#format_text/text_label.c:184         /dev/sdc1: Preparing PV label header aO42AM-wMfF-om3m-s0Wo-14a0-7LmX-vl7E0k size 16105065984 with da1 (2048s, 0s) mda1 (8s, 2040s)
#label/label.c:328       /dev/sdc1: Writing label to sector 1 with stored offset 32.
#cache/lvmetad.c:816         Telling lvmetad to store PV /dev/sdc1 (aO42AM-wMfF-om3m-s0Wo-14a0-7LmX-vl7E0k)
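
For reference, the sequence above condenses into the following reproducer. This is only a sketch: the pvcreate/vgcreate setup and the exact device names are assumptions, since the harness output above does not show how the cache_sanity VG was originally built.

# Assumed setup (not shown above): a throwaway VG on three spare partitions.
pvcreate /dev/sdb1 /dev/sdc1 /dev/sde1
vgcreate cache_sanity /dev/sdb1 /dev/sdc1 /dev/sde1

# Origin (slow) volume plus cache data and metadata (fast) volumes.
lvcreate -L 4G -n stack1 cache_sanity /dev/sdc1
lvcreate -L 2G -n pool cache_sanity /dev/sde1
lvcreate -L 8M -n pool_meta cache_sanity /dev/sde1

# Combine data and metadata into a cache pool, then cache the origin.
lvconvert --yes --type cache-pool --cachemode writethrough --poolmetadata cache_sanity/pool_meta cache_sanity/pool
lvconvert --yes --type cache --cachepool cache_sanity/pool cache_sanity/stack1

# Convert the cached LV into a thin pool, then remove the VG.
lvcreate -L 8M -n meta2 cache_sanity /dev/sde1
lvconvert --yes --type thin-pool --poolmetadata cache_sanity/meta2 cache_sanity/stack1
vgremove -f cache_sanity    # hits the _pv_write assertion on lvm2-2.02.112-1.el7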



Version-Release number of selected component (if applicable):
3.10.0-206.el7.x86_64

lvm2-2.02.112-1.el7    BUILT: Tue Nov 11 09:39:35 CST 2014
lvm2-libs-2.02.112-1.el7    BUILT: Tue Nov 11 09:39:35 CST 2014
lvm2-cluster-2.02.112-1.el7    BUILT: Tue Nov 11 09:39:35 CST 2014
device-mapper-1.02.91-1.el7    BUILT: Tue Nov 11 09:39:35 CST 2014
device-mapper-libs-1.02.91-1.el7    BUILT: Tue Nov 11 09:39:35 CST 2014
device-mapper-event-1.02.91-1.el7    BUILT: Tue Nov 11 09:39:35 CST 2014
device-mapper-event-libs-1.02.91-1.el7    BUILT: Tue Nov 11 09:39:35 CST 2014
device-mapper-persistent-data-0.4.1-2.el7    BUILT: Wed Nov 12 12:39:46 CST 2014
cmirror-2.02.112-1.el7    BUILT: Tue Nov 11 09:39:35 CST 2014


How reproducible:
Every time

Comment 1 Corey Marthaler 2014-11-24 18:18:20 UTC
Created attachment 960885 [details]
-vvvv of the vgremove

Comment 3 Zdenek Kabelac 2015-02-09 15:18:53 UTC
I'm convinced this bug was already fixed with the cache fixes in 2.02.113, so version 2.02.115 should work.
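
A quick way to tell whether a given system already carries the fix is to compare the installed build against the versions mentioned here (2.02.115 upstream per the comment above, lvm2-2.02.125-1.el7 per "Fixed In Version"). A minimal check, assuming an RPM-based install:

rpm -q lvm2      # e.g. lvm2-2.02.112-1.el7 is affected, lvm2-2.02.125-1.el7 is fixed
lvm version      # prints the LVM, library and driver versions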

Comment 7 Corey Marthaler 2015-08-05 23:39:51 UTC
Marking verified. I'm no longer able to reproduce this with the latest rpms.


3.10.0-302.el7.x86_64
lvm2-2.02.126-1.el7    BUILT: Tue Jul 28 11:32:33 CDT 2015
lvm2-libs-2.02.126-1.el7    BUILT: Tue Jul 28 11:32:33 CDT 2015
lvm2-cluster-2.02.126-1.el7    BUILT: Tue Jul 28 11:32:33 CDT 2015
device-mapper-1.02.103-1.el7    BUILT: Tue Jul 28 11:32:33 CDT 2015
device-mapper-libs-1.02.103-1.el7    BUILT: Tue Jul 28 11:32:33 CDT 2015
device-mapper-event-1.02.103-1.el7    BUILT: Tue Jul 28 11:32:33 CDT 2015
device-mapper-event-libs-1.02.103-1.el7    BUILT: Tue Jul 28 11:32:33 CDT 2015
device-mapper-persistent-data-0.5.4-1.el7    BUILT: Fri Jul 17 08:56:22 CDT 2015
cmirror-2.02.126-1.el7    BUILT: Tue Jul 28 11:32:33 CDT 2015
sanlock-3.2.4-1.el7    BUILT: Fri Jun 19 12:48:49 CDT 2015
sanlock-lib-3.2.4-1.el7    BUILT: Fri Jun 19 12:48:49 CDT 2015
lvm2-lockd-2.02.126-1.el7    BUILT: Tue Jul 28 11:32:33 CDT 2015



SCENARIO - [attempt_to_create_pool_from_already_cached_vol]

*** Cache info for this scenario ***
*  origin (slow):  /dev/mapper/mpathe2
*  pool (fast):    /dev/mapper/mpathe1
************************************

Attempt to stack a cache pool on top of a cache volume
Create origin (slow) volume
lvcreate -L 4G -n stack1 cache_sanity /dev/mapper/mpathe2

Create cache data and cache metadata (fast) volumes
lvcreate -L 2G -n pool cache_sanity /dev/mapper/mpathe1
lvcreate -L 12M -n pool_meta cache_sanity /dev/mapper/mpathe1

Create cache pool volume by combining the cache data and cache metadata (fast) volumes
lvconvert --yes --type cache-pool --cachemode writethrough -c 64 --poolmetadata cache_sanity/pool_meta cache_sanity/pool
  WARNING: Converting logical volume cache_sanity/pool and cache_sanity/pool_meta to pool's data and metadata volumes.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
Create cached volume by combining the cache pool (fast) and origin (slow) volumes
lvconvert --yes --type cache --cachepool cache_sanity/pool cache_sanity/stack1
Changing cache policy to cleaner

lvcreate -L 8M -n meta2 cache_sanity /dev/mapper/mpathe1
Attempt to create a new cache pool volume by combining an existing cached volume and the new metadata volume
lvconvert --yes --type cache-pool --poolmetadata cache_sanity/meta2 cache_sanity/stack1
  Cached LV cache_sanity/stack1 could be only converted into a thin pool volume.

Now create a new thin pool volume by combining the existing cached volume and the new metadata volume
lvconvert --yes --type thin-pool --poolmetadata cache_sanity/meta2 cache_sanity/stack1

[root@harding-03 ~]# lvconvert --yes --type thin-pool --poolmetadata cache_sanity/meta2 cache_sanity/stack1
  WARNING: Converting logical volume cache_sanity/stack1 and cache_sanity/meta2 to pool's data and metadata volumes.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
  Converted cache_sanity/stack1 to thin pool.
[root@harding-03 ~]# vgremove -f cache_sanity
  Logical volume "stack1" successfully removed
  Volume group "cache_sanity" successfully removed

Comment 8 errata-xmlrpc 2015-11-19 12:45:57 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2015-2147.html
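
On a registered RHEL 7 system, the packages from this advisory are normally picked up with a plain yum update; a sketch, assuming the stock RHEL 7 repositories are enabled:

yum update lvm2          # updates lvm2 (and its device-mapper dependencies as needed)
rpm -q lvm2              # should then report lvm2-2.02.125-1.el7 or later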

