Bug 1710096 - "Assertion failed: can't _pv_write non-orphan PV (in VG )" after cache vgsplit left an orphaned lvol0_pmspare volume
Summary: "Assertion failed: can't _pv_write non-orphan PV (in VG )" after cache vgspl...
Keywords:
Status: CLOSED DEFERRED
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: lvm2
Version: 7.7
Hardware: x86_64
OS: Linux
Priority: medium
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Zdenek Kabelac
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Depends On:
Blocks: 1899263
 
Reported: 2019-05-14 21:01 UTC by Corey Marthaler
Modified: 2021-09-03 12:53 UTC
CC List: 6 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Clones: 1899263
Environment:
Last Closed: 2020-11-18 19:11:00 UTC
Target Upstream Version:
Embargoed:



Description Corey Marthaler 2019-05-14 21:01:41 UTC
Description of problem:
This is a side effect of RFE 1102949. 


[root@hayes-03 ~]# lvcreate -L 1G -n origin seven /dev/sde1
  Volume group "seven" not found
  Cannot process volume group seven
[root@hayes-03 ~]# vgcreate seven /dev/sd[bcdefghi]1
  Volume group "seven" successfully created
[root@hayes-03 ~]# lvcreate -L 1G -n origin seven /dev/sde1
  Logical volume "origin" created.
[root@hayes-03 ~]# lvcreate -L 200M -n cache seven /dev/sdi1
  Logical volume "cache" created.
[root@hayes-03 ~]# lvcreate -L 8M -n cache_meta seven /dev/sdi1
  Logical volume "cache_meta" created.
[root@hayes-03 ~]# lvconvert --yes --type cache-pool --poolmetadata seven/cache_meta seven/cache
  WARNING: Converting seven/cache and seven/cache_meta to cache pool's data and metadata volumes with metadata wiping.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
  Converted seven/cache and seven/cache_meta to cache pool.
[root@hayes-03 ~]# lvconvert --yes --type cache --cachepool seven/cache seven/origin
  Logical volume seven/origin is now cached.
[root@hayes-03 ~]# lvs -a -o +devices
  Unknown feature in status: 8 18/2048 128 9/3200 0 49 0 0 0 9 0 3 metadata2 writethrough no_discard_passdown 2 migration_threshold 2048 smq 0 rw - 
  Unknown feature in status: 8 18/2048 128 9/3200 0 49 0 0 0 9 0 3 metadata2 writethrough no_discard_passdown 2 migration_threshold 2048 smq 0 rw - 
  LV              VG    Attr       LSize   Pool    Origin         Data%  Meta%  Move Log Cpy%Sync Convert Devices        
  [cache]         seven Cwi---C--- 200.00m                        0.28   0.88            0.00             cache_cdata(0) 
  [cache_cdata]   seven Cwi-ao---- 200.00m                                                                /dev/sdi1(0)   
  [cache_cmeta]   seven ewi-ao----   8.00m                                                                /dev/sdi1(50)  
  [lvol0_pmspare] seven ewi-------   8.00m                                                                /dev/sdb1(0)   
  origin          seven Cwi-a-C---   1.00g [cache] [origin_corig] 0.28   0.88            0.00             origin_corig(0)
  [origin_corig]  seven owi-aoC---   1.00g                                                                /dev/sde1(0)   


[root@hayes-03 ~]# vgchange -an seven
  0 logical volume(s) in volume group "seven" now active

[root@hayes-03 ~]# vgsplit seven ten /dev/sde1 /dev/sdi1
  New volume group "ten" successfully split from "seven"

[root@hayes-03 ~]# vgs
  VG    #PV #LV #SN Attr   VSize VFree
  seven   6   0   0 wz--n- 6.76t 6.76t
  ten     2   1   0 wz--n- 2.25t 2.25t
[root@hayes-03 ~]# pvscan
  PV /dev/sde1   VG ten             lvm2 [446.62 GiB / 445.62 GiB free]
  PV /dev/sdi1   VG ten             lvm2 [<1.82 TiB / <1.82 TiB free]
  PV /dev/sdb1   VG seven           lvm2 [446.62 GiB / 446.61 GiB free]
  PV /dev/sdc1   VG seven           lvm2 [446.62 GiB / 446.62 GiB free]
  PV /dev/sdd1   VG seven           lvm2 [446.62 GiB / 446.62 GiB free]
  PV /dev/sdf1   VG seven           lvm2 [<1.82 TiB / <1.82 TiB free]
  PV /dev/sdg1   VG seven           lvm2 [<1.82 TiB / <1.82 TiB free]
  PV /dev/sdh1   VG seven           lvm2 [<1.82 TiB / <1.82 TiB free]
  Total: 8 [<9.02 TiB] / in use: 8 [<9.02 TiB] / in no VG: 0 [0   ]

[root@hayes-03 ~]# vgremove seven
  Assertion failed: can't _pv_write non-orphan PV (in VG )
  Failed to remove physical volume "/dev/sdb1" from volume group "seven"
  Volume group "seven" not properly removed

[root@hayes-03 ~]# pvremove /dev/sdb1
  WARNING: PV /dev/sdb1 is marked in use but no VG was found using it.
  WARNING: PV /dev/sdb1 might need repairing.
  WARNING: PV /dev/sdb1 is marked in use but no VG was found using it.
  WARNING: PV /dev/sdb1 might need repairing.
  PV /dev/sdb1 is used by a VG but its metadata is missing.
  (If you are certain you need pvremove, then confirm by using --force twice.)
  /dev/sdb1: physical volume label not removed.

[root@hayes-03 ~]# pvremove -ff /dev/sdb1
  WARNING: PV /dev/sdb1 is marked in use but no VG was found using it.
  WARNING: PV /dev/sdb1 might need repairing.
  WARNING: PV /dev/sdb1 is marked in use but no VG was found using it.
  WARNING: PV /dev/sdb1 might need repairing.
  WARNING: PV /dev/sdb1 is used by VG <unknown>.
Really WIPE LABELS from physical volume "/dev/sdb1" of volume group "<unknown>" [y/n]? y
  WARNING: Wiping physical volume label from /dev/sdb1 of volume group "<unknown>".
  WARNING: PV /dev/sdb1 is marked in use but no VG was found using it.
  WARNING: PV /dev/sdb1 might need repairing.
  WARNING: PV /dev/sdb1 is marked in use but no VG was found using it.
  WARNING: PV /dev/sdb1 might need repairing.
  Labels on physical volume "/dev/sdb1" successfully wiped.
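
The orphaned lvol0_pmspare allocation appears to be what gets left behind on /dev/sdb1: the pvscan output above shows /dev/sdb1 with 446.61 of 446.62 GiB free, i.e. roughly the 8 MiB that backed [lvol0_pmspare], even though VG "seven" reports zero LVs. A quick way to check for this right after the vgsplit would be something like the following (a sketch, not part of the original session):

# Sketch (not run in the session above): inspect what remains in "seven"
# and on /dev/sdb1 after the vgsplit.
lvs -a -o lv_name,vg_name,devices seven
pvs -o pv_name,vg_name,pv_size,pv_free /dev/sdb1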



Version-Release number of selected component (if applicable):
3.10.0-1046.el7.x86_64

lvm2-2.02.185-1.el7    BUILT: Mon May 13 04:36:30 CDT 2019
lvm2-libs-2.02.185-1.el7    BUILT: Mon May 13 04:36:30 CDT 2019
lvm2-cluster-2.02.185-1.el7    BUILT: Mon May 13 04:36:30 CDT 2019
lvm2-lockd-2.02.185-1.el7    BUILT: Mon May 13 04:36:30 CDT 2019
lvm2-python-boom-0.9-17.el7.2    BUILT: Mon May 13 04:37:00 CDT 2019
cmirror-2.02.185-1.el7    BUILT: Mon May 13 04:36:30 CDT 2019
device-mapper-1.02.158-1.el7    BUILT: Mon May 13 04:36:30 CDT 2019
device-mapper-libs-1.02.158-1.el7    BUILT: Mon May 13 04:36:30 CDT 2019
device-mapper-event-1.02.158-1.el7    BUILT: Mon May 13 04:36:30 CDT 2019
device-mapper-event-libs-1.02.158-1.el7    BUILT: Mon May 13 04:36:30 CDT 2019
device-mapper-persistent-data-0.8.1-1.el7    BUILT: Sat May  4 14:53:53 CDT 2019


How reproducible:
Every time
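
For convenience, the reproduction steps above condensed into a single script. This is a sketch: the device names are specific to the reporter's test machine (hayes-03) and would need to be adapted.

#!/bin/bash
# Condensed reproducer for the vgsplit / lvol0_pmspare assertion (sketch).
# Device names are copied from the session above; adapt them locally.
set -ex

vgcreate seven /dev/sd[bcdefghi]1

# Build a cached LV; in the session above the hidden lvol0_pmspare
# volume ended up allocated on /dev/sdb1.
lvcreate -L 1G -n origin seven /dev/sde1
lvcreate -L 200M -n cache seven /dev/sdi1
lvcreate -L 8M -n cache_meta seven /dev/sdi1
lvconvert --yes --type cache-pool --poolmetadata seven/cache_meta seven/cache
lvconvert --yes --type cache --cachepool seven/cache seven/origin

# Deactivate and split the cached LV (and its cache-pool PV) into a new
# VG "ten"; /dev/sdb1, which holds lvol0_pmspare, stays behind in "seven".
vgchange -an seven
vgsplit seven ten /dev/sde1 /dev/sdi1

# Fails with: Assertion failed: can't _pv_write non-orphan PV (in VG )
vgremove seven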

Comment 2 Zdenek Kabelac 2020-11-18 19:11:00 UTC
Cloned as bug 1899263 for RHEL 8 evaluation; RHEL 7 is already closed for development.

