
Bug 1899263

Summary: "Assertion failed: can't _pv_write non-orphan PV (in VG )" after cache vgsplit left an orphaned lvol0_pmspare volume
Product: Red Hat Enterprise Linux 8
Component: lvm2
lvm2 sub component: Command-line tools
Version: 8.4
Status: CLOSED ERRATA
Severity: medium
Priority: medium
Keywords: Triaged
Reporter: Zdenek Kabelac <zkabelac>
Assignee: Zdenek Kabelac <zkabelac>
QA Contact: cluster-qe <cluster-qe>
CC: agk, cluster-qe, cmarthal, heinzm, jbrassow, mcsontos, msnitzer, prajnoha, thornber, zkabelac
Flags: pm-rhel: mirror+
Target Milestone: rc
Target Release: 8.0
Hardware: x86_64
OS: Linux
Fixed In Version: lvm2-2.03.12-6.el8
Doc Type: If docs needed, set a value
Type: Bug
Clone Of: 1710096
Bug Depends On: 1710096
Last Closed: 2021-11-09 19:45:20 UTC

Description Zdenek Kabelac 2020-11-18 19:04:48 UTC
+++ This bug was initially created as a clone of Bug #1710096 +++

Description of problem:
This is a side effect of RFE 1102949. 


[root@hayes-03 ~]# lvcreate -L 1G -n origin seven /dev/sde1
  Volume group "seven" not found
  Cannot process volume group seven
[root@hayes-03 ~]# vgcreate seven /dev/sd[bcdefghi]1
  Volume group "seven" successfully created
[root@hayes-03 ~]# lvcreate -L 1G -n origin seven /dev/sde1
  Logical volume "origin" created.
[root@hayes-03 ~]# lvcreate -L 200M -n cache seven /dev/sdi1
  Logical volume "cache" created.
[root@hayes-03 ~]# lvcreate -L 8M -n cache_meta seven /dev/sdi1
  Logical volume "cache_meta" created.
[root@hayes-03 ~]# lvconvert --yes --type cache-pool --poolmetadata seven/cache_meta seven/cache
  WARNING: Converting seven/cache and seven/cache_meta to cache pool's data and metadata volumes with metadata wiping.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
  Converted seven/cache and seven/cache_meta to cache pool.
[root@hayes-03 ~]# lvconvert --yes --type cache --cachepool seven/cache seven/origin
  Logical volume seven/origin is now cached.
[root@hayes-03 ~]# lvs -a -o +devices
  Unknown feature in status: 8 18/2048 128 9/3200 0 49 0 0 0 9 0 3 metadata2 writethrough no_discard_passdown 2 migration_threshold 2048 smq 0 rw - 
  Unknown feature in status: 8 18/2048 128 9/3200 0 49 0 0 0 9 0 3 metadata2 writethrough no_discard_passdown 2 migration_threshold 2048 smq 0 rw - 
  LV              VG    Attr       LSize   Pool    Origin         Data%  Meta%  Move Log Cpy%Sync Convert Devices        
  [cache]         seven Cwi---C--- 200.00m                        0.28   0.88            0.00             cache_cdata(0) 
  [cache_cdata]   seven Cwi-ao---- 200.00m                                                                /dev/sdi1(0)   
  [cache_cmeta]   seven ewi-ao----   8.00m                                                                /dev/sdi1(50)  
  [lvol0_pmspare] seven ewi-------   8.00m                                                                /dev/sdb1(0)   
  origin          seven Cwi-a-C---   1.00g [cache] [origin_corig] 0.28   0.88            0.00             origin_corig(0)
  [origin_corig]  seven owi-aoC---   1.00g                                                                /dev/sde1(0)   


[root@hayes-03 ~]# vgchange -an seven
  0 logical volume(s) in volume group "seven" now active

[root@hayes-03 ~]# vgsplit seven ten /dev/sde1 /dev/sdi1
  New volume group "ten" successfully split from "seven"

[root@hayes-03 ~]# vgs
  VG    #PV #LV #SN Attr   VSize VFree
  seven   6   0   0 wz--n- 6.76t 6.76t
  ten     2   1   0 wz--n- 2.25t 2.25t
[root@hayes-03 ~]# pvscan
  PV /dev/sde1   VG ten             lvm2 [446.62 GiB / 445.62 GiB free]
  PV /dev/sdi1   VG ten             lvm2 [<1.82 TiB / <1.82 TiB free]
  PV /dev/sdb1   VG seven           lvm2 [446.62 GiB / 446.61 GiB free]
  PV /dev/sdc1   VG seven           lvm2 [446.62 GiB / 446.62 GiB free]
  PV /dev/sdd1   VG seven           lvm2 [446.62 GiB / 446.62 GiB free]
  PV /dev/sdf1   VG seven           lvm2 [<1.82 TiB / <1.82 TiB free]
  PV /dev/sdg1   VG seven           lvm2 [<1.82 TiB / <1.82 TiB free]
  PV /dev/sdh1   VG seven           lvm2 [<1.82 TiB / <1.82 TiB free]
  Total: 8 [<9.02 TiB] / in use: 8 [<9.02 TiB] / in no VG: 0 [0   ]

[root@hayes-03 ~]# vgremove seven
  Assertion failed: can't _pv_write non-orphan PV (in VG )
  Failed to remove physical volume "/dev/sdb1" from volume group "seven"
  Volume group "seven" not properly removed

[root@hayes-03 ~]# pvremove /dev/sdb1
  WARNING: PV /dev/sdb1 is marked in use but no VG was found using it.
  WARNING: PV /dev/sdb1 might need repairing.
  WARNING: PV /dev/sdb1 is marked in use but no VG was found using it.
  WARNING: PV /dev/sdb1 might need repairing.
  PV /dev/sdb1 is used by a VG but its metadata is missing.
  (If you are certain you need pvremove, then confirm by using --force twice.)
  /dev/sdb1: physical volume label not removed.

[root@hayes-03 ~]# pvremove -ff /dev/sdb1
  WARNING: PV /dev/sdb1 is marked in use but no VG was found using it.
  WARNING: PV /dev/sdb1 might need repairing.
  WARNING: PV /dev/sdb1 is marked in use but no VG was found using it.
  WARNING: PV /dev/sdb1 might need repairing.
  WARNING: PV /dev/sdb1 is used by VG <unknown>.
Really WIPE LABELS from physical volume "/dev/sdb1" of volume group "<unknown>" [y/n]? y
  WARNING: Wiping physical volume label from /dev/sdb1 of volume group "<unknown>".
  WARNING: PV /dev/sdb1 is marked in use but no VG was found using it.
  WARNING: PV /dev/sdb1 might need repairing.
  WARNING: PV /dev/sdb1 is marked in use but no VG was found using it.
  WARNING: PV /dev/sdb1 might need repairing.
  Labels on physical volume "/dev/sdb1" successfully wiped.
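
Condensed, the sequence above is (a reproducer sketch; the device names are specific to this test machine, so substitute scratch disks of sufficient size):

vgcreate seven /dev/sd[bcdefghi]1
lvcreate -L 1G -n origin seven /dev/sde1
lvcreate -L 200M -n cache seven /dev/sdi1
lvcreate -L 8M -n cache_meta seven /dev/sdi1
lvconvert --yes --type cache-pool --poolmetadata seven/cache_meta seven/cache
lvconvert --yes --type cache --cachepool seven/cache seven/origin
vgchange -an seven
# The split moves the cached LV into "ten" but leaves [lvol0_pmspare] behind on /dev/sdb1.
vgsplit seven ten /dev/sde1 /dev/sdi1
# Fails with: Assertion failed: can't _pv_write non-orphan PV (in VG )
vgremove seven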



Version-Release number of selected component (if applicable):
3.10.0-1046.el7.x86_64

lvm2-2.02.185-1.el7    BUILT: Mon May 13 04:36:30 CDT 2019
lvm2-libs-2.02.185-1.el7    BUILT: Mon May 13 04:36:30 CDT 2019
lvm2-cluster-2.02.185-1.el7    BUILT: Mon May 13 04:36:30 CDT 2019
lvm2-lockd-2.02.185-1.el7    BUILT: Mon May 13 04:36:30 CDT 2019
lvm2-python-boom-0.9-17.el7.2    BUILT: Mon May 13 04:37:00 CDT 2019
cmirror-2.02.185-1.el7    BUILT: Mon May 13 04:36:30 CDT 2019
device-mapper-1.02.158-1.el7    BUILT: Mon May 13 04:36:30 CDT 2019
device-mapper-libs-1.02.158-1.el7    BUILT: Mon May 13 04:36:30 CDT 2019
device-mapper-event-1.02.158-1.el7    BUILT: Mon May 13 04:36:30 CDT 2019
device-mapper-event-libs-1.02.158-1.el7    BUILT: Mon May 13 04:36:30 CDT 2019
device-mapper-persistent-data-0.8.1-1.el7    BUILT: Sat May  4 14:53:53 CDT 2019


How reproducible:
Every time

Comment 1 Zdenek Kabelac 2021-01-24 22:57:26 UTC
Should be addressed by this upstream patch:

https://www.redhat.com/archives/lvm-devel/2021-January/msg00020.html

Comment 4 Corey Marthaler 2021-06-01 18:56:39 UTC
This is not yet fixed in the latest 8.5 lvm2 build.

kernel-4.18.0-305.7.el8.kpq1    BUILT: Mon May 17 12:55:07 CDT 2021
lvm2-2.03.12-2.el8    BUILT: Tue Jun  1 06:55:37 CDT 2021
lvm2-libs-2.03.12-2.el8    BUILT: Tue Jun  1 06:55:37 CDT 2021
lvm2-dbusd-2.03.12-1.el8    BUILT: Sat May 22 01:54:03 CDT 2021


[root@hayes-01 ~]# vgcreate  seven /dev/sd[bcdefghij]1
  Physical volume "/dev/sdb1" successfully created.
  Physical volume "/dev/sdc1" successfully created.
  Physical volume "/dev/sdd1" successfully created.
  Physical volume "/dev/sde1" successfully created.
  Physical volume "/dev/sdf1" successfully created.
  Physical volume "/dev/sdg1" successfully created.
  Physical volume "/dev/sdh1" successfully created.
  Physical volume "/dev/sdi1" successfully created.
  Physical volume "/dev/sdj1" successfully created.
  Volume group "seven" successfully created
[root@hayes-01 ~]# lvcreate -L 1G -n origin seven /dev/sde1
  Logical volume "origin" created.
[root@hayes-01 ~]# lvcreate -L 200M -n cache seven /dev/sdi1
  Logical volume "cache" created.
[root@hayes-01 ~]# lvcreate -L 8M -n cache_meta seven /dev/sdi1
  Logical volume "cache_meta" created.
[root@hayes-01 ~]# lvconvert --yes --type cache-pool --poolmetadata seven/cache_meta seven/cache
  WARNING: Converting seven/cache and seven/cache_meta to cache pool's data and metadata volumes with metadata wiping.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
  Converted seven/cache and seven/cache_meta to cache pool.
[root@hayes-01 ~]# lvconvert --yes --type cache --cachepool seven/cache seven/origin
  Logical volume seven/origin is now cached.

[root@hayes-01 ~]# lvs -a -o +devices
  LV                  VG    Attr       LSize   Pool          Origin         Data%  Meta%  Move Log Cpy%Sync Convert Devices             
  [cache_cpool]       seven Cwi---C--- 200.00m                              0.25   0.88            0.00             cache_cpool_cdata(0)
  [cache_cpool_cdata] seven Cwi-ao---- 200.00m                                                                      /dev/sdi1(0)        
  [cache_cpool_cmeta] seven ewi-ao----   8.00m                                                                      /dev/sdi1(50)       
  [lvol0_pmspare]     seven ewi-------   8.00m                                                                      /dev/sdb1(0)        
  origin              seven Cwi-a-C---   1.00g [cache_cpool] [origin_corig] 0.25   0.88            0.00             origin_corig(0)     
  [origin_corig]      seven owi-aoC---   1.00g                                                                      /dev/sde1(0)        

[root@hayes-01 ~]# vgchange -an seven
  0 logical volume(s) in volume group "seven" now active
[root@hayes-01 ~]# vgsplit seven ten /dev/sde1 /dev/sdi1
  New volume group "ten" successfully split from "seven"
[root@hayes-01 ~]# vgs
  VG    #PV #LV #SN Attr   VSize  VFree 
  seven   7   0   0 wz--n- 12.73t 12.73t
  ten     2   1   0 wz--n- <3.64t <3.64t

[root@hayes-01 ~]# pvscan
  PV /dev/sde1   VG ten             lvm2 [<1.82 TiB / <1.82 TiB free]
  PV /dev/sdi1   VG ten             lvm2 [<1.82 TiB / <1.82 TiB free]
  PV /dev/sdb1   VG seven           lvm2 [<1.82 TiB / <1.82 TiB free]
  PV /dev/sdc1   VG seven           lvm2 [<1.82 TiB / <1.82 TiB free]
  PV /dev/sdd1   VG seven           lvm2 [<1.82 TiB / <1.82 TiB free]
  PV /dev/sdf1   VG seven           lvm2 [<1.82 TiB / <1.82 TiB free]
  PV /dev/sdg1   VG seven           lvm2 [<1.82 TiB / <1.82 TiB free]
  PV /dev/sdh1   VG seven           lvm2 [<1.82 TiB / <1.82 TiB free]
  PV /dev/sdj1   VG seven           lvm2 [<1.82 TiB / <1.82 TiB free]
  Total: 9 [<16.37 TiB] / in use: 9 [<16.37 TiB] / in no VG: 0 [0   ]

[root@hayes-01 ~]# vgremove seven
  Assertion failed: can't _pv_write non-orphan PV (in VG )
  Failed to remove physical volume "/dev/sdb1" from volume group "seven"
  Volume group "seven" not properly removed

[root@hayes-01 ~]# pvremove /dev/sdb1
  PV /dev/sdb1 is used by a VG but its metadata is missing.
  (If you are certain you need pvremove, then confirm by using --force twice.)
  /dev/sdb1: physical volume label not removed.

[root@hayes-01 ~]# pvremove -ff /dev/sdb1
  WARNING: PV /dev/sdb1 is used by VG <unknown>.
Really WIPE LABELS from physical volume "/dev/sdb1" of volume group "<unknown>" [y/n]? y
  WARNING: Wiping physical volume label from /dev/sdb1 of volume group "<unknown>".
  Labels on physical volume "/dev/sdb1" successfully wiped.

Comment 7 Zdenek Kabelac 2021-07-21 14:16:38 UTC
So there was yet another bug in the handling of the pmspare volume in a VG.

Fixing patches provided:

1.)

vgremove can now remove even a 'standalone' _pmspare in a VG
(a state that can be reached e.g. via vgcfgrestore); see the sketch below.

https://listman.redhat.com/archives/lvm-devel/2021-July/msg00019.html


2.) 

vgsplit failed to handle the pmspare for both VGs; with this patch:

https://listman.redhat.com/archives/lvm-devel/2021-July/msg00020.html


vgsplit gains support for the option --poolmetadataspare y|n:
it can now remove the pmspare from a VG that is left with no pool volume after the split,
and the new VG gets a pmspare if any pool volume has been moved into it (usage sketched below).


Tested by this patch:

https://listman.redhat.com/archives/lvm-devel/2021-July/msg00021.html
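
A hedged usage sketch of the two fixes (VG and device names reuse the reproducer from the description; the behavior follows the patch postings and is not verified here):

# 1.) A VG left holding only a standalone [lvol0_pmspare] can now be removed:
vgremove seven
# 2.) The new option controls the spare during the split; with "n" no
# pmspare is kept or created in either VG:
vgsplit --poolmetadataspare n seven ten /dev/sde1 /dev/sdi1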

Comment 8 Marian Csontos 2021-07-23 09:19:01 UTC
This breaks the vgsplit-thin test. The VG is split fine, but the attempt to merge the VGs back fails on duplicate pmspare LVs:

233 [ 0:02] #vgsplit-thin.sh:43+ lvs -a -o+devices LVMTEST304186vg1 LVMTEST304186vg2
234 [ 0:02]   LV              VG               Attr       LSize  Pool  Origin  Data%  Meta%  Move Log Cpy%Sync Convert Devices                                                                    
235 [ 0:02]   LV2             LVMTEST304186vg1 Vwi---tz-- 12.00m pool2                                                                                                                            
236 [ 0:02]   [lvol0_pmspare] LVMTEST304186vg1 ewi-------  4.00m                                                       /dev/shm/lvm2-slave/LVMTEST304186.M4OH7i7vcy/dev/mapper/LVMTEST304186pv3(2)
237 [ 0:02]   pool2           LVMTEST304186vg1 twi---tz--  8.00m                                                       pool2_tdata(0)                                                             
238 [ 0:02]   [pool2_tdata]   LVMTEST304186vg1 Twi-------  8.00m                                                       /dev/shm/lvm2-slave/LVMTEST304186.M4OH7i7vcy/dev/mapper/LVMTEST304186pv3(0)
239 [ 0:02]   [pool2_tmeta]   LVMTEST304186vg1 ewi-------  4.00m                                                       /dev/shm/lvm2-slave/LVMTEST304186.M4OH7i7vcy/dev/mapper/LVMTEST304186pv4(0)
240 [ 0:02]   LV1             LVMTEST304186vg2 owi---tz-- 12.00m pool1                                                                                                                            
241 [ 0:02]   LV3             LVMTEST304186vg2 Vwi---tz--  4.00m pool1 eorigin                                                                                                                    
242 [ 0:02]   eorigin         LVMTEST304186vg2 ori-------  4.00m                                                       /dev/shm/lvm2-slave/LVMTEST304186.M4OH7i7vcy/dev/mapper/LVMTEST304186pv5(0)
243 [ 0:02]   [lvol0_pmspare] LVMTEST304186vg2 ewi-------  4.00m                                                       /dev/shm/lvm2-slave/LVMTEST304186.M4OH7i7vcy/dev/mapper/LVMTEST304186pv1(0)
244 [ 0:02]   pool1           LVMTEST304186vg2 twi---tz--  8.00m                                                       pool1_tdata(0)                                                             
245 [ 0:02]   [pool1_tdata]   LVMTEST304186vg2 Twi-------  8.00m                                                       /dev/shm/lvm2-slave/LVMTEST304186.M4OH7i7vcy/dev/mapper/LVMTEST304186pv1(1)
246 [ 0:02]   [pool1_tmeta]   LVMTEST304186vg2 ewi-------  4.00m                                                       /dev/shm/lvm2-slave/LVMTEST304186.M4OH7i7vcy/dev/mapper/LVMTEST304186pv2(0)
247 [ 0:02]   snap            LVMTEST304186vg2 swi---s---  4.00m       LV1                                             /dev/shm/lvm2-slave/LVMTEST304186.M4OH7i7vcy/dev/mapper/LVMTEST304186pv2(1)
248 [ 0:02] 
249 [ 0:02] vgmerge $vg1 $vg2
250 [ 0:02] #vgsplit-thin.sh:45+ vgmerge LVMTEST304186vg1 LVMTEST304186vg2
251 [ 0:02]   Duplicate logical volume name "lvol0_pmspare" in "LVMTEST304186vg1" and "LVMTEST304186vg2"
252 [ 0:02] set +vx; STACKTRACE; set -vx
253 [ 0:02] ##vgsplit-thin.sh:45+ set +vx
254 [ 0:02] ## - /srv/buildbot/lvm2-slave/Fedora_Rawhide_x86_64_KVM/build/test/shell/vgsplit-thin.sh:45
255 [ 0:02] ## 1 STACKTRACE() called from /srv/buildbot/lvm2-slave/Fedora_Rawhide_x86_64_KVM/build/test/shell/vgsplit-thin.sh:45

Comment 9 Zdenek Kabelac 2021-07-23 17:36:55 UTC
So this was a new bug discovered thanks to the newly enhanced 'vgsplit': the problem was that 'vgmerge' was unable to merge VGs when both of them had a pool metadata spare volume.

A new patch removes the 'smaller' _pmspare:

https://listman.redhat.com/archives/lvm-devel/2021-July/msg00026.html


But 'vgmerge' should also resolve the _pmspare sizing in compliance with the
option --poolmetadataspare y|n, added with this patch:

https://listman.redhat.com/archives/lvm-devel/2021-July/msg00027.html


Tested upstream with:

https://listman.redhat.com/archives/lvm-devel/2021-July/msg00028.html
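
A hedged sketch of the fixed merge path (VG names taken from the failing test log above; option semantics per the patch postings):

# A duplicate lvol0_pmspare no longer blocks the merge (the smaller spare
# is dropped), and the spare policy can be set explicitly:
vgmerge --poolmetadataspare y LVMTEST304186vg1 LVMTEST304186vg2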

Comment 10 Corey Marthaler 2021-08-05 19:37:16 UTC
Marking Verified:Tested with the latest rpms.

kernel-4.18.0-323.el8    BUILT: Wed Jul 14 12:12:22 CDT 2021
lvm2-2.03.12-6.el8    BUILT: Tue Aug  3 07:23:05 CDT 2021
lvm2-libs-2.03.12-6.el8    BUILT: Tue Aug  3 07:23:05 CDT 2021


[root@hayes-03 ~]# lvcreate -L 1G -n origin seven /dev/sde1
  Logical volume "origin" created.
[root@hayes-03 ~]# lvcreate -L 200M -n cache seven /dev/sdi1
  Logical volume "cache" created.
[root@hayes-03 ~]# lvcreate -L 8M -n cache_meta seven /dev/sdi1
  Logical volume "cache_meta" created.
[root@hayes-03 ~]# lvconvert --yes --type cache-pool --poolmetadata seven/cache_meta seven/cache
  WARNING: Converting seven/cache and seven/cache_meta to cache pool's data and metadata volumes with metadata wiping.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
  Converted seven/cache and seven/cache_meta to cache pool.
[root@hayes-03 ~]# lvconvert --yes --type cache --cachepool seven/cache seven/origin
  Logical volume seven/origin is now cached.
[root@hayes-03 ~]# lvs -a -o +devices
  LV                  VG    Attr       LSize   Pool          Origin         Data%  Meta%  Move Log Cpy%Sync Convert Devices             
  [cache_cpool]       seven Cwi---C--- 200.00m                              0.25   0.88            0.00             cache_cpool_cdata(0)
  [cache_cpool_cdata] seven Cwi-ao---- 200.00m                                                                      /dev/sdi1(0)        
  [cache_cpool_cmeta] seven ewi-ao----   8.00m                                                                      /dev/sdi1(50)       
  [lvol0_pmspare]     seven ewi-------   8.00m                                                                      /dev/sdb1(0)        
  origin              seven Cwi-a-C---   1.00g [cache_cpool] [origin_corig] 0.25   0.88            0.00             origin_corig(0)     
  [origin_corig]      seven owi-aoC---   1.00g                                                                      /dev/sde1(0)        

[root@hayes-03 ~]# vgchange -an seven
  0 logical volume(s) in volume group "seven" now active
[root@hayes-03 ~]# vgsplit seven ten /dev/sde1 /dev/sdi1
  New volume group "ten" successfully split from "seven"
[root@hayes-03 ~]# vgs
  VG    #PV #LV #SN Attr   VSize   VFree  
  seven  11   0   0 wz--n- <15.86t <15.86t
  ten     2   1   0 wz--n-   2.25t   2.25t

[root@hayes-03 ~]# pvscan
  PV /dev/sde1   VG ten             lvm2 [446.62 GiB / 445.61 GiB free]
  PV /dev/sdi1   VG ten             lvm2 [<1.82 TiB / <1.82 TiB free]
  PV /dev/sdb1   VG seven           lvm2 [446.62 GiB / 446.62 GiB free]
  PV /dev/sdc1   VG seven           lvm2 [446.62 GiB / 446.62 GiB free]
  PV /dev/sdd1   VG seven           lvm2 [446.62 GiB / 446.62 GiB free]
  PV /dev/sdf1   VG seven           lvm2 [<1.82 TiB / <1.82 TiB free]
  PV /dev/sdg1   VG seven           lvm2 [<1.82 TiB / <1.82 TiB free]
  PV /dev/sdh1   VG seven           lvm2 [<1.82 TiB / <1.82 TiB free]
  PV /dev/sdj1   VG seven           lvm2 [<1.82 TiB / <1.82 TiB free]
  PV /dev/sdk1   VG seven           lvm2 [<1.82 TiB / <1.82 TiB free]
  PV /dev/sdl1   VG seven           lvm2 [<1.82 TiB / <1.82 TiB free]
  PV /dev/sdm1   VG seven           lvm2 [<1.82 TiB / <1.82 TiB free]
  PV /dev/sdn1   VG seven           lvm2 [<1.82 TiB / <1.82 TiB free]
  Total: 13 [18.11 TiB] / in use: 13 [18.11 TiB] / in no VG: 0 [0   ]

[root@hayes-03 ~]# vgremove seven
  Volume group "seven" successfully removed
[root@hayes-03 ~]# pvremove /dev/sdb1
  Labels on physical volume "/dev/sdb1" successfully wiped.

Comment 14 Corey Marthaler 2021-08-20 18:14:54 UTC
Marking verified with the latest kernel and lvm.

kernel-4.18.0-330.el8    BUILT: Mon Aug  9 18:02:28 CDT 2021
lvm2-2.03.12-6.el8    BUILT: Tue Aug  3 07:23:05 CDT 2021
lvm2-libs-2.03.12-6.el8    BUILT: Tue Aug  3 07:23:05 CDT 2021

[root@hayes-03 ~]# lvs -a -o +devices
  LV                  VG    Attr       LSize   Pool          Origin         Data%  Meta%   Cpy%Sync Devices             
  [cache_cpool]       seven Cwi---C--- 200.00m                              0.25   0.88    0.00     cache_cpool_cdata(0)
  [cache_cpool_cdata] seven Cwi-ao---- 200.00m                                                      /dev/sdi1(0)        
  [cache_cpool_cmeta] seven ewi-ao----   8.00m                                                      /dev/sdi1(50)       
  [lvol0_pmspare]     seven ewi-------   8.00m                                                      /dev/sdb1(0)        
  origin              seven Cwi-a-C---   1.00g [cache_cpool] [origin_corig] 0.25   0.88    0.00     origin_corig(0)     
  [origin_corig]      seven owi-aoC---   1.00g                                                      /dev/sde1(0)

[root@hayes-03 ~]# vgchange -an seven
  0 logical volume(s) in volume group "seven" now active
[root@hayes-03 ~]# vgsplit seven ten /dev/sde1 /dev/sdi1
  New volume group "ten" successfully split from "seven"

[root@hayes-03 ~]# vgs
  VG    #PV #LV #SN Attr   VSize   VFree  
  seven  12   0   0 wz--n- <17.68t <17.68t
  ten     2   1   0 wz--n-   2.25t   2.25t

[root@hayes-03 ~]# vgremove seven
  Volume group "seven" successfully removed
[root@hayes-03 ~]# pvremove /dev/sdb1
  Labels on physical volume "/dev/sdb1" successfully wiped.

Comment 17 errata-xmlrpc 2021-11-09 19:45:20 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (lvm2 bug fix and enhancement update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:4431