Bug 1664149 - PVs in VGs containing multiple native vdos can not be split even if vdos are on separate devices
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 8
Classification: Red Hat
Component: lvm2
Version: 8.0
Hardware: x86_64
OS: Linux
Priority: low
Severity: low
Target Milestone: rc
Target Release: 8.0
Assignee: Zdenek Kabelac
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2019-01-07 21:10 UTC by Corey Marthaler
Modified: 2021-09-07 11:52 UTC
CC List: 11 users

Fixed In Version: lvm2-2.03.11-0.2.20201103git8801a86.el8
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-05-18 15:01:41 UTC
Type: Bug
Target Upstream Version:
Embargoed:


Attachments:

Description Corey Marthaler 2019-01-07 21:10:29 UTC
Description of problem:
# 1. vgsplit of a PV in a VG containing a single vdo volume

[root@hayes-02 ~]# vgcreate seven /dev/sdb1 /dev/sdc1 /dev/sdd1
  Volume group "seven" successfully created

# only one vdo on one PV
[root@hayes-02 ~]# lvcreate  --type vdo -n vdoB -L 6G seven /dev/sdc1
  Logical volume "vdoB" created.
[root@hayes-02 ~]# lvs -a -o +devices
  LV             VG    Attr       LSize  Pool   Origin Data%   Devices
  vdoB           seven vwi-a-v--- <2.99g vpool0        0.00    vpool0(0)
  vpool0         seven dwi-ao----  6.00g               50.10   vpool0_vdata(0)
  [vpool0_vdata] seven Dwi-ao----  6.00g                       /dev/sdc1(0)
[root@hayes-02 ~]# dmsetup ls
seven-vpool0    (253:1)
seven-vdoB      (253:2)
seven-vpool0_vdata      (253:0)
[root@hayes-02 ~]# lvchange -an seven

# this PASSES
[root@hayes-02 ~]# vgsplit seven ten /dev/sdc1
  New volume group "ten" successfully split from "seven"
[root@hayes-02 ~]# vgs
  VG    #PV #LV #SN Attr   VSize  VFree 
  seven   2   0   0 wz--n- <3.64t <3.64t
  ten     1   2   0 wz--n- <1.82t  1.81t
[root@hayes-02 ~]# lvs -a -o +devices
  LV             VG  Attr       LSize  Pool   Origin Data%   Devices
  vdoB           ten vwi---v--- <2.99g vpool0                vpool0(0)
  vpool0         ten dwi-------  6.00g                       vpool0_vdata(0)
  [vpool0_vdata] ten Dwi-------  6.00g                       /dev/sdc1(0)




# 2. vgsplit of a PV in a VG containing multiple vdo volumes on multiple PVs

[root@hayes-02 ~]# vgcreate seven /dev/sdb1 /dev/sdc1 /dev/sdd1
  Volume group "seven" successfully created

# different PV for each vdo
[root@hayes-02 ~]# lvcreate  --type vdo -n vdoB -L 6G seven /dev/sdc1
  Logical volume "vdoB" created.
[root@hayes-02 ~]# lvcreate  --type vdo -n vdoA -L 6G seven /dev/sdd1
  Logical volume "vdoA" created.
[root@hayes-02 ~]# lvs -a -o +devices
  LV             VG    Attr       LSize  Pool   Origin Data%   Devices
  vdoA           seven vwi-a-v--- <2.99g vpool1        0.00    vpool1(0)
  vdoB           seven vwi-a-v--- <2.99g vpool0        0.00    vpool0(0)
  vpool0         seven dwi-ao----  6.00g               50.10   vpool0_vdata(0)
  [vpool0_vdata] seven Dwi-ao----  6.00g                       /dev/sdc1(0)
  vpool1         seven dwi-ao----  6.00g               50.10   vpool1_vdata(0)
  [vpool1_vdata] seven Dwi-ao----  6.00g                       /dev/sdd1(0)
[root@hayes-02 ~]# dmsetup ls
seven-vpool0    (253:1)
seven-vpool1_vdata      (253:3)
seven-vdoB      (253:2)
seven-vdoA      (253:5)
seven-vpool0_vdata      (253:0)
seven-vpool1    (253:4)
[root@hayes-02 ~]# lvchange -an seven

# This is the issue: each of these operations thinks vpool1/vpool0 span multiple PVs, but to the best of my knowledge they don't, so both splits should work (a layout check is sketched after the failing commands below).

# this FAILS
[root@hayes-02 ~]# vgsplit seven ten /dev/sdc1
  Can't split LV vpool1 between two Volume Groups
[root@hayes-02 ~]# vgsplit seven ten /dev/sdd1
  Can't split LV vpool0 between two Volume Groups
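
A quick way to confirm that each pool's data sub-LV really sits on a single PV (a hedged sketch; the field list and --select expression below are illustrative choices, not part of the original report):

[root@hayes-02 ~]# lvs -a -o lv_name,seg_pe_ranges,devices --select 'lv_name=~"vpool"' seven
  # expected: vpool0_vdata shows a single segment on /dev/sdc1 only, and
  # vpool1_vdata a single segment on /dev/sdd1 only, i.e. neither pool
  # actually spans multiple PVs.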




# 3. vgsplit of a PV in a VG containing multiple vdo volumes on the same PV

[root@hayes-02 ~]# vgcreate seven /dev/sdb1 /dev/sdc1 /dev/sdd1
  Volume group "seven" successfully created

# same PV for each vdo
[root@hayes-02 ~]# lvcreate  --type vdo -n vdoB -L 6G seven /dev/sdc1
  Logical volume "vdoB" created.
[root@hayes-02 ~]# lvcreate  --type vdo -n vdoA -L 6G seven /dev/sdc1
  Logical volume "vdoA" created.
[root@hayes-02 ~]# lvs -a -o +devices
  LV             VG    Attr       LSize  Pool   Origin Data%   Devices
  vdoA           seven vwi-a-v--- <2.99g vpool1        0.00    vpool1(0)
  vdoB           seven vwi-a-v--- <2.99g vpool0        0.00    vpool0(0)
  vpool0         seven dwi-ao----  6.00g               50.10   vpool0_vdata(0)
  [vpool0_vdata] seven Dwi-ao----  6.00g                       /dev/sdc1(0)
  vpool1         seven dwi-ao----  6.00g               50.10   vpool1_vdata(0)
  [vpool1_vdata] seven Dwi-ao----  6.00g                       /dev/sdc1(1536)
[root@hayes-02 ~]# dmsetup ls
seven-vpool0    (253:1)
seven-vpool1_vdata      (253:3)
seven-vdoB      (253:2)
seven-vdoA      (253:5)
seven-vpool0_vdata      (253:0)
seven-vpool1    (253:4)
[root@hayes-02 ~]# lvchange -an seven

# this PASSES
[root@hayes-02 ~]# vgsplit seven ten /dev/sdc1
  New volume group "ten" successfully split from "seven"
[root@hayes-02 ~]# lvs -a -o +devices
  LV             VG  Attr       LSize  Pool   Origin Data%   Devices
  vdoA           ten vwi---v--- <2.99g vpool1                vpool1(0)
  vdoB           ten vwi---v--- <2.99g vpool0                vpool0(0)
  vpool0         ten dwi-------  6.00g                       vpool0_vdata(0)
  [vpool0_vdata] ten Dwi-------  6.00g                       /dev/sdc1(0)
  vpool1         ten dwi-------  6.00g                       vpool1_vdata(0)
  [vpool1_vdata] ten Dwi-------  6.00g                       /dev/sdc1(1536)
[root@hayes-02 ~]# vgs
  VG    #PV #LV #SN Attr   VSize  VFree 
  seven   2   0   0 wz--n- <3.64t <3.64t
  ten     1   4   0 wz--n- <1.82t <1.81t


Version-Release number of selected component (if applicable):
4.18.0-57.el8.x86_64

kernel-4.18.0-57.el8    BUILT: Tue Dec 18 09:30:11 CST 2018
lvm2-2.03.02-2.el8    BUILT: Fri Jan  4 03:49:30 CST 2019
lvm2-libs-2.03.02-2.el8    BUILT: Fri Jan  4 03:49:30 CST 2019
lvm2-dbusd-2.03.02-2.el8    BUILT: Fri Jan  4 03:51:41 CST 2019
lvm2-lockd-2.03.02-2.el8    BUILT: Fri Jan  4 03:49:30 CST 2019
boom-boot-0.9-5.el8    BUILT: Wed Sep 19 16:56:59 CDT 2018
cmirror-2.03.02-2.el8    BUILT: Fri Jan  4 03:49:30 CST 2019
device-mapper-1.02.155-2.el8    BUILT: Fri Jan  4 03:49:30 CST 2019
device-mapper-libs-1.02.155-2.el8    BUILT: Fri Jan  4 03:49:30 CST 2019
device-mapper-event-1.02.155-2.el8    BUILT: Fri Jan  4 03:49:30 CST 2019
device-mapper-event-libs-1.02.155-2.el8    BUILT: Fri Jan  4 03:49:30 CST 2019
device-mapper-persistent-data-0.7.6-1.el8    BUILT: Sun Aug 12 04:21:55 CDT 2018
sanlock-3.6.0-4.el8    BUILT: Thu Oct  4 12:10:37 CDT 2018
sanlock-lib-3.6.0-4.el8    BUILT: Thu Oct  4 12:10:37 CDT 2018
vdo-6.2.0.293-10.el8    BUILT: Fri Dec 14 18:18:47 CST 2018
kmod-kvdo-6.2.0.293-40.el8    BUILT: Wed Dec 19 10:06:09 CST 2018


How reproducible:
Every time

Comment 1 Corey Marthaler 2019-01-07 21:12:05 UTC
Another data point: A similar scenario w/ multiple thin pools on different PVs passes.

SCENARIO - [split_pv_containing_single_thinpool_lv_on_vg_w_multiple]
Split out a PV containing a single thinpool LV, yet in VG containing multiple
create first thinpool on one pv
lvcreate  --thinpool thinpoolA -L 6G seven /dev/sdd1
create second thinpool on another pv
lvcreate  --thinpool thinpoolB -L 6G seven /dev/sdg1

hayes-01: vgsplit seven ten /dev/sdg1
Deactivating and removing volume groups...

Comment 3 Zdenek Kabelac 2020-10-20 21:40:32 UTC
Splitting a VG containing VDO volumes was enabled with this commit:

https://www.redhat.com/archives/lvm-devel/2020-September/msg00149.html

Comment 9 Corey Marthaler 2020-12-09 16:32:52 UTC
Fix verified in the latest rpms.

kernel-4.18.0-259.el8.dt2    BUILT: Mon Dec  7 15:20:12 CST 2020
lvm2-2.03.11-0.2.20201103git8801a86.el8    BUILT: Wed Nov  4 07:04:46 CST 2020
lvm2-libs-2.03.11-0.2.20201103git8801a86.el8    BUILT: Wed Nov  4 07:04:46 CST 2020


[root@hayes-02 ~]# lvcreate  --type vdo -n vdoB -L 6G seven /dev/sdc1
    Logical blocks defaulted to 523108 blocks.
    The VDO volume can address 2 GB in 1 data slab.
    It can grow to address at most 16 TB of physical storage in 8192 slabs.
    If a larger maximum size might be needed, use bigger slabs.
  Logical volume "vdoB" created.
[root@hayes-02 ~]# lvcreate  --type vdo -n vdoA -L 6G seven /dev/sdd1
    Logical blocks defaulted to 523108 blocks.
    The VDO volume can address 2 GB in 1 data slab.
    It can grow to address at most 16 TB of physical storage in 8192 slabs.
    If a larger maximum size might be needed, use bigger slabs.
  Logical volume "vdoA" created.

[root@hayes-02 ~]# lvs -a -o +devices
  LV             VG    Attr       LSize Pool   Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices        
  vdoA           seven vwi-a-v--- 1.99g vpool1        0.00                                    vpool1(0)      
  vdoB           seven vwi-a-v--- 1.99g vpool0        0.00                                    vpool0(0)      
  vpool0         seven dwi------- 6.00g               66.69                                   vpool0_vdata(0)
  [vpool0_vdata] seven Dwi-ao---- 6.00g                                                       /dev/sdc1(0)   
  vpool1         seven dwi------- 6.00g               66.69                                   vpool1_vdata(0)
  [vpool1_vdata] seven Dwi-ao---- 6.00g                                                       /dev/sdd1(0)   

[root@hayes-02 ~]#  dmsetup ls
seven-vpool1-vpool      (253:4)
seven-vpool1_vdata      (253:3)
seven-vdoB      (253:2)
seven-vdoA      (253:5)
seven-vpool0-vpool      (253:1)
seven-vpool0_vdata      (253:0)

[root@hayes-02 ~]# lvchange -an seven
[root@hayes-02 ~]# vgsplit seven ten /dev/sdc1
  New volume group "ten" successfully split from "seven"
[root@hayes-02 ~]# vgsplit seven ten /dev/sdd1
  Existing volume group "ten" successfully split from "seven"

[root@hayes-02 ~]# lvs -a -o +devices,segtype
  LV             VG  Attr       LSize Pool   Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices         Type    
  vdoA           ten vwi---v--- 1.99g vpool1                                                vpool1(0)       vdo     
  vdoB           ten vwi---v--- 1.99g vpool0                                                vpool0(0)       vdo     
  vpool0         ten dwi------- 6.00g                                                       vpool0_vdata(0) vdo-pool
  [vpool0_vdata] ten Dwi------- 6.00g                                                       /dev/sdc1(0)    linear  
  vpool1         ten dwi------- 6.00g                                                       vpool1_vdata(0) vdo-pool
  [vpool1_vdata] ten Dwi------- 6.00g                                                       /dev/sdd1(0)    linear

Comment 10 Roman Bednář 2020-12-15 10:17:55 UTC
Verified with 8.4 nightly.

Test result: http://cqe-live.cluster-qe.lab.eng.brq.redhat.com/clusterqe/detail/c4b3fae2-a6eb-4e36-9bea-395083a87f02

kernel-4.18.0-260.el8.x86_64
lvm2-2.03.11-0.3.20201210git9fe7aba.el8.x86_64


# lvcreate  --type vdo -n vdoB -L 6G vg /dev/sdc
    Logical blocks defaulted to 523108 blocks.
    The VDO volume can address 2 GB in 1 data slab.
    It can grow to address at most 16 TB of physical storage in 8192 slabs.
    If a larger maximum size might be needed, use bigger slabs.
  Logical volume "vdoB" created.

# lvcreate  --type vdo -n vdoA -L 6G vg /dev/sdd
    Logical blocks defaulted to 523108 blocks.
    The VDO volume can address 2 GB in 1 data slab.
    It can grow to address at most 16 TB of physical storage in 8192 slabs.
    If a larger maximum size might be needed, use bigger slabs.
  Logical volume "vdoA" created.

# lvs -a -o +devices
  LV             VG            Attr       LSize   Pool   Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices
  root           rhel_virt-368 -wi-ao----  <6.20g                                                       /dev/vda2(205)
  swap           rhel_virt-368 -wi-ao---- 820.00m                                                       /dev/vda2(0)
  vdoA           vg            vwi-a-v---   1.99g vpool1        0.00                                    vpool1(0)
  vdoB           vg            vwi-a-v---   1.99g vpool0        0.00                                    vpool0(0)
  vpool0         vg            dwi-------   6.00g               66.69                                   vpool0_vdata(0)
  [vpool0_vdata] vg            Dwi-ao----   6.00g                                                       /dev/sdc(0)
  vpool1         vg            dwi-------   6.00g               66.69                                   vpool1_vdata(0)
  [vpool1_vdata] vg            Dwi-ao----   6.00g                                                       /dev/sdd(0)

# dmsetup ls
rhel_virt--368-swap	(253:1)
rhel_virt--368-root	(253:0)
vg-vpool1-vpool	(253:6)
vg-vpool1_vdata	(253:5)
vg-vdoB	(253:4)
vg-vdoA	(253:7)
vg-vpool0-vpool	(253:3)
vg-vpool0_vdata	(253:2)

# lvchange -an vg

# vgsplit vg vg2 /dev/sdc
  New volume group "vg2" successfully split from "vg"

# vgsplit vg vg2 /dev/sdd
  Existing volume group "vg2" successfully split from "vg"

# lvs -a -o +devices,segtype
  LV             VG            Attr       LSize   Pool   Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices         Type
  root           rhel_virt-368 -wi-ao----  <6.20g                                                       /dev/vda2(205)  linear
  swap           rhel_virt-368 -wi-ao---- 820.00m                                                       /dev/vda2(0)    linear
  vdoA           vg2           vwi---v---   1.99g vpool1                                                vpool1(0)       vdo
  vdoB           vg2           vwi---v---   1.99g vpool0                                                vpool0(0)       vdo
  vpool0         vg2           dwi-------   6.00g                                                       vpool0_vdata(0) vdo-pool
  [vpool0_vdata] vg2           Dwi-------   6.00g                                                       /dev/sdc(0)     linear
  vpool1         vg2           dwi-------   6.00g                                                       vpool1_vdata(0) vdo-pool
  [vpool1_vdata] vg2           Dwi-------   6.00g                                                       /dev/sdd(0)     linear

Comment 12 errata-xmlrpc 2021-05-18 15:01:41 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (lvm2 bug fix and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:1659
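
A minimal check (my own suggestion, not part of the advisory text) that an installed system already carries the fix, based on the "Fixed In Version" field above:

# rpm -q lvm2
  (should report lvm2-2.03.11-0.2.20201103git8801a86.el8 or newer)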

