Bug 1664149
| Summary: | PVs in VGs containing multiple native VDOs cannot be split even if the VDOs are on separate devices | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 8 | Reporter: | Corey Marthaler <cmarthal> |
| Component: | lvm2 | Assignee: | Zdenek Kabelac <zkabelac> |
| lvm2 sub component: | Command-line tools | QA Contact: | cluster-qe <cluster-qe> |
| Status: | CLOSED ERRATA | Docs Contact: | |
| Severity: | low | | |
| Priority: | low | CC: | agk, awalsh, heinzm, jbrassow, mcsontos, msnitzer, pasik, prajnoha, rbednar, thornber, zkabelac |
| Version: | 8.0 | Flags: | pm-rhel: mirror+ |
| Target Milestone: | rc | | |
| Target Release: | 8.0 | | |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | lvm2-2.03.11-0.2.20201103git8801a86.el8 | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2021-05-18 15:01:41 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description (Corey Marthaler, 2019-01-07 21:10:29 UTC)
Another data point: a similar scenario with multiple thin pools on different PVs passes.

SCENARIO - [split_pv_containing_single_thinpool_lv_on_vg_w_multiple]
Split out a PV containing a single thinpool LV from a VG containing multiple thin pools:

create first thinpool on one PV
  lvcreate --thinpool thinpoolA -L 6G seven /dev/sdd1
create second thinpool on another PV
  lvcreate --thinpool thinpoolB -L 6G seven /dev/sdg1
hayes-01: vgsplit seven ten /dev/sdg1
Deactivating and removing volume groups...

Splitting a VG containing VDO volumes was enabled with this commit:
https://www.redhat.com/archives/lvm-devel/2020-September/msg00149.html

Fix verified in the latest rpms.
kernel-4.18.0-259.el8.dt2    BUILT: Mon Dec  7 15:20:12 CST 2020
lvm2-2.03.11-0.2.20201103git8801a86.el8    BUILT: Wed Nov  4 07:04:46 CST 2020
lvm2-libs-2.03.11-0.2.20201103git8801a86.el8    BUILT: Wed Nov  4 07:04:46 CST 2020
[root@hayes-02 ~]# lvcreate  --type vdo -n vdoB -L 6G seven /dev/sdc1
    Logical blocks defaulted to 523108 blocks.
    The VDO volume can address 2 GB in 1 data slab.
    It can grow to address at most 16 TB of physical storage in 8192 slabs.
    If a larger maximum size might be needed, use bigger slabs.
  Logical volume "vdoB" created.
[root@hayes-02 ~]# lvcreate  --type vdo -n vdoA -L 6G seven /dev/sdd1
    Logical blocks defaulted to 523108 blocks.
    The VDO volume can address 2 GB in 1 data slab.
    It can grow to address at most 16 TB of physical storage in 8192 slabs.
    If a larger maximum size might be needed, use bigger slabs.
  Logical volume "vdoA" created.
[root@hayes-02 ~]# lvs -a -o +devices
  LV             VG    Attr       LSize Pool   Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices        
  vdoA           seven vwi-a-v--- 1.99g vpool1        0.00                                    vpool1(0)      
  vdoB           seven vwi-a-v--- 1.99g vpool0        0.00                                    vpool0(0)      
  vpool0         seven dwi------- 6.00g               66.69                                   vpool0_vdata(0)
  [vpool0_vdata] seven Dwi-ao---- 6.00g                                                       /dev/sdc1(0)   
  vpool1         seven dwi------- 6.00g               66.69                                   vpool1_vdata(0)
  [vpool1_vdata] seven Dwi-ao---- 6.00g                                                       /dev/sdd1(0)   
[root@hayes-02 ~]#  dmsetup ls
seven-vpool1-vpool      (253:4)
seven-vpool1_vdata      (253:3)
seven-vdoB      (253:2)
seven-vdoA      (253:5)
seven-vpool0-vpool      (253:1)
seven-vpool0_vdata      (253:0)
[root@hayes-02 ~]# lvchange -an seven
[root@hayes-02 ~]# vgsplit seven ten /dev/sdc1
  New volume group "ten" successfully split from "seven"
[root@hayes-02 ~]# vgsplit seven ten /dev/sdd1
  Existing volume group "ten" successfully split from "seven"
[root@hayes-02 ~]# lvs -a -o +devices,segtype
  LV             VG  Attr       LSize Pool   Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices         Type    
  vdoA           ten vwi---v--- 1.99g vpool1                                                vpool1(0)       vdo     
  vdoB           ten vwi---v--- 1.99g vpool0                                                vpool0(0)       vdo     
  vpool0         ten dwi------- 6.00g                                                       vpool0_vdata(0) vdo-pool
  [vpool0_vdata] ten Dwi------- 6.00g                                                       /dev/sdc1(0)    linear  
  vpool1         ten dwi------- 6.00g                                                       vpool1_vdata(0) vdo-pool
  [vpool1_vdata] ten Dwi------- 6.00g                                                       /dev/sdd1(0)    linear
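
The transcript above stops at the split itself, with the LVs still inactive. A minimal follow-up sketch (not part of the original log; the VG name "ten" and the lvs option list are taken from the transcript above) to reactivate the split-out VG and confirm the VDO stacks moved intact:

# vgchange -ay ten
# lvs -a -o +devices,segtype ten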
Verified with 8.4 nightly. Test result: http://cqe-live.cluster-qe.lab.eng.brq.redhat.com/clusterqe/detail/c4b3fae2-a6eb-4e36-9bea-395083a87f02

kernel-4.18.0-260.el8.x86_64
lvm2-2.03.11-0.3.20201210git9fe7aba.el8.x86_64

# lvcreate --type vdo -n vdoB -L 6G vg /dev/sdc
    Logical blocks defaulted to 523108 blocks.
    The VDO volume can address 2 GB in 1 data slab.
    It can grow to address at most 16 TB of physical storage in 8192 slabs.
    If a larger maximum size might be needed, use bigger slabs.
  Logical volume "vdoB" created.
# lvcreate --type vdo -n vdoA -L 6G vg /dev/sdd
    Logical blocks defaulted to 523108 blocks.
    The VDO volume can address 2 GB in 1 data slab.
    It can grow to address at most 16 TB of physical storage in 8192 slabs.
    If a larger maximum size might be needed, use bigger slabs.
  Logical volume "vdoA" created.
# lvs -a -o +devices
  LV             VG            Attr       LSize   Pool   Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices
  root           rhel_virt-368 -wi-ao----  <6.20g                                                       /dev/vda2(205)
  swap           rhel_virt-368 -wi-ao---- 820.00m                                                       /dev/vda2(0)
  vdoA           vg            vwi-a-v---   1.99g vpool1        0.00                                    vpool1(0)
  vdoB           vg            vwi-a-v---   1.99g vpool0        0.00                                    vpool0(0)
  vpool0         vg            dwi-------   6.00g               66.69                                   vpool0_vdata(0)
  [vpool0_vdata] vg            Dwi-ao----   6.00g                                                       /dev/sdc(0)
  vpool1         vg            dwi-------   6.00g               66.69                                   vpool1_vdata(0)
  [vpool1_vdata] vg            Dwi-ao----   6.00g                                                       /dev/sdd(0)
# dmsetup ls
rhel_virt--368-swap     (253:1)
rhel_virt--368-root     (253:0)
vg-vpool1-vpool (253:6)
vg-vpool1_vdata (253:5)
vg-vdoB (253:4)
vg-vdoA (253:7)
vg-vpool0-vpool (253:3)
vg-vpool0_vdata (253:2)
# lvchange -an vg
# vgsplit vg vg2 /dev/sdc
  New volume group "vg2" successfully split from "vg"
# vgsplit vg vg2 /dev/sdd
  Existing volume group "vg2" successfully split from "vg"
# lvs -a -o +devices,segtype
  LV             VG            Attr       LSize   Pool   Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices         Type
  root           rhel_virt-368 -wi-ao----  <6.20g                                                       /dev/vda2(205)  linear
  swap           rhel_virt-368 -wi-ao---- 820.00m                                                       /dev/vda2(0)    linear
  vdoA           vg2           vwi---v---   1.99g vpool1                                                vpool1(0)       vdo
  vdoB           vg2           vwi---v---   1.99g vpool0                                                vpool0(0)       vdo
  vpool0         vg2           dwi-------   6.00g                                                       vpool0_vdata(0) vdo-pool
  [vpool0_vdata] vg2           Dwi-------   6.00g                                                       /dev/sdc(0)     linear
  vpool1         vg2           dwi-------   6.00g                                                       vpool1_vdata(0) vdo-pool
  [vpool1_vdata] vg2           Dwi-------   6.00g                                                       /dev/sdd(0)     linear

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (lvm2 bug fix and enhancement update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:1659
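
Pulling the verified steps together, a condensed end-to-end reproduction sketch (assembled from the transcripts above, not an original test script; the pvcreate/vgcreate setup lines are assumptions, since the logs start from an existing VG, and /dev/sdc and /dev/sdd stand in for any two free disks):

#!/bin/sh
# Sketch of the verified scenario: two native VDO volumes, one per PV,
# split out of one VG into another.
pvcreate /dev/sdc /dev/sdd        # assumed setup, not in the original log
vgcreate vg /dev/sdc /dev/sdd     # assumed setup, not in the original log

lvcreate --type vdo -n vdoB -L 6G vg /dev/sdc   # VDO pool + volume on sdc
lvcreate --type vdo -n vdoA -L 6G vg /dev/sdd   # VDO pool + volume on sdd

lvchange -an vg                   # vgsplit requires the moved LVs to be inactive

vgsplit vg vg2 /dev/sdc           # first call creates the new VG "vg2"
vgsplit vg vg2 /dev/sdd           # second call moves into the existing "vg2"

lvs -a -o +devices,segtype vg2    # both vdo-pool stacks should now be in vg2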