This was disabled because the activation tree code is not yet sophisticated enough to handle this sort of device stack.

+++ This bug was initially created as a clone of Bug #1365286 +++

Description of problem:

# Down conversion works fine w/o virt LVs
[root@host-078 ~]# lvcreate --thinpool POOL -L 4G --zero n --poolmetadatasize 4M test
  Logical volume "POOL" created.

[root@host-078 ~]# lvs -a -o +devices
  LV              VG   Attr       LSize Pool Origin Data%  Meta%  Devices
  POOL            test twi-a-t--- 4.00g             0.00   1.27   POOL_tdata(0)
  [POOL_tdata]    test Twi-ao---- 4.00g                           /dev/sda1(1)
  [POOL_tmeta]    test ewi-ao---- 4.00m                           /dev/sdh1(0)
  [lvol0_pmspare] test ewi------- 4.00m                           /dev/sda1(0)

[root@host-078 ~]# lvconvert --type raid1 -m 1 test/POOL_tdata
[root@host-078 ~]# lvconvert --type raid1 -m 0 test/POOL_tdata

[root@host-078 ~]# lvs -a -o +devices
  LV              VG   Attr       LSize Pool Origin Data%  Meta%  Devices
  POOL            test twi-a-t--- 4.00g             0.00   1.27   POOL_tdata(0)
  [POOL_tdata]    test Twi-ao---- 4.00g                           /dev/sda1(1)
  [POOL_tmeta]    test ewi-ao---- 4.00m                           /dev/sdh1(0)
  [lvol0_pmspare] test ewi------- 4.00m                           /dev/sda1(0)

# This time create virt LVs
[root@host-078 ~]# lvcreate --thinpool POOL -L 4G --zero n --poolmetadatasize 4M test
  Logical volume "POOL" created.

[root@host-078 ~]# lvcreate --virtualsize 1G -T test/POOL -n origin
  Logical volume "origin" created.

[root@host-078 ~]# lvcreate -k n -s /dev/test/origin -n pool_convert
  Logical volume "pool_convert" created.

[root@host-078 ~]# lvconvert --type raid1 -m 1 test/POOL_tdata
[root@host-078 ~]# lvconvert --type raid1 -m 0 test/POOL_tdata
  Internal error: Performing unsafe table load while 1 device(s) are known to be suspended: (253:8)
  Internal error: Performing unsafe table load while 1 device(s) are known to be suspended: (253:9)
  Internal error: Performing unsafe table load while 1 device(s) are known to be suspended: (253:10)
  Internal error: Performing unsafe table load while 1 device(s) are known to be suspended: (253:11)

[root@host-078 ~]# lvs -a -o +devices
  LV              VG   Attr       LSize Pool Origin Data%  Meta%  Devices
  POOL            test twi-aot--- 4.00g             0.00   1.37   POOL_tdata(0)
  [POOL_tdata]    test Twi-ao---- 4.00g                           /dev/sda1(1)
  [POOL_tmeta]    test ewi-ao---- 4.00m                           /dev/sdh1(0)
  [lvol0_pmspare] test ewi------- 4.00m                           /dev/sda1(0)
  origin          test Vwi-a-t--- 1.00g POOL        0.01
  pool_convert    test Vwi-a-t--- 1.00g POOL origin 0.01

# Same thing w/ RAID10
[root@host-078 ~]# lvconvert --type raid10 -m 1 test/POOL_tdata
[root@host-078 ~]# lvconvert -m 0 test/POOL_tdata
  Internal error: Performing unsafe table load while 1 device(s) are known to be suspended: (253:8)
  Internal error: Performing unsafe table load while 1 device(s) are known to be suspended: (253:9)
  Internal error: Performing unsafe table load while 1 device(s) are known to be suspended: (253:10)
  Internal error: Performing unsafe table load while 1 device(s) are known to be suspended: (253:11)
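The 'unsafe table load' errors identify the suspended devices only by major:minor pair (253:8 through 253:11). A hedged diagnostic sketch, not part of the original report, for mapping those minors back to LV names and confirming their suspend state:

  # List every device-mapper device with its (major, minor) pair,
  # e.g. "test-POOL_tdata  (253:8)", to match the numbers in the errors.
  dmsetup ls

  # Per-device detail; a suspended device reports "State: SUSPENDED".
  # "test-POOL_tdata" is the kernel name for the hidden POOL_tdata sub-LV.
  dmsetup info test-POOL_tdata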
Version-Release number of selected component (if applicable):
3.10.0-480.el7.x86_64

lvm2-2.02.161-3.el7                        BUILT: Thu Jul 28 09:31:24 CDT 2016
lvm2-libs-2.02.161-3.el7                   BUILT: Thu Jul 28 09:31:24 CDT 2016
lvm2-cluster-2.02.161-3.el7                BUILT: Thu Jul 28 09:31:24 CDT 2016
device-mapper-1.02.131-3.el7               BUILT: Thu Jul 28 09:31:24 CDT 2016
device-mapper-libs-1.02.131-3.el7          BUILT: Thu Jul 28 09:31:24 CDT 2016
device-mapper-event-1.02.131-3.el7         BUILT: Thu Jul 28 09:31:24 CDT 2016
device-mapper-event-libs-1.02.131-3.el7    BUILT: Thu Jul 28 09:31:24 CDT 2016
device-mapper-persistent-data-0.6.3-1.el7  BUILT: Fri Jul 22 05:29:13 CDT 2016
cmirror-2.02.161-3.el7                     BUILT: Thu Jul 28 09:31:24 CDT 2016
sanlock-3.4.0-1.el7                        BUILT: Fri Jun 10 11:41:03 CDT 2016
sanlock-lib-3.4.0-1.el7                    BUILT: Fri Jun 10 11:41:03 CDT 2016
lvm2-lockd-2.02.161-3.el7                  BUILT: Thu Jul 28 09:31:24 CDT 2016

How reproducible:
Every time

--- Additional comment from Alasdair Kergon on 2016-09-20 16:03:58 BST ---

Also getting 'Number of segments in active LV vg99/pool_tdata does not match metadata.'

--- Additional comment from Alasdair Kergon on 2016-09-20 16:12:16 BST ---

The second (snapshot) LV is unnecessary.

--- Additional comment from Alasdair Kergon on 2016-09-22 14:43:20 BST ---

The primary cause is that:

  lvconvert --type raid10 -m 1 test/POOL_tdata

is not actually completing the conversion and leaves the on-disk metadata inconsistent with what's live in the kernel. A further lvchange --refresh is required to make it work. A hedged workaround sketch follows the comments below.

Additionally, if LVs are inactive, we see messages such as:

  Unable to determine sync status of vg99/lvol2.

and the code proceeds regardless.

--- Additional comment from Alasdair Kergon on 2016-09-27 16:04:14 BST ---

We are missing some code in _add_lv_to_dtree to make sure that the underlying raid devices get added to the dtree when they are present in the metadata but not in the kernel. (It walks through and skips them.)
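Based on the 2016-09-22 comment above, a hedged workaround sketch; that the refresh should target the pool LV rather than the tdata sub-LV directly is an assumption here:

  # Up-convert the thin pool's data sub-LV to raid1.
  lvconvert --type raid1 -m 1 test/POOL_tdata

  # Assumption: reload the kernel tables from the on-disk metadata so the
  # conversion actually completes before attempting the down-convert.
  lvchange --refresh test/POOL

  # The down-convert should now proceed without unsafe table loads.
  lvconvert --type raid1 -m 0 test/POOL_tdata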
Likely fixed in 2018 during the 2.03 development shift, and in 2.02.182.
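A minimal verification sketch, assuming the fix is present in lvm2 2.02.182 and the 2.03 series: repeat the failing sequence from the description on a fixed build and confirm no 'unsafe table load' errors appear:

  # Confirm the installed lvm2 version is >= 2.02.182 (or a 2.03.x release).
  lvm version

  # Re-run the reproducer with the virtual LVs (origin, pool_convert) present.
  lvconvert --type raid1 -m 1 test/POOL_tdata
  lvconvert --type raid1 -m 0 test/POOL_tdata

  # POOL_tdata should be back to a plain linear layout with no errors logged.
  lvs -a -o +devices test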