Bug 1468052
| Summary: | unable to convert thin metadata volume residing on a shared VG | | |
| --- | --- | --- | --- |
| Product: | Red Hat Enterprise Linux 7 | Reporter: | Corey Marthaler <cmarthal> |
| Component: | lvm2 | Assignee: | David Teigland <teigland> |
| lvm2 sub component: | LVM lock daemon / lvmlockd | QA Contact: | cluster-qe <cluster-qe> |
| Status: | CLOSED ERRATA | Docs Contact: | |
| Severity: | low | | |
| Priority: | unspecified | CC: | agk, cluster-qe, heinzm, jbrassow, mcsontos, prajnoha, rbednar, rink, teigland, zkabelac |
| Version: | 7.4 | | |
| Target Milestone: | rc | | |
| Target Release: | --- | | |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | lvm2-2.02.176-5.el7 | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | 1296312 | Environment: | |
| Last Closed: | 2018-04-10 15:20:44 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | 1296312 | | |
| Bug Blocks: | | | |
Comment 2
Corey Marthaler
2017-07-05 23:17:45 UTC
Fixed here: https://sourceware.org/git/?p=lvm2.git;a=commit;h=a0f6135e5c53af8bb361932ebe8fec55a1804cc9

This is a different issue than the one fixed in comment 3, and it appears that this already works:

```
# lvs gg
  LV     VG Attr       LSize
  cpool0 gg Cwi---C--- 12.00m
  tpool0 gg twi---tz-- 12.00m

# lvconvert --type cache --cachepool gg/cpool0 gg/tpool0
Do you want wipe existing metadata of cache pool gg/cpool0? [y/n]: y
  WARNING: Cached thin pool's data cannot be currently resized and require manual uncache before resize!
  Logical volume gg/tpool0_tdata is now cached.

# lvs -a gg
  LV                   VG Attr       LSize   Pool     Origin
  [cpool0]             gg Cwi---C---  12.00m
  [cpool0_cdata]       gg Cwi-------  12.00m
  [cpool0_cmeta]       gg ewi-------   8.00m
  [lvmlock]            gg -wi-ao---- 256.00m
  [lvol4_pmspare]      gg ewi-------   8.00m
  tpool0               gg twi---tz--  12.00m
  [tpool0_tdata]       gg Cwi---C---  12.00m [cpool0] [tpool0_tdata_corig]
  [tpool0_tdata_corig] gg owi---C---  12.00m
  [tpool0_tmeta]       gg ewi-------   4.00m
```

Looks like all cases/scenarios listed in this bug pass now, with the exception of the _meta volumes.
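As a reviewer's sketch (not part of the original report), the cache-on-thin-pool sequence in the transcript above can be reproduced roughly as follows. The VG name `gg` and the 12m sizes come from the log; the exact `lvcreate` flags are an assumption, and a dry-run wrapper only prints each command so the flow can be reviewed without a shared VG or lvmlockd.

```shell
# Reviewer's sketch of the flow logged above; VG "gg" and the sizes come
# from the transcript, while the lvcreate flags are assumptions. The run()
# wrapper only prints each command; drop it to execute on a scratch VG.
run() { echo "# $*"; }

run lvcreate --type thin-pool -L 12m -n tpool0 gg    # thin pool to be cached
run lvcreate --type cache-pool -L 12m -n cpool0 gg   # fast cache pool
# Attach the cache pool to the thin pool's data LV, as in the log:
run lvconvert --type cache --cachepool gg/cpool0 gg/tpool0
```

After the final step the pool's data sub-LV shows up as `tpool0_tdata` with cache attributes, matching the `lvs -a gg` output above.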
```
3.10.0-755.el7.x86_64

lvm2-2.02.175-3.el7                              BUILT: Wed Oct 25 02:03:21 CDT 2017
lvm2-libs-2.02.175-3.el7                         BUILT: Wed Oct 25 02:03:21 CDT 2017
lvm2-cluster-2.02.175-3.el7                      BUILT: Wed Oct 25 02:03:21 CDT 2017
device-mapper-1.02.144-3.el7                     BUILT: Wed Oct 25 02:03:21 CDT 2017
device-mapper-libs-1.02.144-3.el7                BUILT: Wed Oct 25 02:03:21 CDT 2017
device-mapper-event-1.02.144-3.el7               BUILT: Wed Oct 25 02:03:21 CDT 2017
device-mapper-event-libs-1.02.144-3.el7          BUILT: Wed Oct 25 02:03:21 CDT 2017
device-mapper-persistent-data-0.7.0-0.1.rc6.el7  BUILT: Mon Mar 27 10:15:46 CDT 2017
cmirror-2.02.175-3.el7                           BUILT: Wed Oct 25 02:03:21 CDT 2017
sanlock-3.5.0-1.el7                              BUILT: Wed Apr 26 09:37:30 CDT 2017
sanlock-lib-3.5.0-1.el7                          BUILT: Wed Apr 26 09:37:30 CDT 2017
lvm2-lockd-2.02.175-3.el7                        BUILT: Wed Oct 25 02:03:21 CDT 2017
```

```
[root@host-040 ~]# lvs -a -o +devices
  LV              VG            Attr       LSize Pool Origin Data%  Meta%  Devices
  POOL            snapper_thinp twi-aot--- 4.00g             0.00   1.86   POOL_tdata(0)
  [POOL_tdata]    snapper_thinp Twi-ao---- 4.00g                           /dev/sdf1(1)
  [POOL_tmeta]    snapper_thinp ewi-ao---- 4.00m                           /dev/sdb1(0)
  [lvol0_pmspare] snapper_thinp ewi------- 4.00m                           /dev/sdf1(0)
  origin          snapper_thinp Vwi-a-t--- 1.00g POOL        0.00
  other1          snapper_thinp Vwi-a-t--- 1.00g POOL        0.00
  other2          snapper_thinp Vwi-a-t--- 1.00g POOL        0.00
  other3          snapper_thinp Vwi-a-t--- 1.00g POOL        0.00
  other4          snapper_thinp Vwi-a-t--- 1.00g POOL        0.00
  other5          snapper_thinp Vwi-a-t--- 1.00g POOL        0.00
  pool_convert    snapper_thinp Vwi-a-t--- 1.00g POOL origin 0.00

[root@host-040 ~]# lvconvert --yes --type mirror -m 1 snapper_thinp/POOL
  Operation not permitted on LV snapper_thinp/POOL type thinpool.

[root@host-040 ~]# lvconvert --yes --type mirror -m 1 snapper_thinp/POOL_tdata
  Mirror segment type cannot be used for thinpools. Try "raid1" segment type instead.

[root@host-040 ~]# lvconvert --yes --type raid1 -m 1 snapper_thinp/POOL_tdata
  Logical volume snapper_thinp/POOL_tdata successfully converted.
```
```
[root@host-040 ~]# lvconvert --yes --type mirror -m 1 snapper_thinp/POOL_tmeta
  Lock on incorrect thin lv type snapper_thinp/POOL_tmeta

[root@host-040 ~]# lvconvert --yes --type raid1 -m 1 snapper_thinp/POOL_tmeta
  Lock on incorrect thin lv type snapper_thinp/POOL_tmeta
```

Fixed here: https://sourceware.org/git/?p=lvm2.git;a=commit;h=b910c34f09f45987fe56f0e90455a166e047144c

```
# lvconvert --type raid1 -m 1 cc/pool0_tmeta
Are you sure you want to convert linear LV cc/pool0_tmeta to raid1 with 2 images enhancing resilience? [y/n]: n
  Logical volume cc/pool0_tmeta NOT converted.
```

All the checks appear to pass now without any "Lock on incorrect" errors. Marking verified in the latest rpms.

```
3.10.0-826.el7.x86_64

lvm2-2.02.176-5.el7                        BUILT: Wed Dec 6 04:13:07 CST 2017
lvm2-libs-2.02.176-5.el7                   BUILT: Wed Dec 6 04:13:07 CST 2017
lvm2-cluster-2.02.176-5.el7                BUILT: Wed Dec 6 04:13:07 CST 2017
lvm2-lockd-2.02.176-5.el7                  BUILT: Wed Dec 6 04:13:07 CST 2017
lvm2-python-boom-0.8.1-5.el7               BUILT: Wed Dec 6 04:15:40 CST 2017
cmirror-2.02.176-5.el7                     BUILT: Wed Dec 6 04:13:07 CST 2017
device-mapper-1.02.145-5.el7               BUILT: Wed Dec 6 04:13:07 CST 2017
device-mapper-libs-1.02.145-5.el7          BUILT: Wed Dec 6 04:13:07 CST 2017
device-mapper-event-1.02.145-5.el7         BUILT: Wed Dec 6 04:13:07 CST 2017
device-mapper-event-libs-1.02.145-5.el7    BUILT: Wed Dec 6 04:13:07 CST 2017
device-mapper-persistent-data-0.7.3-3.el7  BUILT: Tue Nov 14 05:07:18 CST 2017
sanlock-3.6.0-1.el7                        BUILT: Tue Dec 5 11:47:21 CST 2017
sanlock-lib-3.6.0-1.el7                    BUILT: Tue Dec 5 11:47:21 CST 2017
vdo-6.1.0.106-13                           BUILT: Thu Dec 21 09:00:07 CST 2017
kmod-kvdo-6.1.0.106-11.el7                 BUILT: Thu Dec 21 10:09:12 CST 2017
```

SCENARIO - [attempt_thinpool_to_cache_conversion]

```
Create thin pool volumes and attempt to use them in cache pool/origin conversion

lvcreate --activate ey -n data_linear -L 500M cache_sanity /dev/mapper/mpathf1
lvcreate --activate ey -n meta_linear -L 500M cache_sanity /dev/mapper/mpathb1
lvcreate --activate ey --thinpool origin_thin -L 500M --poolmetadatasize 100M cache_sanity /dev/mapper/mpathf1
lvcreate --activate ey --thinpool data_thin -L 500M --poolmetadatasize 100M cache_sanity /dev/mapper/mpathb1
lvcreate --activate ey --thinpool meta_thin -L 500M --poolmetadatasize 100M cache_sanity /dev/mapper/mpathd1

Check 1: Attempt to create cache pool volume by combining data_thin (data) and meta_thin (meta) volume
lvconvert --yes --type cache-pool --poolmetadata cache_sanity/meta_thin cache_sanity/data_thin
  Command on LV cache_sanity/data_thin does not accept LV type thinpool.
  Command not permitted on LV cache_sanity/data_thin.

Check 2: Attempt to create cache pool volume by combining data_thin (data) and meta_linear (meta) volume
lvconvert --yes --type cache-pool --poolmetadata cache_sanity/meta_linear cache_sanity/data_thin
  Command on LV cache_sanity/data_thin does not accept LV type thinpool.
  Command not permitted on LV cache_sanity/data_thin.

Check 3: Attempt to create cache pool volume by combining data_linear (data) and meta_thin (meta) volume
lvconvert --yes --type cache-pool --poolmetadata cache_sanity/meta_thin cache_sanity/data_linear
  Pool metadata LV cache_sanity/meta_thin is of an unsupported type.

Check 4: Attempt to create cache pool volume by combining data_linear (data) and meta_linear (meta) volume
lvconvert --yes --type cache-pool --poolmetadata cache_sanity/meta_linear cache_sanity/data_linear
  WARNING: Converting cache_sanity/data_linear and cache_sanity/meta_linear to cache pool's data and metadata volumes with metadata wiping.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)

Create cached volume by combining the cache pool (fast) and thin pool origin (slow) volumes
lvconvert --yes --type cache --cachepool cache_sanity/data_linear cache_sanity/origin_thin
  WARNING: Cached thin pool's data cannot be currently resized and require manual uncache before resize!
```
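The four checks above can be sketched as a loop over the data/meta pairings. This is a reviewer's sketch, not part of the original harness: it uses the same LV names in the assumed VG `cache_sanity`, and the commands are printed rather than executed so no PVs are needed.

```shell
# Reviewer's sketch of the Check 1-4 matrix above: every pairing of a
# thin-pool or linear LV as the cache pool's data and metadata components.
# Commands are echoed, not run; per the log, only the linear data + linear
# meta pairing is accepted, and a thin pool is refused in either role.
for data in data_thin data_linear; do
  for meta in meta_thin meta_linear; do
    echo "lvconvert --yes --type cache-pool --poolmetadata cache_sanity/$meta cache_sanity/$data"
  done
done
```

The loop emits the same four `lvconvert` invocations the harness ran, in the same order.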
SCENARIO - [pool_to_conversion]

```
Attempt to convert a thinp pool device to mirror, raid1, raid4, raid5, and raid6

Making pool volume
lvcreate --activate ey --thinpool POOL -L 4G --zero y --poolmetadatasize 4M snapper_thinp
Skipping meta check until supported with shared storage (bug 1265768)

Making origin volume
lvcreate --activate ey --virtualsize 1G -T snapper_thinp/POOL -n origin
lvcreate --activate ey --virtualsize 1G -T snapper_thinp/POOL -n other1
lvcreate --activate ey --virtualsize 1G -T snapper_thinp/POOL -n other2
lvcreate --activate ey --virtualsize 1G -T snapper_thinp/POOL -n other3
lvcreate --activate ey -V 1G -T snapper_thinp/POOL -n other4
  WARNING: Sum of all thin volume sizes (5.00 GiB) exceeds the size of thin pool snapper_thinp/POOL (4.00 GiB).
lvcreate --activate ey --virtualsize 1G -T snapper_thinp/POOL -n other5
  WARNING: Sum of all thin volume sizes (6.00 GiB) exceeds the size of thin pool snapper_thinp/POOL (4.00 GiB).

Making snapshot of origin volume
lvcreate --activate ey -y -k n -s /dev/snapper_thinp/origin -n pool_convert

Attempt mirror conversion of pool device...
lvconvert --yes --type mirror -m 1 snapper_thinp/POOL
  Operation not permitted on LV snapper_thinp/POOL type thinpool.

Attempt raid1 conversion of pool device...
lvconvert --yes --type raid1 -m 1 snapper_thinp/POOL
  Operation not permitted on LV snapper_thinp/POOL type thinpool.

Attempt mirror segtype conversion of POOL_tdata device...
lvconvert --yes --type mirror -m 1 snapper_thinp/POOL_tdata
  Mirror segment type cannot be used for thinpools. Try "raid1" segment type instead.

Attempt raid1 segtype conversion of POOL_tdata device...
lvconvert --yes --type raid1 -m 1 snapper_thinp/POOL_tdata

Waiting until all mirror|raid volumes become fully syncd...
```
```
  0/1 mirror(s) are fully synced: ( 24.15% )
  0/1 mirror(s) are fully synced: ( 53.79% )
  0/1 mirror(s) are fully synced: ( 93.57% )
  1/1 mirror(s) are fully synced: ( 100.00% )
Sleeping 15 sec

Regression check for 1347048: verify raid type is set correctly for converted snapper_thinp/POOL_tdata device.
LV segtype is correctly set to raid for POOL_tdata device.
Device mapper table check for raid target passed for device POOL_tdata.

Attempt raid1 segtype conversion of POOL_tdata device...
lvconvert --yes --type raid1 -m 0 snapper_thinp/POOL_tdata

Attempt mirror segtype conversion of POOL_tmeta device...
lvconvert --yes --type mirror -m 1 snapper_thinp/POOL_tmeta
  Mirror segment type cannot be used for thinpool metadata. Try "raid1" segment type instead.

Attempt raid1 segtype conversion of POOL_tmeta device...
lvconvert --yes --type raid1 -m 1 snapper_thinp/POOL_tmeta

Waiting until all mirror|raid volumes become fully syncd...
  1/1 mirror(s) are fully synced: ( 100.00% )
Sleeping 15 sec

Attempt raid1 segtype conversion of POOL_tmeta device...
lvconvert --yes --type raid1 -m 0 snapper_thinp/POOL_tmeta

Attempt raid4 conversion of pool device...
lvconvert --type raid4 -i 2 snapper_thinp/POOL
  Operation not permitted on LV snapper_thinp/POOL type thinpool.

Attempt raid5 conversion of pool device...
lvconvert --type raid5 -i 2 snapper_thinp/POOL
  Operation not permitted on LV snapper_thinp/POOL type thinpool.

Attempt raid6 conversion of pool device...
lvconvert --type raid6 -i 3 snapper_thinp/POOL
  Operation not permitted on LV snapper_thinp/POOL type thinpool.

Attempt raid0 conversion of pool device...
lvconvert --type raid0 -i 3 snapper_thinp/POOL
  Operation not permitted on LV snapper_thinp/POOL type thinpool.
```

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2018:0853
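Pulling the conversion attempts in this bug together, the observed outcomes can be encoded as a small helper. This is a reviewer's summary of the logged behaviour only, not of lvm2 internals; the `allowed` function name and the role labels are hypothetical.

```shell
# Reviewer's summary sketch of the conversion results logged above:
# raid1 was accepted on the pool's _tdata and _tmeta sub-LVs, while mirror
# and any segtype conversion of the thin pool top-level LV were refused.
# This encodes the transcript's outcomes, not lvm2's actual decision logic.
allowed() {  # usage: allowed <target-segtype> <lv-role: pool|tdata|tmeta>
  case "$2:$1" in
    tdata:raid1|tmeta:raid1) echo yes ;;
    *) echo no ;;
  esac
}

allowed raid1 tdata    # -> yes (converted, then synced)
allowed mirror tmeta   # -> no  ('Try "raid1" segment type instead')
allowed raid6 pool     # -> no  ('Operation not permitted ... type thinpool')
```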