Red Hat Bugzilla – Bug 1242671
if given a pv, shouldn't 'lvconvert --repair' place the new meta data device on it
Last modified: 2018-04-10 11:17:48 EDT
Description of problem:

[root@host-110 ~]# lvs -a -o +devices
  LV               Attr       LSize Pool Origin Data%  Meta%  Devices
  POOL             twi---tz-- 1.00g                           POOL_tdata(0)
  POOL_meta0       -wi------- 4.00m                           /dev/sdc1(0)
  POOL_meta1       -wi------- 4.00m                           /dev/sdc1(257)
  POOL_meta2       -wi------- 4.00m                           /dev/sdc1(258)
  POOL_meta3       -wi------- 4.00m                           /dev/sdc1(259)
  POOL_meta4       -wi------- 4.00m                           /dev/sdc1(260)
  POOL_meta5       -wi------- 4.00m                           /dev/sdc1(261)
  [POOL_tdata]     Twi------- 1.00g                           /dev/sdc1(1)
  [POOL_tmeta]     ewi------- 4.00m                           /dev/sdc1(262)
  [lvol7_pmspare]  ewi------- 4.00m                           /dev/sdc1(263)
  origin           Vwi---tz-- 1.00g POOL
  other1           Vwi---tz-- 1.00g POOL
  other2           Vwi---tz-- 1.00g POOL
  other3           Vwi---tz-- 1.00g POOL
  other4           Vwi---tz-- 1.00g POOL
  other5           Vwi---tz-- 1.00g POOL
  snap             Vwi---tz-k 1.00g POOL origin

[root@host-110 ~]# lvconvert --yes --repair snapper_thinp/POOL /dev/sdd1
  WARNING: Sum of all thin volume sizes (7.00 GiB) exceeds the size of thin pools (1.00 GiB)!
  For thin pool auto extension activation/thin_pool_autoextend_threshold should be below 100.
  WARNING: If everything works, remove "snapper_thinp/POOL_meta6".
  WARNING: Use pvmove command to move "snapper_thinp/POOL_tmeta" on the best fitting PV.

[root@host-110 ~]# lvs -a -o +devices
  LV               Attr       LSize Pool Origin Data%  Meta%  Devices
  POOL             twi---tz-- 1.00g                           POOL_tdata(0)
  POOL_meta0       -wi------- 4.00m                           /dev/sdc1(0)
  POOL_meta1       -wi------- 4.00m                           /dev/sdc1(257)
  POOL_meta2       -wi------- 4.00m                           /dev/sdc1(258)
  POOL_meta3       -wi------- 4.00m                           /dev/sdc1(259)
  POOL_meta4       -wi------- 4.00m                           /dev/sdc1(260)
  POOL_meta5       -wi------- 4.00m                           /dev/sdc1(261)
  POOL_meta6       -wi------- 4.00m                           /dev/sdc1(262)
  [POOL_tdata]     Twi------- 1.00g                           /dev/sdc1(1)
  [POOL_tmeta]     ewi------- 4.00m                           /dev/sdc1(263)
  [lvol8_pmspare]  ewi------- 4.00m                           /dev/sdc1(264)
  origin           Vwi---tz-- 1.00g POOL
  other1           Vwi---tz-- 1.00g POOL
  other2           Vwi---tz-- 1.00g POOL
  other3           Vwi---tz-- 1.00g POOL
  other4           Vwi---tz-- 1.00g POOL
  other5           Vwi---tz-- 1.00g POOL
  snap             Vwi---tz-k 1.00g POOL origin

Version-Release number of selected component (if applicable):
3.10.0-290.el7.x86_64
lvm2-2.02.125-2.el7                         BUILT: Fri Jul 10 03:42:29 CDT 2015
lvm2-libs-2.02.125-2.el7                    BUILT: Fri Jul 10 03:42:29 CDT 2015
lvm2-cluster-2.02.125-2.el7                 BUILT: Fri Jul 10 03:42:29 CDT 2015
device-mapper-1.02.102-2.el7                BUILT: Fri Jul 10 03:42:29 CDT 2015
device-mapper-libs-1.02.102-2.el7           BUILT: Fri Jul 10 03:42:29 CDT 2015
device-mapper-event-1.02.102-2.el7          BUILT: Fri Jul 10 03:42:29 CDT 2015
device-mapper-event-libs-1.02.102-2.el7     BUILT: Fri Jul 10 03:42:29 CDT 2015
device-mapper-persistent-data-0.5.3-1.el7   BUILT: Tue Jul 7 08:41:42 CDT 2015
cmirror-2.02.125-2.el7                      BUILT: Fri Jul 10 03:42:29 CDT 2015
sanlock-3.2.4-1.el7                         BUILT: Fri Jun 19 12:48:49 CDT 2015
sanlock-lib-3.2.4-1.el7                     BUILT: Fri Jun 19 12:48:49 CDT 2015
lvm2-lockd-2.02.125-2.el7                   BUILT: Fri Jul 10 03:42:29 CDT 2015
This is a bit more sophisticated. With the current upstream release of lvm2 (2.02.173), 'lvconvert --repair' uses the existing _pmspare LV (which normally already exists) for the thin_repair execution; only the newly allocated replacement _pmspare is placed on the PV given on the command line. There is also an immediately visible problem with the existing command line: 'lvconvert --repair' does not support '--poolmetadataspare n' - but that is a slightly different issue.
So with the recent upstream changes (https://www.redhat.com/archives/lvm-devel/2017-September/msg00025.html), the user can pass '--poolmetadataspare n' to avoid the automatic creation of the spare device - in that case the devices created for the repair will be placed on the provided storage area. For testing this means: create the thin-pool without a spare and --repair it also without a spare, as sketched below.
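A minimal sketch of that test sequence, assuming a hypothetical VG named "vg" and a repair target PV "/dev/sdX1" (neither taken from the transcripts in this bug):

lvcreate --thinpool pool -L 1G --poolmetadataspare n vg            # create the thin-pool without a spare
lvchange -an vg/pool                                               # deactivate the pool before repairing
lvconvert --yes --repair --poolmetadataspare n vg/pool /dev/sdX1   # repair, again without creating a spare
lvs -a -o +devices vg                                              # verify the new metadata LV landed on /dev/sdX1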
Based on comment #3, there are two different scenarios here with different expectations for the specified repair device.

## Scenario 1: No pmspare volume (--poolmetadataspare n)

Here both the new _pmspare *and* the _tmeta volume will be placed on the specified --repair device.

[root@host-116 ~]# lvcreate --thinpool POOL -L 1G --zero y --poolmetadataspare n snapper_thinp
  Using default stripesize 64.00 KiB.
  Thin pool volume with chunk size 64.00 KiB can address at most 15.81 TiB of data.
  WARNING: recovery of pools without pool metadata spare LV is not automated.
  Logical volume "POOL" created.

[root@host-116 ~]# lvs -a -o +devices
  LV            VG            Attr       LSize Pool Origin Data%  Meta%  Devices
  POOL          snapper_thinp twi-a-tz-- 1.00g             0.00   0.98   POOL_tdata(0)
  [POOL_tdata]  snapper_thinp Twi-ao---- 1.00g                           /dev/sdg1(0)
  [POOL_tmeta]  snapper_thinp ewi-ao---- 4.00m                           /dev/sdh1(0)

[root@host-116 ~]# lvchange -an snapper_thinp/POOL

[root@host-116 ~]# lvconvert --yes --repair snapper_thinp/POOL /dev/sdb1
  WARNING: Disabling lvmetad cache for repair command.
  WARNING: Not using lvmetad because of repair.
  WARNING: LV snapper_thinp/POOL_meta0 holds a backup of the unrepaired metadata. Use lvremove when no longer required.
  WARNING: New metadata LV snapper_thinp/POOL_tmeta might use different PVs. Move it with pvmove if required.

[root@host-116 ~]# lvchange -ay snapper_thinp/POOL
  WARNING: Not using lvmetad because a repair command was run.

[root@host-116 ~]# lvs -a -o +devices
  WARNING: Not using lvmetad because a repair command was run.
  LV               VG            Attr       LSize Pool Origin Data%  Meta%  Devices
  POOL             snapper_thinp twi-a-tz-- 1.00g             0.00   1.07   POOL_tdata(0)
  POOL_meta0       snapper_thinp -wi------- 4.00m                           /dev/sdh1(0)
  [POOL_tdata]     snapper_thinp Twi-ao---- 1.00g                           /dev/sdg1(0)
  [POOL_tmeta]     snapper_thinp ewi-ao---- 4.00m                           /dev/sdb1(0)
  [lvol1_pmspare]  snapper_thinp ewi------- 4.00m                           /dev/sdb1(1)

## Scenario 2: Pmspare volume (default, --poolmetadatasize 4M)

Here only the new _pmspare volume will be placed on the specified --repair device; the _tmeta will not.

[root@host-116 ~]# lvcreate --thinpool POOL -L 1G --zero y --poolmetadatasize 4M snapper_thinp
  Using default stripesize 64.00 KiB.
  Thin pool volume with chunk size 64.00 KiB can address at most 15.81 TiB of data.
  Logical volume "POOL" created.

[root@host-116 ~]# lvs -a -o +devices
  LV               VG            Attr       LSize Pool Origin Data%  Meta%  Devices
  POOL             snapper_thinp twi-a-tz-- 1.00g             0.00   0.98   POOL_tdata(0)
  [POOL_tdata]     snapper_thinp Twi-ao---- 1.00g                           /dev/sdg1(1)
  [POOL_tmeta]     snapper_thinp ewi-ao---- 4.00m                           /dev/sdh1(0)
  [lvol0_pmspare]  snapper_thinp ewi------- 4.00m                           /dev/sdg1(0)

[root@host-116 ~]# lvchange -an snapper_thinp/POOL

[root@host-116 ~]# lvconvert --yes --repair snapper_thinp/POOL /dev/sdb1
  WARNING: Disabling lvmetad cache for repair command.
  WARNING: Not using lvmetad because of repair.
  WARNING: LV snapper_thinp/POOL_meta0 holds a backup of the unrepaired metadata. Use lvremove when no longer required.
  WARNING: New metadata LV snapper_thinp/POOL_tmeta might use different PVs. Move it with pvmove if required.

[root@host-116 ~]# lvchange -ay snapper_thinp/POOL
  WARNING: Not using lvmetad because a repair command was run.

[root@host-116 ~]# lvs -a -o +devices
  WARNING: Not using lvmetad because a repair command was run.
  LV               VG            Attr       LSize Pool Origin Data%  Meta%  Devices
  POOL             snapper_thinp twi-a-tz-- 1.00g             0.00   1.07   POOL_tdata(0)
  POOL_meta0       snapper_thinp -wi------- 4.00m                           /dev/sdh1(0)
  [POOL_tdata]     snapper_thinp Twi-ao---- 1.00g                           /dev/sdg1(1)
  [POOL_tmeta]     snapper_thinp ewi-ao---- 4.00m                           /dev/sdg1(0)
  [lvol1_pmspare]  snapper_thinp ewi------- 4.00m                           /dev/sdb1(0)

3.10.0-772.el7.x86_64
lvm2-2.02.176-3.el7                         BUILT: Fri Nov 10 07:12:10 CST 2017
lvm2-libs-2.02.176-3.el7                    BUILT: Fri Nov 10 07:12:10 CST 2017
lvm2-cluster-2.02.176-3.el7                 BUILT: Fri Nov 10 07:12:10 CST 2017
lvm2-lockd-2.02.176-3.el7                   BUILT: Fri Nov 10 07:12:10 CST 2017
lvm2-python-boom-0.8-3.el7                  BUILT: Fri Nov 10 07:16:45 CST 2017
cmirror-2.02.176-3.el7                      BUILT: Fri Nov 10 07:12:10 CST 2017
device-mapper-1.02.145-3.el7                BUILT: Fri Nov 10 07:12:10 CST 2017
device-mapper-libs-1.02.145-3.el7           BUILT: Fri Nov 10 07:12:10 CST 2017
device-mapper-event-1.02.145-3.el7          BUILT: Fri Nov 10 07:12:10 CST 2017
device-mapper-event-libs-1.02.145-3.el7     BUILT: Fri Nov 10 07:12:10 CST 2017
device-mapper-persistent-data-0.7.3-2.el7   BUILT: Tue Oct 10 04:00:07 CDT 2017
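For Scenario 2, the _tmeta can afterwards be relocated as the repair warning itself suggests ("Move it with pvmove if required"). A hedged sketch based on the devices shown above; whether pvmove accepts the hidden sub-LV name this way, and whether the pool must be inactive, may depend on the lvm2 version:

lvchange -an snapper_thinp/POOL              # deactivate the pool before moving its sub-LV
pvmove -n POOL_tmeta /dev/sdg1 /dev/sdb1     # move only POOL_tmeta's extents from sdg1 to the intended sdb1
lvs -a -o +devices snapper_thinp             # confirm [POOL_tmeta] now resides on /dev/sdb1
lvchange -ay snapper_thinp/POOL              # reactivate the pool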
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHEA-2018:0853