Description of problem:
Does it make sense for raid (or other) volumes created with a zero flag, and intended to be converted to thin pools, to retain their zero flag attribute? Or does that flag need to be present on the actual lvconvert command line, since this flag only applies to pool volumes? Feel free to close if this is expected behavior.

### zero flag at raid create, but *not* at conversion to pool volume

[root@host-076 ~]# lvcreate --type raid1 -m 1 --zero n -L 4M -n meta snapper_thinp
  WARNING: Logical volume snapper_thinp/meta not zeroed.
  Logical volume "meta" created.

[root@host-076 ~]# lvcreate --type raid1 -m 1 --zero n -L 1G -n POOL snapper_thinp
  WARNING: Logical volume snapper_thinp/POOL not zeroed.
  Logical volume "POOL" created.

[root@host-076 ~]# lvs -a -o +devices
  LV              VG            Attr       LSize Cpy%Sync Devices
  POOL            snapper_thinp rwi-a-r--- 1.00g 36.72    POOL_rimage_0(0),POOL_rimage_1(0)
  [POOL_rimage_0] snapper_thinp Iwi-aor--- 1.00g          /dev/sdd1(3)
  [POOL_rimage_1] snapper_thinp Iwi-aor--- 1.00g          /dev/sdc1(3)
  [POOL_rmeta_0]  snapper_thinp ewi-aor--- 4.00m          /dev/sdd1(2)
  [POOL_rmeta_1]  snapper_thinp ewi-aor--- 4.00m          /dev/sdc1(2)
  meta            snapper_thinp rwi-a-r--- 4.00m 100.00   meta_rimage_0(0),meta_rimage_1(0)
  [meta_rimage_0] snapper_thinp iwi-aor--- 4.00m          /dev/sdd1(1)
  [meta_rimage_1] snapper_thinp iwi-aor--- 4.00m          /dev/sdc1(1)
  [meta_rmeta_0]  snapper_thinp ewi-aor--- 4.00m          /dev/sdd1(0)
  [meta_rmeta_1]  snapper_thinp ewi-aor--- 4.00m          /dev/sdc1(0)

[root@host-076 ~]# lvs --noheadings -o zero --select 'lvname=POOL'
  unknown

### No zero flag with the convert

[root@host-076 ~]# lvconvert --thinpool snapper_thinp/POOL --poolmetadata meta --yes
  WARNING: Converting logical volume snapper_thinp/POOL and snapper_thinp/meta to pool's data and metadata volumes.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
  Converted snapper_thinp/POOL to thin pool.
[root@host-076 ~]# lvs --noheadings -o zero --select 'lvname=POOL'
  zero

[root@host-076 ~]# lvs -a -o +devices
  LV                    VG            Attr       LSize Pool Origin Data%  Meta%  Cpy%Sync Devices
  POOL                  snapper_thinp twi-a-tz-- 1.00g             0.00   0.98            POOL_tdata(0)
  [POOL_tdata]          snapper_thinp rwi-aor--- 1.00g                           100.00   POOL_tdata_rimage_0(0),POOL_tdata_rimage_1(0)
  [POOL_tdata_rimage_0] snapper_thinp iwi-aor--- 1.00g                                    /dev/sdd1(3)
  [POOL_tdata_rimage_1] snapper_thinp iwi-aor--- 1.00g                                    /dev/sdc1(3)
  [POOL_tdata_rmeta_0]  snapper_thinp ewi-aor--- 4.00m                                    /dev/sdd1(2)
  [POOL_tdata_rmeta_1]  snapper_thinp ewi-aor--- 4.00m                                    /dev/sdc1(2)
  [POOL_tmeta]          snapper_thinp ewi-aor--- 4.00m                           100.00   POOL_tmeta_rimage_0(0),POOL_tmeta_rimage_1(0)
  [POOL_tmeta_rimage_0] snapper_thinp iwi-aor--- 4.00m                                    /dev/sdd1(1)
  [POOL_tmeta_rimage_1] snapper_thinp iwi-aor--- 4.00m                                    /dev/sdc1(1)
  [POOL_tmeta_rmeta_0]  snapper_thinp ewi-aor--- 4.00m                                    /dev/sdd1(0)
  [POOL_tmeta_rmeta_1]  snapper_thinp ewi-aor--- 4.00m                                    /dev/sdc1(0)
  [lvol0_pmspare]       snapper_thinp ewi------- 4.00m                                    /dev/sdd1(259)

### zero flag at raid create, *and* at conversion to pool volume

[root@host-076 ~]# lvcreate --type raid1 -m 1 --zero n -L 4M -n meta snapper_thinp
  WARNING: Logical volume snapper_thinp/meta not zeroed.
  Logical volume "meta" created.

[root@host-076 ~]# lvcreate --type raid1 -m 1 --zero n -L 1G -n POOL snapper_thinp
  WARNING: Logical volume snapper_thinp/POOL not zeroed.
  Logical volume "POOL" created.
[root@host-076 ~]# lvs -a -o +devices
  LV              VG            Attr       LSize Cpy%Sync Devices
  POOL            snapper_thinp rwi-a-r--- 1.00g 43.75    POOL_rimage_0(0),POOL_rimage_1(0)
  [POOL_rimage_0] snapper_thinp Iwi-aor--- 1.00g          /dev/sdd1(3)
  [POOL_rimage_1] snapper_thinp Iwi-aor--- 1.00g          /dev/sdc1(3)
  [POOL_rmeta_0]  snapper_thinp ewi-aor--- 4.00m          /dev/sdd1(2)
  [POOL_rmeta_1]  snapper_thinp ewi-aor--- 4.00m          /dev/sdc1(2)
  meta            snapper_thinp rwi-a-r--- 4.00m 100.00   meta_rimage_0(0),meta_rimage_1(0)
  [meta_rimage_0] snapper_thinp iwi-aor--- 4.00m          /dev/sdd1(1)
  [meta_rimage_1] snapper_thinp iwi-aor--- 4.00m          /dev/sdc1(1)
  [meta_rmeta_0]  snapper_thinp ewi-aor--- 4.00m          /dev/sdd1(0)
  [meta_rmeta_1]  snapper_thinp ewi-aor--- 4.00m          /dev/sdc1(0)

[root@host-076 ~]# lvs --noheadings -o zero --select 'lvname=POOL'
  unknown

[root@host-076 ~]# dmsetup table
snapper_thinp-meta_rmeta_0: 0 8192 linear 8:49 2048
snapper_thinp-POOL: 0 2097152 raid raid1 3 0 region_size 1024 2 253:7 253:8 253:9 253:10
snapper_thinp-meta: 0 8192 raid raid1 3 0 region_size 1024 2 253:2 253:3 253:4 253:5
snapper_thinp-POOL_rmeta_1: 0 8192 linear 8:33 18432
snapper_thinp-POOL_rimage_1: 0 2097152 linear 8:33 26624
snapper_thinp-POOL_rmeta_0: 0 8192 linear 8:49 18432
vg_host076-lv_swap: 0 1671168 linear 252:2 14075904
snapper_thinp-POOL_rimage_0: 0 2097152 linear 8:49 26624
vg_host076-lv_root: 0 14073856 linear 252:2 2048
snapper_thinp-meta_rimage_1: 0 8192 linear 8:33 10240
snapper_thinp-meta_rimage_0: 0 8192 linear 8:49 10240
snapper_thinp-meta_rmeta_1: 0 8192 linear 8:33 2048

### Zero flag with the convert

[root@host-076 ~]# lvconvert --thinpool snapper_thinp/POOL --poolmetadata meta --yes --zero n
  WARNING: Converting logical volume snapper_thinp/POOL and snapper_thinp/meta to pool's data and metadata volumes.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
  Converted snapper_thinp/POOL to thin pool.
[root@host-076 ~]# lvs --noheadings -o zero --select 'lvname=POOL'

[root@host-076 ~]# dmsetup table
snapper_thinp-POOL_tdata_rimage_1: 0 2097152 linear 8:33 26624
snapper_thinp-POOL_tdata_rimage_0: 0 2097152 linear 8:49 26624
snapper_thinp-POOL_tmeta_rimage_1: 0 8192 linear 8:33 10240
snapper_thinp-POOL_tmeta_rimage_0: 0 8192 linear 8:49 10240
snapper_thinp-POOL: 0 2097152 thin-pool 253:6 253:11 128 0 1 skip_block_zeroing
snapper_thinp-POOL_tdata_rmeta_1: 0 8192 linear 8:33 18432
snapper_thinp-POOL_tdata_rmeta_0: 0 8192 linear 8:49 18432
snapper_thinp-POOL_tdata: 0 2097152 raid raid1 3 0 region_size 1024 2 253:7 253:8 253:9 253:10
snapper_thinp-POOL_tmeta: 0 8192 raid raid1 3 0 region_size 1024 2 253:2 253:3 253:4 253:5
snapper_thinp-POOL_tmeta_rmeta_1: 0 8192 linear 8:33 2048
snapper_thinp-POOL_tmeta_rmeta_0: 0 8192 linear 8:49 2048

[root@host-076 ~]# lvs -a -o +devices
  LV                    VG            Attr       LSize Pool Origin Data%  Meta%  Cpy%Sync Devices
  POOL                  snapper_thinp twi-a-t--- 1.00g             0.00   0.98            POOL_tdata(0)
  [POOL_tdata]          snapper_thinp rwi-aor--- 1.00g                           100.00   POOL_tdata_rimage_0(0),POOL_tdata_rimage_1(0)
  [POOL_tdata_rimage_0] snapper_thinp iwi-aor--- 1.00g                                    /dev/sdd1(3)
  [POOL_tdata_rimage_1] snapper_thinp iwi-aor--- 1.00g                                    /dev/sdc1(3)
  [POOL_tdata_rmeta_0]  snapper_thinp ewi-aor--- 4.00m                                    /dev/sdd1(2)
  [POOL_tdata_rmeta_1]  snapper_thinp ewi-aor--- 4.00m                                    /dev/sdc1(2)
  [POOL_tmeta]          snapper_thinp ewi-aor--- 4.00m                           100.00   POOL_tmeta_rimage_0(0),POOL_tmeta_rimage_1(0)
  [POOL_tmeta_rimage_0] snapper_thinp iwi-aor--- 4.00m                                    /dev/sdd1(1)
  [POOL_tmeta_rimage_1] snapper_thinp iwi-aor--- 4.00m                                    /dev/sdc1(1)
  [POOL_tmeta_rmeta_0]  snapper_thinp ewi-aor--- 4.00m                                    /dev/sdd1(0)
  [POOL_tmeta_rmeta_1]  snapper_thinp ewi-aor--- 4.00m                                    /dev/sdc1(0)
  [lvol0_pmspare]       snapper_thinp ewi------- 4.00m                                    /dev/sdd1(259)

Version-Release number of selected component (if applicable):
2.6.32-676.el6.x86_64

lvm2-2.02.143-10.el6    BUILT: Thu Nov 24 03:58:43 CST 2016
lvm2-libs-2.02.143-10.el6    BUILT: Thu Nov 24 03:58:43 CST 2016
lvm2-cluster-2.02.143-10.el6    BUILT: Thu Nov 24 03:58:43 CST 2016
udev-147-2.73.el6_8.2    BUILT: Tue Aug 30 08:17:19 CDT 2016
device-mapper-1.02.117-10.el6    BUILT: Thu Nov 24 03:58:43 CST 2016
device-mapper-libs-1.02.117-10.el6    BUILT: Thu Nov 24 03:58:43 CST 2016
device-mapper-event-1.02.117-10.el6    BUILT: Thu Nov 24 03:58:43 CST 2016
device-mapper-event-libs-1.02.117-10.el6    BUILT: Thu Nov 24 03:58:43 CST 2016
There are two somewhat different meanings here; --zero is essentially an overloaded option.

For a 'standard' LV (linear, stripe, mirror, raid) it controls what happens to the newly created LV: whether it is zeroed or not. In this case you generally always want zeroing, to prevent unwanted signature detection and the trouble that follows from it.

For a thin pool it controls the behaviour of provisioned chunks on first use. So while in the first case the setting is (at the moment) not stored in the metadata at all and is only applied during initial activation (and is technically lost if you power off and reboot the machine at just the right time), in the thin-pool case it is a persistent feature of the pool.

So although the flag means something similar in both cases, it is really quite different, and today I would probably pick a separate option for the thin-pool case, but anyway... Hopefully this explains why we can't reuse the --zero setting from the raid LV for anything on thin pools.
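In other words, the persistent thin-pool zeroing property has to be set on the pool itself. A minimal sketch of the intended workflow, assuming a VG named snapper_thinp with POOL and meta created as in the description (these are administrative commands that require root and real block devices):

```shell
# Set zeroing of provisioned chunks explicitly at conversion time;
# this is the persistent thin-pool property, independent of the
# --zero used at lvcreate time for the raid LV:
lvconvert --thinpool snapper_thinp/POOL --poolmetadata meta --yes --zero n

# Or toggle the property later on the existing pool:
lvchange --zero n snapper_thinp/POOL

# Verify the persistent flag stored in the pool metadata
# (shows "zero" when enabled, empty when disabled):
lvs --noheadings -o zero snapper_thinp/POOL
```

The transient lvcreate-time meaning, by contrast, leaves no trace in the metadata, which is why `lvs -o zero` reports "unknown" for the raid LV before conversion.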
Red Hat Enterprise Linux 6 is in the Production 3 Phase. During the Production 3 Phase, Critical impact Security Advisories (RHSAs) and selected Urgent Priority Bug Fix Advisories (RHBAs) may be released as they become available. The official life cycle policy can be reviewed here: http://redhat.com/rhel/lifecycle

This issue does not meet the inclusion criteria for the Production 3 Phase and will be marked as CLOSED/WONTFIX.

If this remains a critical requirement, please contact Red Hat Customer Support to request a re-evaluation of the issue, citing a clear business justification. Note that a strong business justification will be required for re-evaluation. Red Hat Customer Support can be contacted via the Red Hat Customer Portal at the following URL: https://access.redhat.com/