Bug 2064838

Summary: Is the (z) zero bit relevant to VDO devices, and if so, why is it not getting set?
Product: Red Hat Enterprise Linux 9
Component: lvm2
Sub component: VDO
Version: 9.0
Type: Bug
Status: ASSIGNED
Severity: low
Priority: unspecified
Hardware: x86_64
OS: Linux
Target Milestone: rc
Target Release: ---
Reporter: Corey Marthaler <cmarthal>
Assignee: Zdenek Kabelac <zkabelac>
QA Contact: cluster-qe <cluster-qe>
CC: agk, awalsh, heinzm, jbrassow, prajnoha, zkabelac

Description Corey Marthaler 2022-03-16 17:48:00 UTC
Description of problem:
       The relevant lv_attr bit, per lvs(8), is:
       8  Newly-allocated data blocks are overwritten with blocks of (z)eroes before use.
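
For reference, a quick way to pull that bit out of the attr string (a sketch; VG/LV is a placeholder, not a volume from this report):

# The z bit is the 8th character of lv_attr ('-' when not set).
lvs --noheadings -o lv_attr VG/LV | awk '{ print substr($1, 8, 1) }'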


# CACHE
[root@hayes-01 ~]# lvcreate --yes --type cache-pool -n my_cachepool1 --zero y -L 2G  test
  Option --zero is unsupported with cache pools.
  Run `lvcreate --help' for more information.


# THINP (notice the z bit gets set)
[root@hayes-01 ~]# lvcreate --yes --type thin-pool -n my_thinpool1 --zero y -L 2G test /dev/sdg1 
  Thin pool volume with chunk size 64.00 KiB can address at most <15.88 TiB of data.
  Logical volume "my_thinpool1" created.
[root@hayes-01 ~]# lvcreate --yes -n my_virt1 -V 10G test/my_thinpool1
  WARNING: Sum of all thin volume sizes (10.00 GiB) exceeds the size of thin pool test/my_thinpool1 (2.00 GiB).
  WARNING: You have not turned on protection against thin pools running out of space.
  WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full.
  Logical volume "my_virt1" created.
[root@hayes-01 ~]# lvs -a -o +devices
  LV                   VG   Attr       LSize  Pool         Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices              
  [lvol0_pmspare]      test ewi-------  4.00m                                                             /dev/sdg1(0)         
  my_thinpool1         test twi-aotz--  2.00g                     0.00   11.13                            my_thinpool1_tdata(0)
  [my_thinpool1_tdata] test Twi-ao----  2.00g                                                             /dev/sdg1(1)         
  [my_thinpool1_tmeta] test ewi-ao----  4.00m                                                             /dev/sdg1(513)       
  my_virt1             test Vwi-a-tz-- 10.00g my_thinpool1        0.00                                                         


# VDO - 2 cmds (notice the z bit does NOT get set) 
[root@hayes-01 ~]# lvcreate --yes --type linear -n vdo_pool --zero y -L 25G test
  Wiping vdo signature on /dev/test/vdo_pool.
  Logical volume "vdo_pool" created.
[root@hayes-01 ~]# lvconvert --yes --type vdo-pool -n vdo_lv -V 2T test/vdo_pool
  WARNING: Converting logical volume test/vdo_pool to VDO pool volume with formating.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
    The VDO volume can address 22 GB in 11 data slabs, each 2 GB.
    It can grow to address at most 16 TB of physical storage in 8192 slabs.
    If a larger maximum size might be needed, use bigger slabs.
  Logical volume "vdo_lv" created.
  Converted test/vdo_pool to VDO pool volume and created virtual test/vdo_lv VDO volume.
[root@hayes-01 ~]# lvs -a -o +devices
  LV               VG   Attr       LSize  Pool     Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices          
  vdo_lv           test vwi-a-v---  2.00t vdo_pool        0.00                                    vdo_pool(0)      
  vdo_pool         test dwi------- 25.00g                 12.06                                   vdo_pool_vdata(0)
  [vdo_pool_vdata] test Dwi-ao---- 25.00g                                                         /dev/sdc1(0)     



# VDO - 1 cmd (notice the z bit does NOT get set)
[root@hayes-01 ~]# lvcreate --yes --type vdo -n vdo_lv --zero y -L 25G -V 2T test 
  Wiping vdo signature on /dev/test/vpool0.
    The VDO volume can address 22 GB in 11 data slabs, each 2 GB.
    It can grow to address at most 16 TB of physical storage in 8192 slabs.
    If a larger maximum size might be needed, use bigger slabs.
  Logical volume "vdo_lv" created.
[root@hayes-01 ~]# lvs -a -o +devices
  LV             VG   Attr       LSize  Pool   Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices        
  vdo_lv         test vwi-a-v---  2.00t vpool0        0.00                                    vpool0(0)      
  vpool0         test dwi------- 25.00g               12.06                                   vpool0_vdata(0)
  [vpool0_vdata] test Dwi-ao---- 25.00g                                                       /dev/sdc1(0)   


Version-Release number of selected component (if applicable):
kernel-5.14.0-70.el9    BUILT: Thu Feb 24 05:48:54 PM CST 2022
lvm2-2.03.14-4.el9    BUILT: Wed Feb 16 06:01:21 AM CST 2022
lvm2-libs-2.03.14-4.el9    BUILT: Wed Feb 16 06:01:21 AM CST 2022

vdo-8.1.1.360-1.el9    BUILT: Sat Feb 12 11:34:09 PM CST 2022
kmod-kvdo-8.1.1.360-15.el9    BUILT: Mon Feb 28 12:06:18 PM CST 2022

Comment 1 Corey Marthaler 2022-03-17 01:46:46 UTC
While we're at it, the "zero" field should also be set if relevant.

[root@hayes-01 ~]# lvs -a -o +devices,zero
  LV               VG         Attr       LSize  Pool     Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices           Zero   
  vdo_lv           vdo_sanity vwi-a-v---  2.00t vdo_pool        0.00                                    vdo_pool(0)       unknown
  vdo_pool         vdo_sanity dwi------- 25.00g                 12.06                                   vdo_pool_vdata(0) unknown
  [vdo_pool_vdata] vdo_sanity Dwi-ao---- 25.00g                                                         /dev/sdc1(0)      unknown
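
For comparison, the same field on a thin pool reports an actual setting rather than "unknown" (a sketch; the pool name is a placeholder):

# Expected to print the thin pool's zeroing setting instead of "unknown".
lvs -o lv_name,zero vdo_sanity/POOL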

Comment 2 Corey Marthaler 2022-03-17 02:16:42 UTC
[root@hayes-01 ~]# lvchange --zero y vdo_sanity/vdo_lv
  Command on LV vdo_sanity/vdo_lv uses options that require LV types thinpool .
  Command not permitted on LV vdo_sanity/vdo_lv.
[root@hayes-01 ~]# lvchange --zero y vdo_sanity/vdo_pool
  Command on LV vdo_sanity/vdo_pool uses options that require LV types thinpool .
  Command not permitted on LV vdo_sanity/vdo_pool.
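
For contrast, the same option is accepted on thin pools, where zeroing really is switchable (a sketch; the pool name is a placeholder):

# Only thin pools currently allow toggling the zero flag after creation.
lvchange --zero n vdo_sanity/POOL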

Comment 3 Zdenek Kabelac 2022-03-17 10:23:42 UTC
ATM the VDO target always operates on 4K blocks, which are always zeroed for 'partial writes'.

So there is no option to enable/disable this logic.

So there are two possible views: since we don't provide any option to change this logic, there is no point in reporting anything about it; or we may signal that this feature is always on.

However, if we now started reporting every vdopool as zeroed, it could confuse users, since it changes the reported info compared to older versions.

So it's more a question of whether we want such a change, and whether VDO itself has any plans to change this behavior in the future to make the flag meaningful.
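
One quick way to observe the zero-before-use semantics the (z) bit describes (a sketch; assumes the freshly created, never-written test/vdo_lv from the description, and only reads from it):

# A never-written 4K logical block reads back as all zeroes,
# matching the semantics of the (z) attr bit.
dd if=/dev/test/vdo_lv bs=4096 count=1 iflag=direct status=none | hexdump -C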