Bug 2070777
| Summary: | Allow certain VDO volume properties to be changed after creation | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 9 | Reporter: | bjohnsto |
| Component: | lvm2 | Assignee: | Zdenek Kabelac <zkabelac> |
| lvm2 sub component: | VDO | QA Contact: | cluster-qe <cluster-qe> |
| Status: | CLOSED ERRATA | Docs Contact: | |
| Severity: | medium | | |
| Priority: | unspecified | CC: | agk, awalsh, cmarthal, heinzm, jbrassow, mcsontos, prajnoha, zkabelac |
| Version: | unspecified | Keywords: | Triaged |
| Target Milestone: | rc | Flags: | pm-rhel: mirror+ |
| Target Release: | --- | | |
| Hardware: | Unspecified | OS: | Unspecified |
| Fixed In Version: | lvm2-2.03.21-1.el9 | Doc Type: | If docs needed, set a value |
| Cloned To: | 2100608 (view as bug list) | | |
| Last Closed: | 2023-11-07 08:53:27 UTC | Type: | Bug |
| Bug Blocks: | 2100608 | | |
Description (bjohnsto, 2022-03-31 22:13:25 UTC)
With lvchange we should support these as modifiable for an already existing VDO pool:

- vdo_max_discard
- vdo_block_map_period
- vdo_block_map_cache_size_mb
- vdo_ack_threads
- vdo_bio_threads
- vdo_bio_rotation
- vdo_cpu_threads
- vdo_hash_zone_threads
- vdo_logical_threads
- vdo_physical_threads

To handle this in a standard lvm2 way, we will introduce support for a --vdosettings option (just like --cachesettings).

I believe some (all?) of these settings require the volume to be completely stopped and started to take effect. A simple suspend/resume is not sufficient to put them in place. Bruce, can you confirm this?

OK - it would be good to know which of these can be applied by just suspend/resume and which need full deactivation and activation. lvchange can print an info message about when the various options take effect - but I'd need to further enhance the code so the internal API can give the user proper info about when the change will really happen. So - will all changes always happen with the next activation? lvm2 can deactivate and activate the VDO LV if it is unused - do we want this? (Or is a message such as 'Change will apply with next activation...' what we want?)

Here is the current list of implemented support:

=== VDOPOOL lvchange offline ===
- ack_threads
- bio_rotation
- bio_threads
- block_map_cache_size_mb
- block_map_era_length
- block_map_period (alias for block_map_era_length)
- cpu_threads
- hash_zone_threads
- logical_threads
- max_discard
- physical_threads

=== VDOPOOL lvchange online ===
- use_compression
- use_deduplication

=== VDOPOOL no lvchange (only lvcreate/lvconvert) ===
- check_point_frequency
- index_memory_size_mb
- minimum_io_size
- slab_size_mb
- use_metadata_hints
- use_sparse_index

Supported syntax for the --vdosettings option:

    lvcreate --vdosettings 'vdo_cpu_threads=1' ...
    lvcreate --vdosettings 'cputhreads=1' ...

The prefixes 'vdo_' and 'vdo_use_' can be skipped, as can any '_' in the names.
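The prefix and underscore folding described above can be sketched as follows. This is a hypothetical illustration of the documented matching rule, not lvm2's actual parser; the setting list here is a small sample, not the full set.

```python
# Hypothetical sketch of --vdosettings name matching: the 'vdo_' /
# 'vdo_use_' prefixes and all underscores are optional in input names.

CANONICAL_SETTINGS = {
    "vdo_cpu_threads",
    "vdo_ack_threads",
    "vdo_block_map_cache_size_mb",
    "vdo_use_compression",
    "vdo_use_deduplication",
}

def _fold(name: str) -> str:
    """Strip underscores and lowercase, giving the comparison key."""
    return name.replace("_", "").lower()

def resolve_setting(user_name: str) -> str:
    """Map a user-supplied setting name to its canonical form, or raise."""
    key = _fold(user_name)
    for canon in CANONICAL_SETTINGS:
        folded = _fold(canon)
        # Accept the full name, or the name without the 'vdo' / 'vdouse' prefix.
        if key in (folded, folded.removeprefix("vdo"), folded.removeprefix("vdouse")):
            return canon
    raise ValueError(f'Unknown VDO setting "{user_name}".')
```

Note that under this rule 'block_map_cache_size' (without the '_mb' suffix) does not resolve, which matches the lvchange errors shown later in this bug.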
With upstream patch (man & tests included): https://listman.redhat.com/archives/lvm-devel/2022-May/024180.html

All the issues in the 8.7 version of this RFE exist in rhel9.1 as well:
https://bugzilla.redhat.com/show_bug.cgi?id=2100608#c12
https://bugzilla.redhat.com/show_bug.cgi?id=2100608#c13
https://bugzilla.redhat.com/show_bug.cgi?id=2100608#c14
https://bugzilla.redhat.com/show_bug.cgi?id=2100608#c15

Current upstream functionality should match the documentation from comment 4. (lvs needs to follow the documented full names of VDO settings.)

According to comment #4, "block_map_cache_size_mb" should now work. Is it your design that *ONLY* "block_map_cache_size_mb" works for setting, and none of the other variants work (like they do with the other attributes)? And that "block_map_cache_size_mb" *DOESN'T* work to view the attribute - *ONLY* "vdo_block_map_cache_size" works for viewing? So the way to set will NOT work for viewing, and the way to view will NOT work for setting?

    [root@virt-499 ~]# lvs -a -o +devices
      LV               VG         Attr       LSize   Pool     Origin Data%  Devices
      vdo_lv           vdo_sanity vwi-a-v--- 100.00g vdo_pool        0.00   vdo_pool(0)
      vdo_pool         vdo_sanity dwi------- 10.00g                  40.04  vdo_pool_vdata(0)
      [vdo_pool_vdata] vdo_sanity Dwi-ao---- 10.00g                         /dev/sda1(0)

    [root@virt-499 ~]# lvs -a -o +devices,block_map_cache_size_mb
      [...]
      Unrecognised field: block_map_cache_size_mb

    [root@virt-499 ~]# lvs -a -o +devices,block_map_cache_size
      [...]
      Unrecognised field: block_map_cache_size

    [root@virt-499 ~]# lvs -a -o +devices,vdo_block_map_cache_size_mb
      [...]
      Unrecognised field: vdo_block_map_cache_size_mb

    [root@virt-499 ~]# lvs -a -o +devices,vdo_block_map_cache_size
      LV               VG         Attr       LSize   Pool     Origin Data%  Devices           VDOBlockMapCacheSize
      vdo_lv           vdo_sanity vwi-a-v--- 100.00g vdo_pool        0.00   vdo_pool(0)       128.00m
      vdo_pool         vdo_sanity dwi------- 10.00g                  40.04  vdo_pool_vdata(0) 128.00m
      [vdo_pool_vdata] vdo_sanity Dwi-ao---- 10.00g                         /dev/sda1(0)

    [root@virt-499 ~]# vgchange -an vdo_sanity
      0 logical volume(s) in volume group "vdo_sanity" now active

    [root@virt-499 ~]# lvchange --vdosettings 'vdo_block_map_cache_size_mb=256' vdo_sanity/vdo_lv
      Logical volume vdo_sanity/vdo_lv changed.

    [root@virt-499 ~]# lvchange --vdosettings 'block_map_cache_size_mb=256' vdo_sanity/vdo_lv
      Logical volume vdo_sanity/vdo_lv changed.

    [root@virt-499 ~]# lvchange --vdosettings 'block_map_cache_size=256' vdo_sanity/vdo_lv
      Unknown VDO setting "block_map_cache_size".

    [root@virt-499 ~]# lvchange --vdosettings 'vdo_block_map_cache_size=256' vdo_sanity/vdo_lv
      Unknown VDO setting "vdo_block_map_cache_size".

Correct. The input side with the _mb suffix is there to emphasize that the input number uses a fixed MiB unit. The lvs output side, on the other hand, is unit-free and can be printed in any user-chosen output format. Some form of 'aliasing' argument has not yet been designed. Comment #4 is about input parameters for the --vdosettings option; output parameters for 'lvs' are different, as they have different capabilities. However, the minor potential for user confusion here is acknowledged.

Marking Verified:Tested with the caveats listed in the other bugs to come from this one.
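The set/view asymmetry discussed above can be summarized as a simple mapping. This is an illustrative table built from the transcript, not an lvm2 data structure; only the settings exercised in this bug are listed.

```python
# Illustration of the naming asymmetry: the --vdosettings input name
# carries an explicit '_mb' unit suffix where applicable, while the
# corresponding lvs report field is unit-free and carries a 'vdo_'
# prefix instead. (Hypothetical mapping, assembled from the transcript.)

SET_TO_VIEW = {
    "block_map_cache_size_mb": "vdo_block_map_cache_size",
    "block_map_era_length": "vdo_block_map_era_length",
    "ack_threads": "vdo_ack_threads",
    "bio_rotation": "vdo_bio_rotation",
    "bio_threads": "vdo_bio_threads",
}

def report_field(setting: str) -> str:
    """Return the lvs -o field that displays a given --vdosettings name."""
    return SET_TO_VIEW[setting]
```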
kernel-5.14.0-332.el9    BUILT: Mon Jun 26 06:16:51 PM CEST 2023
lvm2-2.03.21-2.el9       BUILT: Thu May 25 12:03:04 AM CEST 2023
lvm2-libs-2.03.21-2.el9  BUILT: Thu May 25 12:03:04 AM CEST 2023

SCENARIO - offline_vdo_property_alteration: Create a vdo volume, deactivate it, then change the supported OFFLINE properties post creation (bug 2100608|2070777|2108239)

OFFLINE vdo alteration properties/limits to attempt:
    ack_threads              100
    bio_rotation            1024
    bio_threads              100
    block_map_cache_size_mb 16777215
    block_map_era_length    16380

deactivating LV vdo_lv on virt-482.cluster-qe.lab.eng.brq.redhat.com
    lvchange --yes -an vdo_sanity/vdo_lv

----------------------------------------
[ack_threads]
LIMIT attempt ack_threads setting for /dev/vdo_sanity/vdo_lv: 100
    lvchange --vdosettings 'ack_threads=101' /dev/vdo_sanity/vdo_lv
      VDO ack threads 101 is out of range [0..100].
CURRENT vdo_ack_threads for /dev/vdo_sanity/vdo_lv: 1
    lvchange --vdosettings 'ack_threads=2' /dev/vdo_sanity/vdo_lv
      Logical volume vdo_sanity/vdo_lv changed.
ALTERED vdo_ack_threads setting for /dev/vdo_sanity/vdo_lv: 2

----------------------------------------
[bio_rotation]
LIMIT attempt bio_rotation setting for /dev/vdo_sanity/vdo_lv: 1024
    lvchange --vdosettings 'bio_rotation=1025' /dev/vdo_sanity/vdo_lv
      VDO bio rotation 1025 is out of range [1..1024].
CURRENT vdo_bio_rotation for /dev/vdo_sanity/vdo_lv: 64
    lvchange --vdosettings 'bio_rotation=65' /dev/vdo_sanity/vdo_lv
      Logical volume vdo_sanity/vdo_lv changed.
ALTERED vdo_bio_rotation setting for /dev/vdo_sanity/vdo_lv: 65

----------------------------------------
[bio_threads]
LIMIT attempt bio_threads setting for /dev/vdo_sanity/vdo_lv: 100
    lvchange --vdosettings 'bio_threads=101' /dev/vdo_sanity/vdo_lv
      VDO bio threads 101 is out of range [1..100].
CURRENT vdo_bio_threads for /dev/vdo_sanity/vdo_lv: 4
    lvchange --vdosettings 'bio_threads=5' /dev/vdo_sanity/vdo_lv
      Logical volume vdo_sanity/vdo_lv changed.
ALTERED vdo_bio_threads setting for /dev/vdo_sanity/vdo_lv: 5

----------------------------------------
[block_map_cache_size_mb]
LIMIT attempt block_map_cache_size_mb setting for /dev/vdo_sanity/vdo_lv: 16777215
    lvchange --vdosettings 'block_map_cache_size_mb=16777216' /dev/vdo_sanity/vdo_lv
      VDO block map cache size 16777216 MiB is out of range [128..16777215].
CURRENT vdo_block_map_cache_size for /dev/vdo_sanity/vdo_lv: 128.00m
    lvchange --vdosettings 'block_map_cache_size_mb=129' /dev/vdo_sanity/vdo_lv
      Logical volume vdo_sanity/vdo_lv changed.
ALTERED vdo_block_map_cache_size setting for /dev/vdo_sanity/vdo_lv: 129.00m

----------------------------------------
[block_map_era_length]
LIMIT attempt block_map_era_length setting for /dev/vdo_sanity/vdo_lv: 16380
    lvchange --vdosettings 'block_map_era_length=16381' /dev/vdo_sanity/vdo_lv
      VDO block map era length 16381 is out of range [1..16380].
CURRENT vdo_block_map_era_length for /dev/vdo_sanity/vdo_lv: 16380
    lvchange --vdosettings 'block_map_era_length=16379' /dev/vdo_sanity/vdo_lv
      Logical volume vdo_sanity/vdo_lv changed.
ALTERED vdo_block_map_era_length setting for /dev/vdo_sanity/vdo_lv: 16379

SCENARIO - online_vdo_property_alteration: Create a vdo volume, then change the supported ONLINE properties post creation (bug 2100608|2070777)

    lvconvert --yes --type vdo-pool -n vdo_lv -V100G vdo_sanity/vdo_pool
      The VDO volume can address 6 GB in 3 data slabs, each 2 GB.
      It can grow to address at most 16 TB of physical storage in 8192 slabs.
      If a larger maximum size might be needed, use bigger slabs.
      Logical volume "vdo_lv" created.
      Converted vdo_sanity/vdo_pool to VDO pool volume and created virtual vdo_sanity/vdo_lv VDO volume.
      WARNING: Converting logical volume vdo_sanity/vdo_pool to VDO pool volume with formatting.
      THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
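The offline limit checks exercised above can be sketched as a small validator. The ranges here are taken from the error messages in the transcript, not from lvm2 source, so treat them as observed behavior rather than a specification.

```python
# Sketch of the offline-settings range checks, with bounds copied from
# the lvchange error messages above (observed, not from lvm2 source).

OFFLINE_SETTING_RANGES = {
    "ack_threads": (0, 100),
    "bio_rotation": (1, 1024),
    "bio_threads": (1, 100),
    "block_map_cache_size_mb": (128, 16777215),
    "block_map_era_length": (1, 16380),
}

def validate(setting: str, value: int) -> None:
    """Raise if value falls outside the range lvchange reported."""
    lo, hi = OFFLINE_SETTING_RANGES[setting]
    if not lo <= value <= hi:
        # Mirrors the shape of lvchange's message, e.g.
        # "VDO ack threads 101 is out of range [0..100]."
        raise ValueError(
            f"VDO {setting.replace('_', ' ')} {value} is out of range [{lo}..{hi}]."
        )
```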
vdo alteration properties to attempt:

[compression]
PRE compression setting for /dev/vdo_sanity/vdo_lv: True
Setting compression to False for vdo_sanity/vdo_lv
    lvchange --compression n vdo_sanity/vdo_pool
      Logical volume vdo_sanity/vdo_pool changed.
POST1 compression setting for /dev/vdo_sanity/vdo_lv: False
    lvchange --vdosettings 'use_compression=1' /dev/vdo_sanity/vdo_lv
      Logical volume vdo_sanity/vdo_lv changed.
POST2 compression setting for /dev/vdo_sanity/vdo_lv: enabled

[deduplication]
PRE deduplication setting for /dev/vdo_sanity/vdo_lv: True
Setting deduplication to False for vdo_sanity/vdo_lv
    lvchange --deduplication n vdo_sanity/vdo_pool
      Logical volume vdo_sanity/vdo_pool changed.
POST1 deduplication setting for /dev/vdo_sanity/vdo_lv: False
    lvchange --vdosettings 'use_deduplication=1' /dev/vdo_sanity/vdo_lv
      Logical volume vdo_sanity/vdo_lv changed.
POST2 deduplication setting for /dev/vdo_sanity/vdo_lv: enabled

SCENARIO - conversion_vdo_property_alteration: Create a pool volume, then change the supported CONVERSION properties during vdo conversion (bug 2100608|2070777|2108227|2108254#c2)

Setting [create]: --vdosettings 'index_memory_size_mb=512'
    lvcreate --yes --type vdo -n index_memory_size_mb --vdosettings 'index_memory_size_mb=512' -L 10G vdo_sanity -V100G
      Wiping vdo signature on /dev/vdo_sanity/vpool0.
      The VDO volume can address 4 GB in 2 data slabs, each 2 GB.
      It can grow to address at most 16 TB of physical storage in 8192 slabs.
      If a larger maximum size might be needed, use bigger slabs.
      Logical volume "index_memory_size_mb" created.
post vdo creation value:512.00m
    lvremove -f vdo_sanity/index_memory_size_mb
      Logical volume "index_memory_size_mb" successfully removed.

Setting [convert]: --vdosettings 'index_memory_size_mb=512'
    lvcreate --yes --type linear -n pool_index_memory_size_mb -L 10G vdo_sanity
      Wiping vdo signature on /dev/vdo_sanity/pool_index_memory_size_mb.
      Logical volume "pool_index_memory_size_mb" created.
    lvconvert --yes --type vdo-pool -n vdo_lv --vdosettings 'index_memory_size_mb=512' -V100G vdo_sanity/pool_index_memory_size_mb
      The VDO volume can address 4 GB in 2 data slabs, each 2 GB.
      It can grow to address at most 16 TB of physical storage in 8192 slabs.
      If a larger maximum size might be needed, use bigger slabs.
      Logical volume "vdo_lv" created.
      Converted vdo_sanity/pool_index_memory_size_mb to VDO pool volume and created virtual vdo_sanity/vdo_lv VDO volume.
      WARNING: Converting logical volume vdo_sanity/pool_index_memory_size_mb to VDO pool volume with formatting.
      THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
post vdo conversion value:512.00m
    lvremove -f vdo_sanity/vdo_lv
      Logical volume "vdo_lv" successfully removed.

----------------------------------------
Setting [create]: --vdosettings 'slab_size_mb=4000'
    lvcreate --yes --type vdo -n slab_size_mb --vdosettings 'slab_size_mb=4000' -L 10G vdo_sanity -V100G
      Wiping vdo signature on /dev/vdo_sanity/vpool0.
      The VDO volume can address 6 GB in 3 data slabs, each 2 GB.
      It can grow to address at most 16 TB of physical storage in 8192 slabs.
      If a larger maximum size might be needed, use bigger slabs.
      Logical volume "slab_size_mb" created.
post vdo creation value:<3.91g
    lvremove -f vdo_sanity/slab_size_mb
      Logical volume "slab_size_mb" successfully removed.

Setting [convert]: --vdosettings 'slab_size_mb=4000'
    lvcreate --yes --type linear -n pool_slab_size_mb -L 10G vdo_sanity
      Wiping vdo signature on /dev/vdo_sanity/pool_slab_size_mb.
      Logical volume "pool_slab_size_mb" created.
    lvconvert --yes --type vdo-pool -n vdo_lv --vdosettings 'slab_size_mb=4000' -V100G vdo_sanity/pool_slab_size_mb
      The VDO volume can address 6 GB in 3 data slabs, each 2 GB.
      It can grow to address at most 16 TB of physical storage in 8192 slabs.
      If a larger maximum size might be needed, use bigger slabs.
      Logical volume "vdo_lv" created.
      Converted vdo_sanity/pool_slab_size_mb to VDO pool volume and created virtual vdo_sanity/vdo_lv VDO volume.
      WARNING: Converting logical volume vdo_sanity/pool_slab_size_mb to VDO pool volume with formatting.
      THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
post vdo conversion value:<3.91g
    lvremove -f vdo_sanity/vdo_lv
      Logical volume "vdo_lv" successfully removed.

----------------------------------------
Setting [create]: --vdosettings 'use_sparse_index=0'
    lvcreate --yes --type vdo -n use_sparse_index --vdosettings 'use_sparse_index=0' -L 10G vdo_sanity -V100G
      Wiping vdo signature on /dev/vdo_sanity/vpool0.
      The VDO volume can address 6 GB in 3 data slabs, each 2 GB.
      It can grow to address at most 16 TB of physical storage in 8192 slabs.
      If a larger maximum size might be needed, use bigger slabs.
      Logical volume "use_sparse_index" created.
post vdo creation value:
    lvremove -f vdo_sanity/use_sparse_index
      Logical volume "use_sparse_index" successfully removed.

Setting [convert]: --vdosettings 'use_sparse_index=0'
    lvcreate --yes --type linear -n pool_use_sparse_index -L 10G vdo_sanity
      Wiping vdo signature on /dev/vdo_sanity/pool_use_sparse_index.
      Logical volume "pool_use_sparse_index" created.
    lvconvert --yes --type vdo-pool -n vdo_lv --vdosettings 'use_sparse_index=0' -V100G vdo_sanity/pool_use_sparse_index
      The VDO volume can address 6 GB in 3 data slabs, each 2 GB.
      It can grow to address at most 16 TB of physical storage in 8192 slabs.
      If a larger maximum size might be needed, use bigger slabs.
      Logical volume "vdo_lv" created.
      Converted vdo_sanity/pool_use_sparse_index to VDO pool volume and created virtual vdo_sanity/vdo_lv VDO volume.
      WARNING: Converting logical volume vdo_sanity/pool_use_sparse_index to VDO pool volume with formatting.
      THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
post vdo conversion value:
    lvremove -f vdo_sanity/vdo_lv
      Logical volume "vdo_lv" successfully removed.
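The repeated sizing messages above ("can grow to address at most 16 TB of physical storage in 8192 slabs") follow from simple arithmetic that can be checked directly. The 8192-slab ceiling below is taken from those messages, not from lvm2 or VDO source.

```python
# Back-of-the-envelope check of the lvconvert sizing messages: VDO's
# maximum addressable physical storage is slab_size * MAX_SLABS.
# MAX_SLABS mirrors the "8192 slabs" figure printed in the transcript.

MAX_SLABS = 8192

def max_physical_tb(slab_size_gb: float) -> float:
    """Maximum physical storage, in TB, addressable with a given slab size."""
    return slab_size_gb * MAX_SLABS / 1024  # GB -> TB

# With the default 2 GB slabs, this reproduces the "at most 16 TB" in
# the output; larger slabs raise the ceiling, hence "use bigger slabs".
```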
----------------------------------------
Setting [create]: --vdosettings 'minimum_io_size=8'
    lvcreate --yes --type vdo -n minimum_io_size --vdosettings 'minimum_io_size=8' -L 10G vdo_sanity -V100G
      Wiping vdo signature on /dev/vdo_sanity/vpool0.
      The VDO volume can address 6 GB in 3 data slabs, each 2 GB.
      It can grow to address at most 16 TB of physical storage in 8192 slabs.
      If a larger maximum size might be needed, use bigger slabs.
      Logical volume "minimum_io_size" created.
post vdo creation value:4.00k
    lvremove -f vdo_sanity/minimum_io_size
      Logical volume "minimum_io_size" successfully removed.

Setting [convert]: --vdosettings 'minimum_io_size=8'
    lvcreate --yes --type linear -n pool_minimum_io_size -L 10G vdo_sanity
      Wiping vdo signature on /dev/vdo_sanity/pool_minimum_io_size.
      Logical volume "pool_minimum_io_size" created.
    lvconvert --yes --type vdo-pool -n vdo_lv --vdosettings 'minimum_io_size=8' -V100G vdo_sanity/pool_minimum_io_size
      The VDO volume can address 6 GB in 3 data slabs, each 2 GB.
      It can grow to address at most 16 TB of physical storage in 8192 slabs.
      If a larger maximum size might be needed, use bigger slabs.
      Logical volume "vdo_lv" created.
      Converted vdo_sanity/pool_minimum_io_size to VDO pool volume and created virtual vdo_sanity/vdo_lv VDO volume.
      WARNING: Converting logical volume vdo_sanity/pool_minimum_io_size to VDO pool volume with formatting.
      THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
post vdo conversion value:4.00k
    lvremove -f vdo_sanity/vdo_lv
      Logical volume "vdo_lv" successfully removed.

----------------------------------------
Setting [create]: --vdosettings 'metadata_hints=0'
    lvcreate --yes --type vdo -n metadata_hints --vdosettings 'metadata_hints=0' -L 10G vdo_sanity -V100G
      Wiping vdo signature on /dev/vdo_sanity/vpool0.
      The VDO volume can address 6 GB in 3 data slabs, each 2 GB.
      It can grow to address at most 16 TB of physical storage in 8192 slabs.
      If a larger maximum size might be needed, use bigger slabs.
      Logical volume "metadata_hints" created.
post vdo creation value:
    lvremove -f vdo_sanity/metadata_hints
      Logical volume "metadata_hints" successfully removed.

Setting [convert]: --vdosettings 'metadata_hints=0'
    lvcreate --yes --type linear -n pool_metadata_hints -L 10G vdo_sanity
      Wiping vdo signature on /dev/vdo_sanity/pool_metadata_hints.
      Logical volume "pool_metadata_hints" created.
    lvconvert --yes --type vdo-pool -n vdo_lv --vdosettings 'metadata_hints=0' -V100G vdo_sanity/pool_metadata_hints
      The VDO volume can address 6 GB in 3 data slabs, each 2 GB.
      It can grow to address at most 16 TB of physical storage in 8192 slabs.
      If a larger maximum size might be needed, use bigger slabs.
      Logical volume "vdo_lv" created.
      Converted vdo_sanity/pool_metadata_hints to VDO pool volume and created virtual vdo_sanity/vdo_lv VDO volume.
      WARNING: Converting logical volume vdo_sanity/pool_metadata_hints to VDO pool volume with formatting.
      THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
post vdo conversion value:

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (lvm2 bug fix and enhancement update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2023:6633