Bug 1989650

Summary: "vdoimport --dry-run" is broken
Product: Red Hat Enterprise Linux 8
Reporter: Corey Marthaler <cmarthal>
Component: lvm2
Assignee: Zdenek Kabelac <zkabelac>
lvm2 sub component: VDO
QA Contact: cluster-qe <cluster-qe>
Status: CLOSED ERRATA
Severity: medium
Priority: urgent
CC: agk, awalsh, heinzm, jbrassow, lmiksik, mcsontos, prajnoha, zkabelac
Version: 8.5
Keywords: Triaged
Target Milestone: beta
Hardware: x86_64
OS: Linux
Fixed In Version: lvm2-2.03.12-9.el8
Clones: 2005005
Bug Blocks: 1930261, 2005005
Type: Bug
Last Closed: 2021-11-09 19:45:56 UTC

Description Corey Marthaler 2021-08-03 16:37:41 UTC
Description of problem:
[root@hayes-03 ~]# vdoimport -h
vdoimport: Utility to convert VDO volume to VDO LV.

        vdoimport [options] <vdo_device_path>

        Options:
               --dry-run      Print commands without running them


[root@hayes-01 ~]# vdo list
vdoimport_sanity

[root@hayes-01 ~]# grep device /etc/vdoconf.yml
      device: /dev/disk/by-id/scsi-36d094660575ece002291be7e6227ca72-part1

[root@hayes-01 ~]# vdoimport --dry-run
/usr/sbin/vdoimport: line 376: DEVICENAME: unbound variable


[root@hayes-01 ~]# vdoimport --dry-run /dev/disk/by-id/scsi-36d094660575ece002291be7e6227ca72-part1
Device does not exist.
Command failed.
  Volume group "vdovg" not found
  Cannot process volume group vdovg
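
For context, the "unbound variable" failure above is bash's 'set -u' behavior when a variable is referenced without ever being assigned. A minimal sketch of the kind of guard that avoids it (hypothetical, not the actual vdoimport source; the message wording follows the later lvm_import_vdo error):

    #!/bin/bash
    set -euo pipefail

    # Default to an empty string so 'set -u' does not abort when no device
    # argument is given, then fail with a clear message instead.
    DEVICENAME=${1:-}
    if [ -z "$DEVICENAME" ]; then
        echo "$0: Device name is not specified. (see: $0 --help)" >&2
        exit 1
    fi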


Version-Release number of selected component (if applicable):
lvm2-2.03.12-5.el8    BUILT: Tue Jul 13 11:50:03 CDT 2021
lvm2-libs-2.03.12-5.el8    BUILT: Tue Jul 13 11:50:03 CDT 2021
vdo-6.2.5.65-14.el8    BUILT: Thu Jul 22 12:56:43 CDT 2021
kmod-kvdo-6.2.5.65-79.el8    BUILT: Thu Jul 22 12:59:43 CDT 2021

Comment 1 Zdenek Kabelac 2021-08-31 20:11:03 UTC
Pushed https://listman.redhat.com/archives/lvm-devel/2021-August/msg00044.html

This should also improve the --dry-run behavior.

Comment 6 Corey Marthaler 2021-09-14 19:39:55 UTC
The error when no device is given is improved, but nothing happens when a device is given. What is the expected behavior here with the latest scratch build?

lvm2-2.03.12-9.el8    BUILT: Tue Sep 14 09:53:56 CDT 2021
lvm2-libs-2.03.12-9.el8    BUILT: Tue Sep 14 09:53:56 CDT 2021


[root@hayes-03 ~]# vdo list
lvm_import_vdo_sanity

[root@hayes-03 ~]# lvm_import_vdo --yes --dry-run
lvm_import_vdo: Device name is not specified. (see: lvm_import_vdo --help)

[root@hayes-03 ~]# lvm_import_vdo --yes --dry-run /dev/sdd1
[root@hayes-03 ~]# echo $?
0

[root@hayes-03 ~]# lvs
[root@hayes-03 ~]# vdo list
lvm_import_vdo_sanity

Comment 7 Zdenek Kabelac 2021-09-15 12:42:08 UTC
Running --dry-run is probably most useful together with the --verbose option, i.e.:

lvm_import_vdo --yes --dry-run --verbose /dev/sdd1

so that the user can observe what would actually be executed (the main purpose of a dry run).

We might adapt this logic so that invoking '--dry-run' decorates the progress with some extra info, since 'no complaints & no errors' with no log output at all is probably not an understandable outcome.

This also gives us the option to automatically enable 'verbose' together with 'dry-run'.
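
A minimal sketch of the option handling suggested here, with --dry-run implying --verbose (hypothetical, not the shipped lvm_import_vdo code; only the option names come from the tool itself):

    #!/bin/bash
    set -euo pipefail

    DRY_RUN=0
    VERBOSE=0
    for arg in "$@"; do
        case "$arg" in
            --dry-run) DRY_RUN=1 ;;
            --verbose) VERBOSE=1 ;;
        esac
    done

    # Imply --verbose whenever --dry-run is requested, so the user always
    # sees the commands that would have been executed.
    if [ "$DRY_RUN" -eq 1 ]; then
        VERBOSE=1
    fi

    if [ "$VERBOSE" -eq 1 ]; then
        # placeholder message for the sketch
        echo "verbose mode enabled" >&2
    fi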

Comment 8 Corey Marthaler 2021-09-15 19:10:52 UTC
Using --verbose along with --dry-run is far more helpful. That should probably be the default, or else the man page should be updated to tell the user to use --verbose, since right now it's incorrect:

       --dry-run
              Print commands without running them.


[root@hayes-03 ~]# lvm_import_vdo --yes --dry-run /dev/sdd1

[root@hayes-03 ~]# lvm_import_vdo --yes --dry-run --verbose /dev/sdd1
lvm_import_vdo: Checked whether device /dev/sdd1 is already LV (0).
lvm_import_vdo: Getting YAML VDO configuration.
lvm_import_vdo: Found matching device /dev/disk/by-id/scsi-36d09466083d8e100233c17f9212bf63e-part1  8:49
lvm_import_vdo: Converted VDO device has logical/physical size 524288000/468320216 KiB.
lvm_import_vdo: VDO conversion paramaters: allocation {
        vdo_use_compression = 1
        vdo_use_deduplication = 1
        vdo_use_metadata_hints=1
        vdo_minimum_io_size = 4096
        vdo_block_map_cache_size_mb = 128
        vdo_block_map_period = 16380
        vdo_check_point_frequency = 0
        vdo_use_sparse_index = 0
        vdo_index_memory_size_mb = 256
        vdo_slab_size_mb = 128
        vdo_ack_threads = 1
        vdo_bio_threads = 4
        vdo_bio_rotation = 64
        vdo_cpu_threads = 2
        vdo_hash_zone_threads = 1
        vdo_logical_threads = 1
        vdo_physical_threads = 1
        vdo_write_policy = auto
        vdo_max_discard = 1
        vdo_pool_header_size = 0
}
lvm_import_vdo: Stopping VDO volume.
lvm_import_vdo: Dry execution vdo stop --name lvm_import_vdo_sanity
lvm_import_vdo: Moving VDO header by 2MiB.
lvm_import_vdo: Dry execution vdo convert --force --name lvm_import_vdo_sanity
lvm_import_vdo: Dry execution lvm pvcreate -y --dataalignment 2M /dev/sdd1
lvm_import_vdo: Creating VG "" with extent size 8 KiB.
lvm_import_vdo: Dry execution lvm vgcreate -y -v -s 8k vdovg /dev/sdd1
lvm_import_vdo: Creating VDO pool data LV from all extents in volume group vdovg.
lvm_import_vdo: Dry execution lvm lvcreate -Zn -Wn -y -v -l100%VG -n vdolvol_vpool vdovg
lvm_import_vdo: Converting to VDO pool.
lvm_import_vdo: Dry execution lvm lvconvert -y -v --config allocation {
        vdo_use_compression = 1
        vdo_use_deduplication = 1
        vdo_use_metadata_hints=1
        vdo_minimum_io_size = 4096
        vdo_block_map_cache_size_mb = 128
        vdo_block_map_period = 16380
        vdo_check_point_frequency = 0
        vdo_use_sparse_index = 0
        vdo_index_memory_size_mb = 256
        vdo_slab_size_mb = 128
        vdo_ack_threads = 1
        vdo_bio_threads = 4
        vdo_bio_rotation = 64
        vdo_cpu_threads = 2
        vdo_hash_zone_threads = 1
        vdo_logical_threads = 1
        vdo_physical_threads = 1
        vdo_write_policy = auto
        vdo_max_discard = 1
        vdo_pool_header_size = 0
} -Zn -V 524288000k -n vdolvol --type vdo-pool vdovg/vdolvol_vpool
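
The "Dry execution ..." lines above suggest a wrapper that prints a command instead of running it while --dry-run is in effect. A minimal sketch of that pattern (assumed, not the actual lvm_import_vdo implementation; the example commands are taken from the transcript):

    #!/bin/bash
    set -euo pipefail

    DRY_RUN=1   # would normally be set from the --dry-run option

    # Print the command in dry-run mode; otherwise execute it.
    dry_exec() {
        if [ "$DRY_RUN" -eq 1 ]; then
            echo "lvm_import_vdo: Dry execution $*"
        else
            "$@"
        fi
    }

    # Commands taken from the transcript above:
    dry_exec vdo stop --name lvm_import_vdo_sanity
    dry_exec lvm pvcreate -y --dataalignment 2M /dev/sdd1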

Comment 10 Corey Marthaler 2021-09-20 17:14:59 UTC
Are we going to leave this as is for 8.5 then? The user will need --verbose in order to get useful information?

[root@hayes-03 ~]# lvm_import_vdo --yes --dry-run /dev/sdd1
[root@hayes-03 ~]# 

lvm2-2.03.12-10.el8.x86_64

Comment 12 Jonathan Earl Brassow 2021-09-22 14:20:35 UTC
(In reply to Corey Marthaler from comment #10)
> Are we going to leave this as is for 8.5 then? The user will need --verbose
> in order to get useful information?
> 
> [root@hayes-03 ~]# lvm_import_vdo --yes --dry-run /dev/sdd1
> [root@hayes-03 ~]# 
> 
> lvm2-2.03.12-10.el8.x86_64

Yes, that is correct.  The user will need to supply the '--verbose' flag for extra information.

A new bug for 8.6 will be needed to add verbose output by default.

Comment 13 Corey Marthaler 2021-09-22 14:23:03 UTC
Marking Verified:Tested with the latest rpms, based on comments #10 and #12.

lvm2-2.03.12-10.el8    BUILT: Mon Sep 20 03:30:20 CDT 2021
lvm2-libs-2.03.12-10.el8    BUILT: Mon Sep 20 03:30:20 CDT 2021

Comment 16 Corey Marthaler 2021-09-27 16:15:46 UTC
Marking VERIFIED with the latest rpms/kernel:

kernel-4.18.0-345.el8    BUILT: Thu Sep 23 18:34:50 CDT 2021
lvm2-2.03.12-10.el8    BUILT: Mon Sep 20 03:30:20 CDT 2021
lvm2-libs-2.03.12-10.el8    BUILT: Mon Sep 20 03:30:20 CDT 2021
vdo-6.2.5.74-14.el8    BUILT: Fri Aug 20 17:56:40 CDT 2021
kmod-kvdo-6.2.5.72-80.el8    BUILT: Fri Aug 27 10:26:23 CDT 2021


SCENARIO - [vdo_convert_dry_run]
Test the various dry-run conversion options of an existing vdo volume to lvm (bug 1989650)
vdo create --force --name lvm_import_vdo_sanity --vdoLogicalSize 500G --device /dev/sdc1
Creating VDO lvm_import_vdo_sanity
      The VDO volume can address 1 TB in 929 data slabs, each 2 GB.
      It can grow to address at most 16 TB of physical storage in 8192 slabs.
      If a larger maximum size might be needed, use bigger slabs.
Starting VDO lvm_import_vdo_sanity
Starting compression on VDO lvm_import_vdo_sanity
VDO instance 0 volume is ready at /dev/mapper/lvm_import_vdo_sanity

lvm_import_vdo --dry-run --yes /dev/sdc1
lvm_import_vdo --yes --dry-run --verbose /dev/sdc1

vdo remove --name lvm_import_vdo_sanity
Removing VDO lvm_import_vdo_sanity
Stopping VDO lvm_import_vdo_sanity

SCENARIO - [vdo_convert_dry_run_failure_attempts]
Test the various dry-run conversion options when given invalid devices
lvm_import_vdo --yes --dry-run

lvm_import_vdo --dry-run --yes /dev/sdb1

vdo create --force --name lvm_import_vdo_sanity --vdoLogicalSize 500G --device /dev/sdc1
Creating VDO lvm_import_vdo_sanity
      The VDO volume can address 1 TB in 929 data slabs, each 2 GB.
      It can grow to address at most 16 TB of physical storage in 8192 slabs.
      If a larger maximum size might be needed, use bigger slabs.
Starting VDO lvm_import_vdo_sanity
Starting compression on VDO lvm_import_vdo_sanity
VDO instance 1 volume is ready at /dev/mapper/lvm_import_vdo_sanity

lvm_import_vdo --dry-run --yes /dev/sdb1

vdo stop --name lvm_import_vdo_sanity
lvm_import_vdo --dry-run --yes /dev/sdc1
vdo remove --name lvm_import_vdo_sanity
vdo: WARNING - VDO service lvm_import_vdo_sanity already stopped
Removing VDO lvm_import_vdo_sanity
Stopping VDO lvm_import_vdo_sanity

Comment 18 errata-xmlrpc 2021-11-09 19:45:56 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (lvm2 bug fix and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:4431