Bug 2005004
Summary: | what's the plan if a script or user fails to answer "y" to the vdoimport "convert vdovg/vdolvol_vpool" question? [rhel-8.6.0] | ||
---|---|---|---|
Product: | Red Hat Enterprise Linux 8 | Reporter: | RHEL Program Management Team <pgm-rhel-tools> |
Component: | lvm2 | Assignee: | Zdenek Kabelac <zkabelac> |
lvm2 sub component: | VDO | QA Contact: | cluster-qe <cluster-qe> |
Status: | CLOSED ERRATA | Docs Contact: | |
Severity: | medium | ||
Priority: | urgent | CC: | agk, awalsh, cmarthal, heinzm, jbrassow, lmiksik, mcsontos, prajnoha, zkabelac |
Version: | 8.5 | Keywords: | Triaged |
Target Milestone: | rc | ||
Target Release: | --- | ||
Hardware: | x86_64 | ||
OS: | Linux | ||
Whiteboard: | |||
Fixed In Version: | lvm2-2.03.14-1.el8 | Doc Type: | If docs needed, set a value |
Doc Text: | Story Points: | --- | |
Clone Of: | 1988504 | Environment: | |
Last Closed: | 2022-05-10 15:22:14 UTC | Type: | --- |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | Category: | --- | |
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: | |||
Bug Depends On: | 1988504 | ||
Bug Blocks: |
Comment 1
Zdenek Kabelac
2021-09-22 14:34:57 UTC
Marking Verified: Tested in the latest rpms.

kernel-4.18.0-348.4.el8.kpq0    BUILT: Wed Oct 27 15:00:32 CDT 2021
lvm2-2.03.14-1.el8              BUILT: Wed Oct 20 10:18:17 CDT 2021
lvm2-libs-2.03.14-1.el8         BUILT: Wed Oct 20 10:18:17 CDT 2021
lvm2-dbusd-2.03.14-1.el8        BUILT: Wed Oct 20 10:18:48 CDT 2021

SCENARIO - [attempt_no_flag_answer_to_lvm_import_vdo_convert]
Test the procedure for answering no to the lvm_import_vdo conversion question midway through the process (1988504)

vdo create --force --name lvm_import_vdo_sanity --vdoLogicalSize 500G --device /dev/sdd1
Creating VDO lvm_import_vdo_sanity
The VDO volume can address 928 GB in 464 data slabs, each 2 GB.
It can grow to address at most 16 TB of physical storage in 8192 slabs.
If a larger maximum size might be needed, use bigger slabs.
Starting VDO lvm_import_vdo_sanity
Starting compression on VDO lvm_import_vdo_sanity
VDO instance 154 volume is ready at /dev/mapper/lvm_import_vdo_sanity

echo n | lvm_import_vdo /dev/disk/by-id/scsi-36d094660650d1e0022bd29f31e631f3e-part1
Convert VDO device "/dev/disk/by-id/scsi-36d094660650d1e0022bd29f31e631f3e-part1" to VDO LV "vdovg/vdolvol"? [y|N]: No
lvm_import_vdo properly did not complete (and errored non-zero) when answering no

vdo remove --name lvm_import_vdo_sanity
Removing VDO lvm_import_vdo_sanity
Stopping VDO lvm_import_vdo_sanity

Marking Verified in the latest rpms.

kernel-4.18.0-348.4.el8.kpq0    BUILT: Wed Oct 27 15:00:32 CDT 2021
lvm2-2.03.14-1.el8              BUILT: Wed Oct 20 10:18:17 CDT 2021
lvm2-libs-2.03.14-1.el8         BUILT: Wed Oct 20 10:18:17 CDT 2021

SCENARIO - attempt_no_flag_answer_to_lvm_import_vdo_convert:
Test the procedure for answering no to the lvm_import_vdo conversion question midway through the process (1988504)

vdo create --force --name lvm_import_vdo_sanity --vdoLogicalSize 500G --device /dev/sdd1
Creating VDO lvm_import_vdo_sanity
The VDO volume can address 1 TB in 929 data slabs, each 2 GB.
It can grow to address at most 16 TB of physical storage in 8192 slabs.
If a larger maximum size might be needed, use bigger slabs.
Starting VDO lvm_import_vdo_sanity
Starting compression on VDO lvm_import_vdo_sanity
VDO instance 5 volume is ready at /dev/mapper/lvm_import_vdo_sanity

echo n | lvm_import_vdo /dev/disk/by-id/scsi-36d094660575ece002291ba5c230d16c6-part1
Convert VDO device "/dev/disk/by-id/scsi-36d094660575ece002291ba5c230d16c6-part1" to VDO LV "vdovg/vdolvol"? [y|N]: No
lvm_import_vdo properly did not complete (and errored non-zero) when answering no

vdo remove --name lvm_import_vdo_sanity
Removing VDO lvm_import_vdo_sanity
Stopping VDO lvm_import_vdo_sanity

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (lvm2 bug fix and enhancement update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2022:2038
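The verification scenario above boils down to one check: when the conversion prompt is answered "n", lvm_import_vdo must exit non-zero and leave the device untouched. A minimal sketch of that check pattern follows, using a hypothetical stand-in `prompt` function rather than the real tool (which requires root and a VDO-formatted backing device):

```shell
#!/bin/sh
# Stand-in for lvm_import_vdo's confirmation prompt (hypothetical helper,
# not part of lvm2): reads one answer from stdin and "converts" only on
# an explicit "y" — the default answer, as in the real [y|N] prompt, is No.
prompt() {
    read -r answer
    case "$answer" in
        y|Y) echo "Converting..."; return 0 ;;
        *)   echo "Conversion aborted."; return 1 ;;
    esac
}

# The test-harness pattern from the scenario: pipe "n" into the command
# and require a non-zero exit status.
if echo n | prompt; then
    echo "FAIL: conversion proceeded despite answering n"
else
    echo "PASS: non-zero exit when answering n"
fi
```

The logged test drives the real tool the same way: `echo n | lvm_import_vdo <device>`, then asserts that the exit status is non-zero and that the original VDO volume can still be removed with `vdo remove`.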