Bug 1683950
| Summary: | vdo pool on top of raid1 does not survive rename | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 8 | Reporter: | Roman Bednář <rbednar> |
| Component: | lvm2 | Assignee: | Zdenek Kabelac <zkabelac> |
| lvm2 sub component: | VDO | QA Contact: | cluster-qe <cluster-qe> |
| Status: | CLOSED ERRATA | Docs Contact: | |
| Severity: | unspecified | | |
| Priority: | high | CC: | agk, awalsh, cmarthal, heinzm, jbrassow, mcsontos, msnitzer, pasik, prajnoha, zkabelac |
| Version: | 8.0 | Keywords: | Triaged |
| Target Milestone: | rc | Flags: | pm-rhel: mirror+ |
| Target Release: | 8.0 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | lvm2-2.03.12-1.el8 | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2021-11-09 19:45:20 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | 1881955, 1888419 | | |
| Bug Blocks: | | | |
Description
Roman Bednář
2019-02-28 08:00:05 UTC
This will require a kernel patch enhancement (opened upstream bug #1881955), so for the moment users may not do online lvrename of VDOPOOL LVs (enforced with upstream patch: https://www.redhat.com/archives/lvm-devel/2020-September/msg00143.html).

As a workaround, the user needs to deactivate the VDO and VDOPOOL LVs first, run lvrename, and activate them again.

Pushed upstream enhancements:

Main change: https://www.redhat.com/archives/lvm-devel/2021-January/msg00019.html
Introduces 'vdo_disabled_features' to eventually disable the new 'online_rename' feature, supported with the new kvdo 6.2.3. With an older kvdo module, online rename remains disabled.

Associated fixes:
https://www.redhat.com/archives/lvm-devel/2021-January/msg00017.html
Fixes removal of _pmspare when the VDO pool is cached; without the fix, 'vgremove' on such a VG can fail with an assert().
https://www.redhat.com/archives/lvm-devel/2021-January/msg00020.html

Tested with: https://www.redhat.com/archives/lvm-devel/2021-January/msg00018.html
Checks rename with a cached and a raid VDO pool volume.

The original scenario listed in comment #0 no longer appears to cause "unknown errors (X)". Marking Verified:Tested in the latest rpms.

```
kernel-4.18.0-310.el8      BUILT: Thu May 27 14:24:00 CDT 2021
lvm2-2.03.12-2.el8         BUILT: Tue Jun  1 06:55:37 CDT 2021
lvm2-libs-2.03.12-2.el8    BUILT: Tue Jun  1 06:55:37 CDT 2021
```

```
[root@hayes-01 ~]# lvcreate --type raid1 -L40G vg
  Logical volume "lvol0" created.

[root@hayes-01 ~]# lvs -a
  LV               VG Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lvol0            vg rwi-a-r--- 40.00g                                     4.61
  [lvol0_rimage_0] vg Iwi-aor--- 40.00g
  [lvol0_rimage_1] vg Iwi-aor--- 40.00g
  [lvol0_rmeta_0]  vg ewi-aor---  4.00m
  [lvol0_rmeta_1]  vg ewi-aor---  4.00m

[root@hayes-01 ~]# lvconvert --vdopool vg/lvol0 -V10G
  WARNING: Converting logical volume vg/lvol0 to VDO pool volume with formating.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
Do you really want to convert vg/lvol0? [y/n]: y
    The VDO volume can address 36 GB in 18 data slabs, each 2 GB.
    It can grow to address at most 16 TB of physical storage in 8192 slabs.
    If a larger maximum size might be needed, use bigger slabs.
  Logical volume "lvol1" created.
  Converted vg/lvol0 to VDO pool volume and created virtual vg/lvol1 VDO volume.

[root@hayes-01 ~]# lvs -a
  LV                     VG Attr       LSize  Pool  Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lvol0                  vg dwi------- 40.00g              10.06
  [lvol0_vdata]          vg rwi-aor--- 40.00g                                   18.84
  [lvol0_vdata_rimage_0] vg iwi-aor--- 40.00g
  [lvol0_vdata_rimage_1] vg iwi-aor--- 40.00g
  [lvol0_vdata_rmeta_0]  vg ewi-aor---  4.00m
  [lvol0_vdata_rmeta_1]  vg ewi-aor---  4.00m
  lvol1                  vg vwi-a-v--- 10.00g lvol0         0.00

[root@hayes-01 ~]# lvrename vg/lvol0 vg/vdo_pool
  Renamed "lvol0" to "vdo_pool" in volume group "vg"

[root@hayes-01 ~]# lvs -a
  LV                        VG Attr       LSize  Pool     Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lvol1                     vg vwi-a-v--- 10.00g vdo_pool         0.00
  vdo_pool                  vg dwi------- 40.00g                 10.06
  [vdo_pool_vdata]          vg rwi-aor--- 40.00g                                      34.16
  [vdo_pool_vdata_rimage_0] vg Iwi-aor--- 40.00g
  [vdo_pool_vdata_rimage_1] vg Iwi-aor--- 40.00g
  [vdo_pool_vdata_rmeta_0]  vg ewi-aor---  4.00m
  [vdo_pool_vdata_rmeta_1]  vg ewi-aor---  4.00m

[root@hayes-01 ~]# lvconvert --vdopool vg/lvol0 -V10G
  Failed to find logical volume "vg/lvol0"

[root@hayes-01 ~]# lvconvert --vdopool vg/lvol1 -V10G
  Command on LV vg/lvol1 is invalid on LV with properties: lv_is_virtual .
  Command not permitted on LV vg/lvol1.

[root@hayes-01 ~]# lvs
  LV       VG Attr       LSize  Pool     Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lvol1    vg vwi-a-v--- 10.00g vdo_pool         0.00
  vdo_pool vg dwi------- 40.00g                 10.06

[root@hayes-01 ~]# lvremove -y vg/lvol0
  Failed to find logical volume "vg/lvol0"

[root@hayes-01 ~]# lvs
  LV       VG Attr       LSize  Pool     Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lvol1    vg vwi-a-v--- 10.00g vdo_pool         0.00
  vdo_pool vg dwi------- 40.00g                 10.06

[root@hayes-01 ~]# lvremove -y vg/lvol1
  Logical volume "lvol1" successfully removed.

[root@hayes-01 ~]# lvs
[root@hayes-01 ~]#
```

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (lvm2 bug fix and enhancement update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:4431
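The offline workaround mentioned above (deactivate, rename, reactivate) can be sketched as a shell sequence. This is a minimal sketch, not a tested procedure: it reuses the `vg/lvol0` (VDO pool) and `vg/lvol1` (virtual VDO volume) names from the transcript above, which you would substitute with your own, and it assumes neither LV is mounted or otherwise in use.

```shell
# Offline rename workaround for a VDO pool LV, needed with older kvdo
# modules that lack the online_rename feature.
# Names vg/lvol0 (VDO pool) and vg/lvol1 (VDO volume) follow the
# transcript above; adjust for your setup.

# 1. Deactivate the virtual VDO volume, then the VDO pool.
lvchange -an vg/lvol1
lvchange -an vg/lvol0

# 2. Rename the now-inactive VDO pool.
lvrename vg/lvol0 vg/vdo_pool

# 3. Reactivate the virtual volume; activating it brings the
#    renamed pool up underneath it.
lvchange -ay vg/lvol1
```

With lvm2-2.03.12 and kvdo 6.2.3 or later the deactivation steps are no longer required, since online rename is supported there.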