
Bug 1683950

Summary: vdo pool on top of raid1 does not survive rename
Product: Red Hat Enterprise Linux 8
Reporter: Roman Bednář <rbednar>
Component: lvm2
lvm2 sub component: VDO
Assignee: Zdenek Kabelac <zkabelac>
QA Contact: cluster-qe <cluster-qe>
Status: CLOSED ERRATA
Severity: unspecified
Priority: high
CC: agk, awalsh, cmarthal, heinzm, jbrassow, mcsontos, msnitzer, pasik, prajnoha, zkabelac
Version: 8.0
Target Release: 8.0
Keywords: Triaged
Flags: pm-rhel: mirror+
Target Milestone: rc
Hardware: Unspecified
OS: Unspecified
Fixed In Version: lvm2-2.03.12-1.el8
Doc Type: If docs needed, set a value
Type: Bug
Last Closed: 2021-11-09 19:45:20 UTC
Bug Depends On: 1881955, 1888419

Description Roman Bednář 2019-02-28 08:00:05 UTC
This is a bug for the 8.0 Tech Preview (BZ 1638522); bump to the next release if needed.


#### Create a raid1 volume

# lvcreate --type raid1 -L4G vg
WARNING: vdo signature detected on /dev/vg/lvol0_rmeta_0 at offset 0. Wipe it? [y/n]: y
  Wiping vdo signature on /dev/vg/lvol0_rmeta_0.
  Logical volume "lvol0" created.

# lvs -a
  LV               VG            Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  root             rhel_virt-122 -wi-ao----  <6.20g
  swap             rhel_virt-122 -wi-ao---- 820.00m
  lvol0            vg            rwi-a-r---   4.00g                                    0.00
  [lvol0_rimage_0] vg            Iwi-aor---   4.00g
  [lvol0_rimage_1] vg            Iwi-aor---   4.00g
  [lvol0_rmeta_0]  vg            ewi-aor---   4.00m
  [lvol0_rmeta_1]  vg            ewi-aor---   4.00m

#### Convert it to native VDO volume and pool

# lvconvert --vdopool vg/lvol0 -V10G
  WARNING: Converting logical volume lvol0 to VDO pool volume.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
Do you really want to convert lvol0? [y/n]: y
  device-mapper: remove ioctl on  (253:6) failed: Device or resource busy
  Logical volume "lvol1" created.
  Converted vg/lvol0 to VDO pool volume and created virtual vg/lvol1 VDO volume.


#### Check lvs layout, looking good

# lvs -a
  LV                     VG            Attr       LSize   Pool  Origin Data%  Meta%  Move Log Cpy%Sync Convert
  root                   rhel_virt-122 -wi-ao----  <6.20g
  swap                   rhel_virt-122 -wi-ao---- 820.00m
  lvol0                  vg            dwi-ao----   4.00g              75.05
  [lvol0_vdata]          vg            rwi-aor---   4.00g                                     100.00
  [lvol0_vdata_rimage_0] vg            iwi-aor---   4.00g
  [lvol0_vdata_rimage_1] vg            iwi-aor---   4.00g
  [lvol0_vdata_rmeta_0]  vg            ewi-aor---   4.00m
  [lvol0_vdata_rmeta_1]  vg            ewi-aor---   4.00m
  lvol1                  vg            vwi-a-v---  10.00g lvol0        0.00


#### Rename lvol0 (vdo pool)

# lvrename vg/lvol0 vg/vdo_pool
  Renamed "lvol0" to "vdo_pool" in volume group “vg"


#### Check lvs layout with unknown errors (X)

# lvs -a
  LV                        VG            Attr       LSize   Pool     Origin Data%  Meta%  Move Log Cpy%Sync Convert
  root                      rhel_virt-122 -wi-ao----  <6.20g
  swap                      rhel_virt-122 -wi-ao---- 820.00m
  lvol1                     vg            vwi-XXv-X-  10.00g vdo_pool
  vdo_pool                  vg            dwi-XX--X-   4.00g
  [vdo_pool_vdata]          vg            rwi-aor---   4.00g                                        100.00
  [vdo_pool_vdata_rimage_0] vg            iwi-aor---   4.00g
  [vdo_pool_vdata_rimage_1] vg            iwi-aor---   4.00g
  [vdo_pool_vdata_rmeta_0]  vg            ewi-aor---   4.00m
  [vdo_pool_vdata_rmeta_1]  vg            ewi-aor---   4.00m
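
One way to see what went wrong (a diagnostic sketch, not part of the original report; the hypothesis that the still-active device-mapper devices keep the old "lvol0" names after the online rename, so lvs can no longer match them, is an assumption):

# dmsetup info -c | grep '^vg-'
# dmsetup table | grep vdo

If the listed DM device names still contain "lvol0" while the LVM metadata says "vdo_pool", that mismatch would explain the unknown (X) attribute bits reported by lvs.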




Also, trying to remove a vdo volume converted from raid1 (without the rename) shows a confusing message:

# lvconvert --vdopool vg/lvol0 -V10G
  WARNING: Converting logical volume lvol0 to VDO pool volume.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
Do you really want to convert lvol0? [y/n]: y
  device-mapper: remove ioctl on  (253:6) failed: Device or resource busy
  Logical volume "lvol1" created.
  Converted vg/lvol0 to VDO pool volume and created virtual vg/lvol1 VDO volume.

# lvremove -y vg/lvol0
  Logical volume "lvol1" successfully removed

Both the vdo pool and the vdo volume are in fact removed, but we're informed only about the removal of one (different) device.
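
For clarity, a sketch of removing the two volumes explicitly in dependency order, so each removal is reported separately (the output shown is what lvremove would be expected to print, not captured from the run above):

# lvremove -y vg/lvol1
  Logical volume "lvol1" successfully removed
# lvremove -y vg/lvol0
  Logical volume "lvol0" successfully removed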

lvm2-2.03.02-6.el8.x86_64

Comment 1 Zdenek Kabelac 2020-09-23 13:48:47 UTC
So this will require a kernel patch enhancement (opened upstream bug #1881955).

For the moment, users may not do online lvrename of VDO pool LVs

(enforced with upstream patch:
https://www.redhat.com/archives/lvm-devel/2020-September/msg00143.html).

As a workaround, the user needs to deactivate the VDO and VDO pool LVs first, run lvrename, and then activate them again, as sketched below.
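
A minimal sketch of that workaround, using the names from the description (whether reactivating the VDO volume also brings the pool up underneath it is an assumption, so both are deactivated explicitly):

# lvchange -an vg/lvol1           (deactivate the VDO volume)
# lvchange -an vg/lvol0           (deactivate the VDO pool, if still active)
# lvrename vg/lvol0 vg/vdo_pool   (rename while offline)
# lvchange -ay vg/lvol1           (reactivate the VDO volume and its pool)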

Comment 9 Zdenek Kabelac 2021-01-22 21:49:29 UTC
Pushed upstream enhancements:

main change:
https://www.redhat.com/archives/lvm-devel/2021-January/msg00019.html

Introduces 'vdo_disabled_features', which can be used to disable the new 'online_rename' feature supported with the new kvdo 6.2.3 (a configuration sketch follows below).
With an older kvdo module, online rename remains disabled.
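
For illustration, disabling the feature via lvm.conf might look like this (a sketch: the setting name comes from the patch above, but its placement in the global section is an assumption, by analogy with thin_disabled_features; verify with 'lvmconfig --type full | grep vdo'):

global {
        # Assumed placement; disables online renaming of VDO pools.
        vdo_disabled_features = [ "online_rename" ]
}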

associated fixes:
https://www.redhat.com/archives/lvm-devel/2021-January/msg00017.html

Fixes removal of _pmspare when the VDO pool is cached; without the fix, 'vgremove' on such a VG can fail with an assert().
https://www.redhat.com/archives/lvm-devel/2021-January/msg00020.html


Tested with:

https://www.redhat.com/archives/lvm-devel/2021-January/msg00018.html

Checks rename with cached and raid VDO pool volumes.

Comment 13 Corey Marthaler 2021-06-02 16:31:45 UTC
The original scenario listed in comment #0 no longer appears to cause "unknown errors (X)". Marking Verified:Tested in the latest rpms.

kernel-4.18.0-310.el8    BUILT: Thu May 27 14:24:00 CDT 2021
lvm2-2.03.12-2.el8    BUILT: Tue Jun  1 06:55:37 CDT 2021
lvm2-libs-2.03.12-2.el8    BUILT: Tue Jun  1 06:55:37 CDT 2021


[root@hayes-01 ~]# lvcreate --type raid1 -L40G vg
  Logical volume "lvol0" created.
[root@hayes-01 ~]# lvs -a
  LV               VG Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lvol0            vg rwi-a-r--- 40.00g                                    4.61            
  [lvol0_rimage_0] vg Iwi-aor--- 40.00g                                                    
  [lvol0_rimage_1] vg Iwi-aor--- 40.00g                                                    
  [lvol0_rmeta_0]  vg ewi-aor---  4.00m                                                    
  [lvol0_rmeta_1]  vg ewi-aor---  4.00m                                                    

[root@hayes-01 ~]#  lvconvert --vdopool vg/lvol0 -V10G
  WARNING: Converting logical volume vg/lvol0 to VDO pool volume with formating.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
Do you really want to convert vg/lvol0? [y/n]: y
    The VDO volume can address 36 GB in 18 data slabs, each 2 GB.
    It can grow to address at most 16 TB of physical storage in 8192 slabs.
    If a larger maximum size might be needed, use bigger slabs.
  Logical volume "lvol1" created.
  Converted vg/lvol0 to VDO pool volume and created virtual vg/lvol1 VDO volume.
[root@hayes-01 ~]# lvs -a
  LV                     VG Attr       LSize  Pool  Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lvol0                  vg dwi------- 40.00g              10.06                                  
  [lvol0_vdata]          vg rwi-aor--- 40.00g                                     18.84           
  [lvol0_vdata_rimage_0] vg iwi-aor--- 40.00g                                                     
  [lvol0_vdata_rimage_1] vg iwi-aor--- 40.00g                                                     
  [lvol0_vdata_rmeta_0]  vg ewi-aor---  4.00m                                                     
  [lvol0_vdata_rmeta_1]  vg ewi-aor---  4.00m                                                     
  lvol1                  vg vwi-a-v--- 10.00g lvol0        0.00                                   

[root@hayes-01 ~]# lvrename vg/lvol0 vg/vdo_pool
  Renamed "lvol0" to "vdo_pool" in volume group "vg"
[root@hayes-01 ~]# lvs -a
  LV                        VG Attr       LSize  Pool     Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lvol1                     vg vwi-a-v--- 10.00g vdo_pool        0.00                                   
  vdo_pool                  vg dwi------- 40.00g                 10.06                                  
  [vdo_pool_vdata]          vg rwi-aor--- 40.00g                                        34.16           
  [vdo_pool_vdata_rimage_0] vg Iwi-aor--- 40.00g                                                        
  [vdo_pool_vdata_rimage_1] vg Iwi-aor--- 40.00g                                                        
  [vdo_pool_vdata_rmeta_0]  vg ewi-aor---  4.00m                                                        
  [vdo_pool_vdata_rmeta_1]  vg ewi-aor---  4.00m                                                        

[root@hayes-01 ~]# lvconvert --vdopool vg/lvol0 -V10G
  Failed to find logical volume "vg/lvol0"
[root@hayes-01 ~]# lvconvert --vdopool vg/lvol1 -V10G
  Command on LV vg/lvol1 is invalid on LV with properties: lv_is_virtual .
  Command not permitted on LV vg/lvol1.
[root@hayes-01 ~]# lvs
  LV       VG Attr       LSize  Pool     Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lvol1    vg vwi-a-v--- 10.00g vdo_pool        0.00                                   
  vdo_pool vg dwi------- 40.00g                 10.06                                  
[root@hayes-01 ~]# lvremove -y vg/lvol0
  Failed to find logical volume "vg/lvol0"
[root@hayes-01 ~]# lvs
  LV       VG Attr       LSize  Pool     Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lvol1    vg vwi-a-v--- 10.00g vdo_pool        0.00                                   
  vdo_pool vg dwi------- 40.00g                 10.06                                  
[root@hayes-01 ~]# lvremove -y vg/lvol1
  Logical volume "lvol1" successfully removed.
[root@hayes-01 ~]# lvs
[root@hayes-01 ~]#

Comment 16 errata-xmlrpc 2021-11-09 19:45:20 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (lvm2 bug fix and enhancement update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:4431