Bug 1095843 - LVM cache: Not all sub-LVs renamed when caching a RAID LV
Summary: LVM cache: Not all sub-LVs renamed when caching a RAID LV
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: lvm2
Version: 6.5
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: Jonathan Earl Brassow
QA Contact: Cluster QE
URL:
Whiteboard:
Depends On:
Blocks: BrassowRHEL6Bugs 1103381
 
Reported: 2014-05-08 16:04 UTC by Jonathan Earl Brassow
Modified: 2014-10-14 08:25 UTC
CC List: 9 users

Fixed In Version: lvm2-2.02.107-1.el6
Doc Type: Bug Fix
Doc Text:
No documentation needed.
Clone Of:
Cloned To: 1103381
Environment:
Last Closed: 2014-10-14 08:25:13 UTC
Target Upstream Version:
Embargoed:


Attachments: none


Links
System: Red Hat Product Errata
ID: RHBA-2014:1387
Private: 0
Priority: normal
Status: SHIPPED_LIVE
Summary: lvm2 bug fix and enhancement update
Last Updated: 2014-10-14 01:39:47 UTC

Description Jonathan Earl Brassow 2014-05-08 16:04:46 UTC
When caching a RAID LV, the sub-LVs of that RAID do not get properly renamed.

[root@bp-01 ~]# lvcreate --type raid1 -L 10G --nosync -n lv vg
  WARNING: New raid1 won't be synchronised. Don't read what you didn't write!
  Logical volume "lv" created
[root@bp-01 ~]# devices vg
  LV            Attr       Cpy%Sync Devices                      
  lv            Rwi-a-r---   100.00 lv_rimage_0(0),lv_rimage_1(0)
  [lv_rimage_0] iwi-aor---          /dev/sdb1(1)                 
  [lv_rimage_1] iwi-aor---          /dev/sdc1(1)                 
  [lv_rmeta_0]  ewi-aor---          /dev/sdb1(0)                 
  [lv_rmeta_1]  ewi-aor---          /dev/sdc1(0)                 
[root@bp-01 ~]# lvcreate --type cache -L 1G -n lv_cachepool vg/lv
  Logical volume "lvol0" created
  Logical volume "lv" created
[root@bp-01 ~]# devices vg
  LV                   Attr       Cpy%Sync Devices                      
  lv                   Cwi-a-C---          lv_corig(0)                  
  lv_cachepool         Cwi---C---          lv_cachepool_cdata(0)        
  [lv_cachepool_cdata] Cwi---C---          /dev/sdb1(2563)              
  [lv_cachepool_cmeta] ewi---C---          /dev/sdb1(2562)              
  [lv_corig]           rwi---r---          lv_rimage_0(0),lv_rimage_1(0)
  [lv_rimage_0]        Iwi-aor-r-          /dev/sdb1(1)                 
  [lv_rimage_1]        Iwi-aor-r-          /dev/sdc1(1)                 
  [lv_rmeta_0]         ewi-aor-r-          /dev/sdb1(0)                 
  [lv_rmeta_1]         ewi-aor-r-          /dev/sdc1(0)                 
  [lvol0_pmspare]      ewi-------          /dev/sdb1(2561)

Comment 2 Corey Marthaler 2014-06-05 20:32:56 UTC
Just to be clear, the sub-LVs should be renamed to 'lv_corig_rimage_[01]', much like RAID pool sub-LVs are renamed to '_cdata_rimage_[01]'?
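(For reference, the '_cdata_rimage' convention being compared against can be sketched as follows. This is a hypothetical illustration, not a transcript from this bug; the VG and LV names are assumed. Converting a RAID LV into a cache pool moves its image and metadata sub-LVs under the pool's '_cdata' sub-LV:)

# hypothetical sketch of the existing '_cdata' renaming convention
lvcreate --type raid1 -m 1 -L 100M -n pool vg
lvconvert --type cache-pool vg/pool
lvs -a -o lv_name vg
# roughly expected hidden sub-LVs after the conversion:
#   [pool_cdata], [pool_cdata_rimage_0], [pool_cdata_rimage_1],
#   [pool_cdata_rmeta_0], [pool_cdata_rmeta_1], [pool_cmeta], [lvol0_pmspare]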

[root@host-073 ~]# lvcreate -n O -L 400M --type raid1 -m 1 vg
  Logical volume "O" created
[root@host-073 ~]# lvcreate -n P -L 100M vg
  Logical volume "P" created

[root@host-073 ~]# lvs -a -o +devices
LV           Attr       LSize   Pool Origin Cpy%Sync Devices
O            rwi-a-r--- 400.00m               100.00 O_rimage_0(0),O_rimage_1(0)
[O_rimage_0] iwi-aor--- 400.00m                      /dev/sda1(1)
[O_rimage_1] iwi-aor--- 400.00m                      /dev/sdb1(1)
[O_rmeta_0]  ewi-aor---   4.00m                      /dev/sda1(0)
[O_rmeta_1]  ewi-aor---   4.00m                      /dev/sdb1(0)
P            -wi-a----- 100.00m                      /dev/sda1(101)

[root@host-073 ~]# lvconvert --type cache-pool --cachemode writethrough vg/P
  Rounding up size to full physical extent 8.00 MiB
  Logical volume "P_cmeta" created
  Logical volume "lvol0" created
  Converted vg/P to cache pool.

[root@host-073 ~]# lvs -a -o +devices
LV              Attr       LSize   Pool Origin Cpy%Sync Devices
O               rwi-a-r--- 400.00m               100.00 O_rimage_0(0),O_rimage_1(0)
[O_rimage_0]    iwi-aor--- 400.00m                      /dev/sda1(1)
[O_rimage_1]    iwi-aor--- 400.00m                      /dev/sdb1(1)
[O_rmeta_0]     ewi-aor---   4.00m                      /dev/sda1(0)
[O_rmeta_1]     ewi-aor---   4.00m                      /dev/sdb1(0)
P               Cwi-a-C--- 100.00m                      P_cdata(0)
[P_cdata]       Cwi-a-C--- 100.00m                      /dev/sda1(101)
[P_cmeta]       ewi-a-C---   8.00m                      /dev/sda1(126)
[lvol0_pmspare] ewi-------   8.00m                      /dev/sda1(128)

[root@host-073 ~]# lvconvert --type cache --cachepool vg/P vg/O
  vg/O is now cached.

[root@host-073 ~]# lvs -a -o +devices
LV              Attr       LSize   Pool Origin    Cpy%Sync Devices
O               Cwi-a-C--- 400.00m P    [O_corig]          O_corig(0)
[O_corig]       rwi-aor--- 400.00m                  100.00 O_rimage_0(0),O_rimage_1(0)
[O_rimage_0]    iwi-aor--- 400.00m                         /dev/sda1(1)
[O_rimage_1]    iwi-aor--- 400.00m                         /dev/sdb1(1)
[O_rmeta_0]     ewi-aor---   4.00m                         /dev/sda1(0)
[O_rmeta_1]     ewi-aor---   4.00m                         /dev/sdb1(0)
P               Cwi-a-C--- 100.00m                         P_cdata(0)
[P_cdata]       Cwi-aoC--- 100.00m                         /dev/sda1(101)
[P_cmeta]       ewi-aoC---   8.00m                         /dev/sda1(126)
[lvol0_pmspare] ewi-------   8.00m                         /dev/sda1(128)

Comment 3 Jonathan Earl Brassow 2014-06-16 21:49:45 UTC
Corey, yes.

(Note that the 'lvcreate' of the origin followed by a single 'lvcreate' of cache-pool + cache, as in comment 0, is broken at the moment; the method given in comment 2 must be used for testing.)
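Condensed, with the same vg/lv names used elsewhere in this bug, the broken one-step flow versus the working two-step flow looks like this (a sketch rather than a verified transcript):

# one-step creation of pool + cache against the RAID LV (the comment 0 form; currently broken)
lvcreate --type cache -L 1G -n lv_cachepool vg/lv

# working path: create the cache pool separately, then attach it to the RAID LV
lvcreate --type cache-pool -l 1 -n lv_cachepool vg
lvconvert --type cache --cachepool lv_cachepool vg/lv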

BEFORE PATCH:
[root@bp-01 lvm2]# lvcreate --type raid1 -m 1 -l2 -n lv vg
  Logical volume "lv" created
[root@bp-01 lvm2]# lvcreate --type cache-pool -l 1 -n lv_cachepool vg
  Rounding up size to full physical extent 8.00 MiB
  Logical volume "lvol0" created
  Logical volume "lv_cachepool" created
[root@bp-01 lvm2]# lvconvert --type cache --cachepool lv_cachepool vg/lv
  vg/lv is now cached.
[root@bp-01 lvm2]# devices vg
  LV                   Attr       Cpy%Sync Devices                      
  lv                   Cwi-a-C---          lv_corig(0)                  
  lv_cachepool         Cwi-a-C---          lv_cachepool_cdata(0)        
  [lv_cachepool_cdata] Cwi-aoC---          /dev/sdb1(7)                 
  [lv_cachepool_cmeta] ewi-aoC---          /dev/sdb1(5)                 
  [lv_corig]           rwi-aor---   100.00 lv_rimage_0(0),lv_rimage_1(0)
  [lv_rimage_0]        iwi-aor---          /dev/sdb1(1)                 
  [lv_rimage_1]        iwi-aor---          /dev/sdc1(1)                 
  [lv_rmeta_0]         ewi-aor---          /dev/sdb1(0)                 
  [lv_rmeta_1]         ewi-aor---          /dev/sdc1(0)                 
  [lvol0_pmspare]      ewi-------          /dev/sdb1(3)                 

AFTER PATCH:
[root@bp-01 lvm2]# lvcreate --type raid1 -m 1 -l2 -n lv vg
  Logical volume "lv" created
[root@bp-01 lvm2]# lvcreate --type cache-pool -l 1 -n lv_cachepool vg
  Rounding up size to full physical extent 8.00 MiB
  Logical volume "lvol0" created
  Logical volume "lv_cachepool" created
[root@bp-01 lvm2]# lvconvert --type cache --cachepool lv_cachepool vg/lv
  vg/lv is now cached.
[root@bp-01 lvm2]# devices vg
  LV                   Attr       Cpy%Sync Devices                                  
  lv                   Cwi-a-C---          lv_corig(0)                              
  lv_cachepool         Cwi-a-C---          lv_cachepool_cdata(0)                    
  [lv_cachepool_cdata] Cwi-aoC---          /dev/sdb1(7)                             
  [lv_cachepool_cmeta] ewi-aoC---          /dev/sdb1(5)                             
  [lv_corig]           rwi-aor---   100.00 lv_corig_rimage_0(0),lv_corig_rimage_1(0)
  [lv_corig_rimage_0]  iwi-aor---          /dev/sdb1(1)                             
  [lv_corig_rimage_1]  iwi-aor---          /dev/sdc1(1)                             
  [lv_corig_rmeta_0]   ewi-aor---          /dev/sdb1(0)                             
  [lv_corig_rmeta_1]   ewi-aor---          /dev/sdc1(0)                             
  [lvol0_pmspare]      ewi-------          /dev/sdb1(3)

Comment 4 Jonathan Earl Brassow 2014-06-16 23:17:44 UTC
Fix committed upstream:
commit 962a40b98134417f27e89709625ba2ec662204c2
Author: Jonathan Brassow <jbrassow>
Date:   Mon Jun 16 18:15:39 2014 -0500

    cache: Properly rename origin LV tree when adding "_corig"
    
    When creating a cache LV with a RAID origin, we need to ensure that
    the sub-LVs of that origin properly change their names to include
    the "_corig" extention of the top-level LV.  We do this by first
    performing a 'lv_rename_update' before making the call to
    'insert_layer_for_lv'.
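On a build carrying this commit, the rename can be spot-checked directly; a minimal sketch, assuming the same vg/lv setup as the transcripts in comment 3 (the grep pattern is only illustrative):

lvconvert --type cache --cachepool lv_cachepool vg/lv
# every RAID sub-LV under the cache origin should now carry the '_corig' infix
lvs -a -o lv_name vg | grep corig
# expected (shown bracketed as hidden LVs): lv_corig, lv_corig_rimage_0/1, lv_corig_rmeta_0/1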

Comment 6 Nenad Peric 2014-07-10 09:00:17 UTC
Is there a specific reason why the segment type is called 'cache-pool', while referring to that pool in lvconvert has to use '--cachepool'?

lvcreate --type cache-pool -n lv_cache_name ...

lvconvert --type cache --cachepool lv_cache_name ...

The command does not accept --type cachepool as a recognized segtype. 
Why couldn't both arguments use the same name? 'cachepool' for example?
meaning:

lvcreate --type cachepool -n name ... 
lvconvert --type cache --cachepool name ...

It would be one less thing to worry about when using this capability.

Comment 7 Nenad Peric 2014-07-10 09:02:33 UTC
Anyway, the naming of the sub-LVs went as described.

[root@virt-015 ~]# lvs -a
  LV            VG         Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lv            vg         rwi-a-r--- 200.00m                                    100.00          
  [lv_rimage_0] vg         iwi-aor--- 200.00m                                                    
  [lv_rimage_1] vg         iwi-aor--- 200.00m                                                    
  [lv_rimage_2] vg         iwi-aor--- 200.00m                                                    
  [lv_rimage_3] vg         iwi-aor--- 200.00m                                                    
  [lv_rmeta_0]  vg         ewi-aor---   4.00m                                                    
  [lv_rmeta_1]  vg         ewi-aor---   4.00m                                                    
  [lv_rmeta_2]  vg         ewi-aor---   4.00m                                                    
  [lv_rmeta_3]  vg         ewi-aor---   4.00m                                                    
  lv_root       vg_virt015 -wi-ao----   6.71g                                                    
  lv_swap       vg_virt015 -wi-ao---- 816.00m                                                    
[root@virt-015 ~]# lvcreate --type cachepool -l 5 -n cache_pool vg
  WARNING: Unrecognised segment type cachepool
  Unable to create LV with unknown segment type cachepool.
  Run `lvcreate --help' for more information.
[root@virt-015 ~]# lvcreate --type cache-pool -l 5 -n cache_pool vg
  Rounding up size to full physical extent 8.00 MiB
  Logical volume "lvol0" created
  Logical volume "cache_pool" created
[root@virt-015 ~]# lvs -a
  LV                 VG         Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  cache_pool         vg         Cwi---C---  20.00m                                                    
  [cache_pool_cdata] vg         Cwi---C---  20.00m                                                    
  [cache_pool_cmeta] vg         ewi---C---   8.00m                                                    
  lv                 vg         rwi-a-r--- 200.00m                                    100.00          
  [lv_rimage_0]      vg         iwi-aor--- 200.00m                                                    
  [lv_rimage_1]      vg         iwi-aor--- 200.00m                                                    
  [lv_rimage_2]      vg         iwi-aor--- 200.00m                                                    
  [lv_rimage_3]      vg         iwi-aor--- 200.00m                                                    
  [lv_rmeta_0]       vg         ewi-aor---   4.00m                                                    
  [lv_rmeta_1]       vg         ewi-aor---   4.00m                                                    
  [lv_rmeta_2]       vg         ewi-aor---   4.00m                                                    
  [lv_rmeta_3]       vg         ewi-aor---   4.00m                                                    
  [lvol0_pmspare]    vg         ewi-------   8.00m                                                    
  lv_root            vg_virt015 -wi-ao----   6.71g                                                    
  lv_swap            vg_virt015 -wi-ao---- 816.00m                                                    
[root@virt-015 ~]# lvconvert --type cache --cachepool cache_pool vg/lv
  vg/lv is now cached.
[root@virt-015 ~]# lvs -a
  LV                  VG         Attr       LSize   Pool       Origin     Data%  Meta%  Move Log Cpy%Sync Convert
  cache_pool          vg         Cwi---C---  20.00m                                                              
  [cache_pool_cdata]  vg         Cwi-aoC---  20.00m                                                              
  [cache_pool_cmeta]  vg         ewi-aoC---   8.00m                                                              
  lv                  vg         Cwi-a-C--- 200.00m cache_pool [lv_corig]                                        
  [lv_corig]          vg         rwi-aor--- 200.00m                                              100.00          
  [lv_corig_rimage_0] vg         iwi-aor--- 200.00m                                                              
  [lv_corig_rimage_1] vg         iwi-aor--- 200.00m                                                              
  [lv_corig_rimage_2] vg         iwi-aor--- 200.00m                                                              
  [lv_corig_rimage_3] vg         iwi-aor--- 200.00m                                                              
  [lv_corig_rmeta_0]  vg         ewi-aor---   4.00m                                                              
  [lv_corig_rmeta_1]  vg         ewi-aor---   4.00m                                                              
  [lv_corig_rmeta_2]  vg         ewi-aor---   4.00m                                                              
  [lv_corig_rmeta_3]  vg         ewi-aor---   4.00m                                                              
  [lvol0_pmspare]     vg         ewi-------   8.00m                                                              
  lv_root             vg_virt015 -wi-ao----   6.71g                                                              
  lv_swap             vg_virt015 -wi-ao---- 816.00m                                                              
[root@virt-015 ~]# 


I am marking the corrected behaviour as VERIFIED with:

lvm2-2.02.107-1.el6    BUILT: Mon Jun 23 16:44:45 CEST 2014
lvm2-libs-2.02.107-1.el6    BUILT: Mon Jun 23 16:44:45 CEST 2014
lvm2-cluster-2.02.107-1.el6    BUILT: Mon Jun 23 16:44:45 CEST 2014
udev-147-2.55.el6    BUILT: Wed Jun 18 13:30:21 CEST 2014
device-mapper-1.02.86-1.el6    BUILT: Mon Jun 23 16:44:45 CEST 2014
device-mapper-libs-1.02.86-1.el6    BUILT: Mon Jun 23 16:44:45 CEST 2014
device-mapper-event-1.02.86-1.el6    BUILT: Mon Jun 23 16:44:45 CEST 2014
device-mapper-event-libs-1.02.86-1.el6    BUILT: Mon Jun 23 16:44:45 CEST 2014
device-mapper-persistent-data-0.3.2-1.el6    BUILT: Fri Apr  4 15:43:06 CEST 2014
cmirror-2.02.107-1.el6    BUILT: Mon Jun 23 16:44:45 CEST 2014

Comment 8 errata-xmlrpc 2014-10-14 08:25:13 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2014-1387.html

