Bug 1005190 - cling_by_tags not honored in raid LV configurations
Status: CLOSED CURRENTRELEASE
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: lvm2
Version: 7.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assigned To: Jonathan Earl Brassow
QA Contact: Cluster QE
Blocks: 1008012
Reported: 2013-09-06 07:51 EDT by Nenad Peric
Modified: 2014-06-17 21:19 EDT (History)
CC: 9 users

Fixed In Version: lvm2-2.02.103-1.el7
Doc Type: Bug Fix
Clones: 1008012
Last Closed: 2014-06-13 09:28:14 EDT
Type: Bug


Attachments
Verbose output of lvextend (155.47 KB, text/x-log)
2013-09-06 07:55 EDT, Nenad Peric

Description Nenad Peric 2013-09-06 07:51:57 EDT
Description of problem:

When extending a raid4 LV that sits on tagged PVs using the cling_by_tags allocation policy, the extension allocates from disks with the wrong tags (not the ones already used in the LV), even though the correctly tagged drives had enough free PEs.

Version-Release number of selected component (if applicable):

lvm2-2.02.101-0.145.el7.x86_64


How reproducible:

Every time

Steps to reproduce:

vgcreate raid_sanity /dev/sd{a..j}1

pvchange --addtag A /dev/sd{a..c}1
pvchange --addtag B /dev/sd{d..f}1
pvchange --addtag C /dev/sd{g..i}1

Create LV on 'B' tagged PVs:

lvcreate --type raid4 -i 2 -n cling_raid --alloc cling_by_tags -l 2763 raid_sanity /dev/sdd1 /dev/sde1 /dev/sdf1

Check to see where the PEs are taken from:
pvs -o pv_name,pv_tags,pv_pe_alloc_count,pv_pe_count

Extend the LV

lvextend -l 3000 --alloc cling_by_tags raid_sanity/cling_raid

Check which PVs were taken:

pvs -o pv_name,pv_tags,pv_pe_alloc_count,pv_pe_count
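The pvs output can also be screened automatically for PVs that received extents but lack the expected tag. A rough sketch (the sample lines are hard-coded to mirror the output in this report so the snippet is self-contained; 'B' is the expected tag):

```shell
# Flag PVs that have allocated extents but are not tagged 'B'.
# In practice, feed it live data instead:
#   pvs --noheadings -o pv_name,pv_tags,pv_pe_alloc_count | awk ...
pvs_output='/dev/sda1  A   118
/dev/sdd1  B  1383
/dev/sdj1        0'

misallocated=$(echo "$pvs_output" | awk '{
    tag   = (NF == 3) ? $2 : ""   # untagged PVs have only 2 fields
    alloc = $NF
    if (alloc + 0 > 0 && tag != "B")
        print $1
}')
echo "misallocated: $misallocated"
```

Any PV name printed here indicates extents taken outside the intended tag.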


Actual results:

[root@virt-129 ~]# lvextend -l 3000 --alloc cling_by_tags raid_sanity/cling_raid
  Using stripesize of last segment 64.00 KiB
  Extending logical volume cling_raid to 11.72 GiB
  Logical volume cling_raid successfully resized
[root@virt-129 ~]# pvs -o pv_name,pv_tags,pv_pe_alloc_count,pv_pe_count
  PV         PV Tags Alloc PE  
  /dev/sda1  A         118 3070
  /dev/sdb1  A         118 3070
  /dev/sdc1  A         118 3070
  /dev/sdd1  B        1383 3070
  /dev/sde1  B        1383 3070
  /dev/sdf1  B        1383 3070
  /dev/sdg1  C           0 3070
  /dev/sdh1  C           0 3070
  /dev/sdi1  C           0 3070
  /dev/sdj1              0 3070
  /dev/vda2           1922 1922


It takes PEs from the 'A'-tagged PVs, even though there are plenty of free PEs left on the 'B' PVs.

Expected results:

It should use only PVs carrying the same tag as those already in the LV (i.e. 'B', since the last segment of the existing LV resides on them).

Additional information:

cling_tag_list is not defined in lvm.conf (it is commented out)
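For context, the cling policies can also be steered explicitly via allocation/cling_tag_list in lvm.conf; a minimal fragment (not used in this reproduction, shown only for reference) would look like:

```
allocation {
    # Treat PVs tagged A or B as cling targets when extending an LV.
    cling_tag_list = [ "@A", "@B" ]
}
```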
Comment 1 Nenad Peric 2013-09-06 07:55:45 EDT
Created attachment 794673 [details]
Verbose output of lvextend
Comment 3 Jonathan Earl Brassow 2013-09-24 22:37:15 EDT
Fix committed upstream (2 commits):
commit c37c59e155813545c2e674eb370a4609e97aa769
Author: Jonathan Brassow <jbrassow@redhat.com>
Date:   Tue Sep 24 21:32:53 2013 -0500

    Test/clean-up: Indent clean-up and additional RAID resize test
    
    Better indenting and a test for bug 1005434 (parity RAID should
    extend in a contiguous fashion).

commit 5ded7314ae00629da8d21d925c3fa091cce2a939
Author: Jonathan Brassow <jbrassow@redhat.com>
Date:   Tue Sep 24 21:32:10 2013 -0500

    RAID: Fix broken allocation policies for parity RAID types
    
    A previous commit (b6bfddcd0a830d0c9312bc3ab906cb3d1b7a6dd9) which
    was designed to prevent segfaults during lvextend when trying to
    extend striped logical volumes forgot to include calculations for
    RAID4/5/6 parity devices.  This was causing the 'contiguous' and
    'cling_by_tags' allocation policies to fail for RAID 4/5/6.
    
    The solution is to remember that while we can compare
        ah->area_count == prev_lvseg->area_count
    for non-RAID, we should compare
        (ah->area_count + ah->parity_count) == prev_lvseg->area_count
    for a general solution.
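The arithmetic in the commit message can be illustrated with a quick shell sketch (not lvm2 source; values are those of the raid4 LV in this report, 2 data stripes plus 1 parity device):

```shell
# Old vs. fixed area-count comparison for cling/contiguous allocation.
area_count=2        # ah->area_count (data areas requested)
parity_count=1      # ah->parity_count (ignored by the broken check)
prev_area_count=3   # prev_lvseg->area_count (stripes + parity)

old_check=$(( area_count == prev_area_count ))                  # parity ignored
new_check=$(( (area_count + parity_count) == prev_area_count )) # parity counted
echo "old=$old_check new=$new_check"
```

The old comparison never matches for parity RAID, so the cling_by_tags and contiguous policies silently failed; the fixed comparison matches and the policies are honored.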
Comment 5 Corey Marthaler 2014-03-26 19:03:24 EDT
Fix verified in the latest rpms.


3.10.0-113.el7.x86_64
lvm2-2.02.105-14.el7    BUILT: Wed Mar 26 08:29:41 CDT 2014
lvm2-libs-2.02.105-14.el7    BUILT: Wed Mar 26 08:29:41 CDT 2014
lvm2-cluster-2.02.105-14.el7    BUILT: Wed Mar 26 08:29:41 CDT 2014
device-mapper-1.02.84-14.el7    BUILT: Wed Mar 26 08:29:41 CDT 2014
device-mapper-libs-1.02.84-14.el7    BUILT: Wed Mar 26 08:29:41 CDT 2014
device-mapper-event-1.02.84-14.el7    BUILT: Wed Mar 26 08:29:41 CDT 2014
device-mapper-event-libs-1.02.84-14.el7    BUILT: Wed Mar 26 08:29:41 CDT 2014
device-mapper-persistent-data-0.2.8-5.el7    BUILT: Fri Feb 28 19:15:56 CST 2014
cmirror-2.02.105-14.el7    BUILT: Wed Mar 26 08:29:41 CDT 2014



[root@host-049 ~]# vgcreate raid_sanity /dev/sd{a..h}1
  Volume group "raid_sanity" successfully created
[root@host-049 ~]# pvscan
  PV /dev/sda1   VG raid_sanity     lvm2 [9.99 GiB / 9.99 GiB free]
  PV /dev/sdb1   VG raid_sanity     lvm2 [9.99 GiB / 9.99 GiB free]
  PV /dev/sdc1   VG raid_sanity     lvm2 [9.99 GiB / 9.99 GiB free]
  PV /dev/sdd1   VG raid_sanity     lvm2 [9.99 GiB / 9.99 GiB free]
  PV /dev/sde1   VG raid_sanity     lvm2 [9.99 GiB / 9.99 GiB free]
  PV /dev/sdf1   VG raid_sanity     lvm2 [9.99 GiB / 9.99 GiB free]
  PV /dev/sdg1   VG raid_sanity     lvm2 [9.99 GiB / 9.99 GiB free]
  PV /dev/sdh1   VG raid_sanity     lvm2 [9.99 GiB / 9.99 GiB free]
[root@host-049 ~]# pvchange --addtag A /dev/sd{a..c}1
  Physical volume "/dev/sda1" changed
  Physical volume "/dev/sdb1" changed
  Physical volume "/dev/sdc1" changed
  3 physical volumes changed / 0 physical volumes not changed
[root@host-049 ~]# pvchange --addtag B /dev/sd{d..f}1
  Physical volume "/dev/sdd1" changed
  Physical volume "/dev/sde1" changed
  Physical volume "/dev/sdf1" changed
  3 physical volumes changed / 0 physical volumes not changed
[root@host-049 ~]# pvchange --addtag C /dev/sd{g..h}1
  Physical volume "/dev/sdg1" changed
  Physical volume "/dev/sdh1" changed
  2 physical volumes changed / 0 physical volumes not changed
[root@host-049 ~]# pvs -a -o +pv_tags
  PV            VG            Fmt  Attr PSize PFree PV Tags
  /dev/sda1     raid_sanity   lvm2 a--  9.99g 9.99g A      
  /dev/sdb1     raid_sanity   lvm2 a--  9.99g 9.99g A      
  /dev/sdc1     raid_sanity   lvm2 a--  9.99g 9.99g A      
  /dev/sdd1     raid_sanity   lvm2 a--  9.99g 9.99g B      
  /dev/sde1     raid_sanity   lvm2 a--  9.99g 9.99g B      
  /dev/sdf1     raid_sanity   lvm2 a--  9.99g 9.99g B      
  /dev/sdg1     raid_sanity   lvm2 a--  9.99g 9.99g C      
  /dev/sdh1     raid_sanity   lvm2 a--  9.99g 9.99g C      
[root@host-049 ~]# lvcreate --type raid4 -i 2 -n cling_raid --alloc cling_by_tags -l 2763 raid_sanity /dev/sdd1 /dev/sde1 /dev/sdf1
  Using default stripesize 64.00 KiB
  Rounding size (2763 extents) up to stripe boundary size (2764 extents).
  Logical volume "cling_raid" created
[root@host-049 ~]# pvs -o pv_name,pv_tags,pv_pe_alloc_count,pv_pe_count
  PV         PV Tags Alloc PE  
  /dev/sda1  A           0 2558
  /dev/sdb1  A           0 2558
  /dev/sdc1  A           0 2558
  /dev/sdd1  B        1383 2558
  /dev/sde1  B        1383 2558
  /dev/sdf1  B        1383 2558
  /dev/sdg1  C           0 2558
  /dev/sdh1  C           0 2558
[root@host-049 ~]# lvextend -l 3000 --alloc cling_by_tags raid_sanity/cling_raid
  Using stripesize of last segment 64.00 KiB
  Extending logical volume cling_raid to 11.72 GiB
  Logical volume cling_raid successfully resized
[root@host-049 ~]# pvs -o pv_name,pv_tags,pv_pe_alloc_count,pv_pe_count
  PV         PV Tags Alloc PE  
  /dev/sda1  A           0 2558
  /dev/sdb1  A           0 2558
  /dev/sdc1  A           0 2558
  /dev/sdd1  B        1501 2558
  /dev/sde1  B        1501 2558
  /dev/sdf1  B        1501 2558
  /dev/sdg1  C           0 2558
  /dev/sdh1  C           0 2558
Comment 6 Ludek Smid 2014-06-13 09:28:14 EDT
This request was resolved in Red Hat Enterprise Linux 7.0.

Contact your manager or support representative in case you have further questions about the request.
