Bug 1005190 - cling_by_tags not honored in raid LV configurations
Summary: cling_by_tags not honored in raid LV configurations
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: lvm2
Version: 7.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: LVM and device-mapper development team
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Depends On:
Blocks: 1008012
 
Reported: 2013-09-06 11:51 UTC by Nenad Peric
Modified: 2021-09-08 18:55 UTC
CC List: 9 users

Fixed In Version: lvm2-2.02.103-1.el7
Doc Type: Bug Fix
Doc Text:
Clone Of:
: 1008012
Environment:
Last Closed: 2014-06-13 13:28:14 UTC
Target Upstream Version:
Embargoed:


Attachments
Verbose output of lvextend (155.47 KB, text/x-log)
2013-09-06 11:55 UTC, Nenad Peric

Description Nenad Peric 2013-09-06 11:51:57 UTC
Description of problem:

When extending a raid4 LV that sits on tagged PVs with the cling_by_tags allocation policy, the extension allocates space from disks with the wrong tags (not the tags of the PVs already used by the LV), even though the correctly tagged drives still had enough free PEs.

Version-Release number of selected component (if applicable):

lvm2-2.02.101-0.145.el7.x86_64


How reproducible:

Every time

Steps to reproduce:

vgcreate raid_sanity /dev/sd{a..j}1

pvchange --addtag A /dev/sd{a..c}1
pvchange --addtag B /dev/sd{d..f}1
pvchange --addtag C /dev/sd{g..i}1

Create LV on 'B' tagged PVs:

lvcreate --type raid4 -i 2 -n cling_raid --alloc cling_by_tags -l 2763 raid_sanity /dev/sdd1 /dev/sde1 /dev/sdf1

Check to see where the PEs are taken from:
pvs -o pv_name,pv_tags,pv_pe_alloc_count,pv_pe_count

Extend the LV

lvextend -l 3000 --alloc cling_by_tags raid_sanity/cling_raid

Check which PVs were taken:

pvs -o pv_name,pv_tags,pv_pe_alloc_count,pv_pe_count
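
Optionally (not part of the original reproducer), list the per-image placement of the raid sub-LVs directly to see exactly which PVs the new extents landed on:

lvs -a -o +devices raid_sanity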


Actual results:

[root@virt-129 ~]# lvextend -l 3000 --alloc cling_by_tags raid_sanity/cling_raid
  Using stripesize of last segment 64.00 KiB
  Extending logical volume cling_raid to 11.72 GiB
  Logical volume cling_raid successfully resized
[root@virt-129 ~]# pvs -o pv_name,pv_tags,pv_pe_alloc_count,pv_pe_count
  PV         PV Tags Alloc PE  
  /dev/sda1  A         118 3070
  /dev/sdb1  A         118 3070
  /dev/sdc1  A         118 3070
  /dev/sdd1  B        1383 3070
  /dev/sde1  B        1383 3070
  /dev/sdf1  B        1383 3070
  /dev/sdg1  C           0 3070
  /dev/sdh1  C           0 3070
  /dev/sdi1  C           0 3070
  /dev/sdj1              0 3070
  /dev/vda2           1922 1922


The extension takes PEs from the 'A'-tagged PVs, even though there are more than enough free PEs left on the 'B'-tagged PVs.
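
(Arithmetic note, based on the layout above: growing from 2764 to 3000 extents adds 236 data extents, i.e. 118 per data image plus 118 for the parity image, and all 3 x 118 of those new extents ended up on the 'A'-tagged PVs instead of the 'B'-tagged ones.)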

Expected results:

The extension should use only PVs carrying the tag already present in the LV (i.e. 'B', since the last segment of the existing LV sits on the 'B'-tagged PVs).

Additional information:

cling_tag_list is not defined in lvm.conf (it is commented out)
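
For reference only (nothing beyond the defaults was configured for this test), cling_tag_list lives in the allocation section of lvm.conf; a hypothetical setting using this report's tags would look roughly like:

allocation {
    # illustrative only; left commented out during this reproduction
    cling_tag_list = [ "@A", "@B", "@C" ]
}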

Comment 1 Nenad Peric 2013-09-06 11:55:45 UTC
Created attachment 794673
Verbose output of lvextend

Comment 3 Jonathan Earl Brassow 2013-09-25 02:37:15 UTC
Fix committed upstream (2 commits):
commit c37c59e155813545c2e674eb370a4609e97aa769
Author: Jonathan Brassow <jbrassow>
Date:   Tue Sep 24 21:32:53 2013 -0500

    Test/clean-up: Indent clean-up and additional RAID resize test
    
    Better indenting and a test for bug 1005434 (parity RAID should
    extend in a contiguous fashion).

commit 5ded7314ae00629da8d21d925c3fa091cce2a939
Author: Jonathan Brassow <jbrassow>
Date:   Tue Sep 24 21:32:10 2013 -0500

    RAID: Fix broken allocation policies for parity RAID types
    
    A previous commit (b6bfddcd0a830d0c9312bc3ab906cb3d1b7a6dd9) which
    was designed to prevent segfaults during lvextend when trying to
    extend striped logical volumes forgot to include calculations for
    RAID4/5/6 parity devices.  This was causing the 'contiguous' and
    'cling_by_tags' allocation policies to fail for RAID 4/5/6.
    
    The solution is to remember that while we can compare
        ah->area_count == prev_lvseg->area_count
    for non-RAID, we should compare
        (ah->area_count + ah->parity_count) == prev_lvseg->area_count
    for a general solution.
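
    (Worked example, assuming the raid4 -i 2 layout from this report: the
    last segment of the existing LV has prev_lvseg->area_count = 3
    (2 data images + 1 parity image), while the allocation handle for the
    extension has ah->area_count = 2 and ah->parity_count = 1. The old
    comparison 2 == 3 fails, so the cling_by_tags restriction was not
    applied and extents landed on the 'A'-tagged PVs; the corrected
    comparison (2 + 1) == 3 holds and the policy is honored.)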

Comment 5 Corey Marthaler 2014-03-26 23:03:24 UTC
Fix verified in the latest rpms.


3.10.0-113.el7.x86_64
lvm2-2.02.105-14.el7    BUILT: Wed Mar 26 08:29:41 CDT 2014
lvm2-libs-2.02.105-14.el7    BUILT: Wed Mar 26 08:29:41 CDT 2014
lvm2-cluster-2.02.105-14.el7    BUILT: Wed Mar 26 08:29:41 CDT 2014
device-mapper-1.02.84-14.el7    BUILT: Wed Mar 26 08:29:41 CDT 2014
device-mapper-libs-1.02.84-14.el7    BUILT: Wed Mar 26 08:29:41 CDT 2014
device-mapper-event-1.02.84-14.el7    BUILT: Wed Mar 26 08:29:41 CDT 2014
device-mapper-event-libs-1.02.84-14.el7    BUILT: Wed Mar 26 08:29:41 CDT 2014
device-mapper-persistent-data-0.2.8-5.el7    BUILT: Fri Feb 28 19:15:56 CST 2014
cmirror-2.02.105-14.el7    BUILT: Wed Mar 26 08:29:41 CDT 2014



[root@host-049 ~]# vgcreate raid_sanity /dev/sd{a..h}1
  Volume group "raid_sanity" successfully created
[root@host-049 ~]# pvscan
  PV /dev/sda1   VG raid_sanity     lvm2 [9.99 GiB / 9.99 GiB free]
  PV /dev/sdb1   VG raid_sanity     lvm2 [9.99 GiB / 9.99 GiB free]
  PV /dev/sdc1   VG raid_sanity     lvm2 [9.99 GiB / 9.99 GiB free]
  PV /dev/sdd1   VG raid_sanity     lvm2 [9.99 GiB / 9.99 GiB free]
  PV /dev/sde1   VG raid_sanity     lvm2 [9.99 GiB / 9.99 GiB free]
  PV /dev/sdf1   VG raid_sanity     lvm2 [9.99 GiB / 9.99 GiB free]
  PV /dev/sdg1   VG raid_sanity     lvm2 [9.99 GiB / 9.99 GiB free]
  PV /dev/sdh1   VG raid_sanity     lvm2 [9.99 GiB / 9.99 GiB free]
[root@host-049 ~]# pvchange --addtag A /dev/sd{a..c}1
  Physical volume "/dev/sda1" changed
  Physical volume "/dev/sdb1" changed
  Physical volume "/dev/sdc1" changed
  3 physical volumes changed / 0 physical volumes not changed
[root@host-049 ~]# pvchange --addtag B /dev/sd{d..f}1
  Physical volume "/dev/sdd1" changed
  Physical volume "/dev/sde1" changed
  Physical volume "/dev/sdf1" changed
  3 physical volumes changed / 0 physical volumes not changed
[root@host-049 ~]# pvchange --addtag C /dev/sd{g..h}1
  Physical volume "/dev/sdg1" changed
  Physical volume "/dev/sdh1" changed
  2 physical volumes changed / 0 physical volumes not changed
[root@host-049 ~]# pvs -a -o +pv_tags
  PV            VG            Fmt  Attr PSize PFree PV Tags
  /dev/sda1     raid_sanity   lvm2 a--  9.99g 9.99g A      
  /dev/sdb1     raid_sanity   lvm2 a--  9.99g 9.99g A      
  /dev/sdc1     raid_sanity   lvm2 a--  9.99g 9.99g A      
  /dev/sdd1     raid_sanity   lvm2 a--  9.99g 9.99g B      
  /dev/sde1     raid_sanity   lvm2 a--  9.99g 9.99g B      
  /dev/sdf1     raid_sanity   lvm2 a--  9.99g 9.99g B      
  /dev/sdg1     raid_sanity   lvm2 a--  9.99g 9.99g C      
  /dev/sdh1     raid_sanity   lvm2 a--  9.99g 9.99g C      
[root@host-049 ~]# lvcreate --type raid4 -i 2 -n cling_raid --alloc cling_by_tags -l 2763 raid_sanity /dev/sdd1 /dev/sde1 /dev/sdf1
  Using default stripesize 64.00 KiB
  Rounding size (2763 extents) up to stripe boundary size (2764 extents).
  Logical volume "cling_raid" created
[root@host-049 ~]# pvs -o pv_name,pv_tags,pv_pe_alloc_count,pv_pe_count
  PV         PV Tags Alloc PE  
  /dev/sda1  A           0 2558
  /dev/sdb1  A           0 2558
  /dev/sdc1  A           0 2558
  /dev/sdd1  B        1383 2558
  /dev/sde1  B        1383 2558
  /dev/sdf1  B        1383 2558
  /dev/sdg1  C           0 2558
  /dev/sdh1  C           0 2558
[root@host-049 ~]# lvextend -l 3000 --alloc cling_by_tags raid_sanity/cling_raid
  Using stripesize of last segment 64.00 KiB
  Extending logical volume cling_raid to 11.72 GiB
  Logical volume cling_raid successfully resized
[root@host-049 ~]# pvs -o pv_name,pv_tags,pv_pe_alloc_count,pv_pe_count
  PV         PV Tags Alloc PE  
  /dev/sda1  A           0 2558
  /dev/sdb1  A           0 2558
  /dev/sdc1  A           0 2558
  /dev/sdd1  B        1501 2558
  /dev/sde1  B        1501 2558
  /dev/sdf1  B        1501 2558
  /dev/sdg1  C           0 2558
  /dev/sdh1  C           0 2558
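
(Arithmetic check on the verified output, assuming one raid metadata extent per image: each of the three raid images (2 data + 1 parity) grows from 1382 extents (2764 / 2) to 1500 (3000 / 2), and each PV also carries one extent for its rmeta sub-LV, which is why the 'B'-tagged PVs move from 1383 to 1501 allocated PEs while the 'A'- and 'C'-tagged PVs stay at 0.)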

Comment 6 Ludek Smid 2014-06-13 13:28:14 UTC
This request was resolved in Red Hat Enterprise Linux 7.0.

Contact your manager or support representative in case you have further questions about the request.

