Bug 1008012 - cling_by_tags not honored in raid LV configurations
Status: CLOSED ERRATA
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: lvm2
Version: 6.5
Hardware: x86_64 Linux
Priority: unspecified
Severity: high
Target Milestone: rc
Assigned To: Jonathan Earl Brassow
QA Contact: Cluster QE
Keywords: Regression, TestBlocker
Depends On: 1005190
Reported: 2013-09-13 15:15 EDT by Corey Marthaler
Modified: 2013-11-21 18:28 EST

Fixed In Version: lvm2-2.02.100-4.el6
Doc Type: Bug Fix
Doc Text: Bug caused between releases - no doc text required.
Clone Of: 1005190
Last Closed: 2013-11-21 18:28:32 EST
Type: Bug
Comment 1 Corey Marthaler 2013-09-13 15:26:23 EDT
This exists in rhel6.5 as well, but I think it may have more to do with bug 1005434 than with cling + tags.

[root@taft-01 ~]# pvs -a -o +devices,pv_tags | grep raid_sanity
 /dev/sdb1  raid_sanity lvm2 a--  67.83g 67.83g                         
 /dev/sdb2  raid_sanity lvm2 a--  67.83g 37.31g /dev/sdb2(0)     C      
 /dev/sdb2  raid_sanity lvm2 a--  67.83g 37.31g                  C      
 /dev/sdc1  raid_sanity lvm2 a--  67.83g 37.30g /dev/sdc1(0)     B      
 /dev/sdc1  raid_sanity lvm2 a--  67.83g 37.30g /dev/sdc1(1)     B      
 /dev/sdc1  raid_sanity lvm2 a--  67.83g 37.30g                  B      
 /dev/sdc2  raid_sanity lvm2 a--  67.83g 67.83g                  A      
 /dev/sde1  raid_sanity lvm2 a--  67.83g  6.78g /dev/sde1(0)     C      
 /dev/sde1  raid_sanity lvm2 a--  67.83g  6.78g /dev/sde1(1)     C      
 /dev/sde1  raid_sanity lvm2 a--  67.83g  6.78g /dev/sde1(7815)  C      
 /dev/sde1  raid_sanity lvm2 a--  67.83g  6.78g                  C      
 /dev/sde2  raid_sanity lvm2 a--  67.83g  6.78g /dev/sde2(0)     A      
 /dev/sde2  raid_sanity lvm2 a--  67.83g  6.78g /dev/sde2(1)     A      
 /dev/sde2  raid_sanity lvm2 a--  67.83g  6.78g                  A      
 /dev/sdf1  raid_sanity lvm2 a--  67.83g 67.83g                  B      
 /dev/sdf2  raid_sanity lvm2 a--  67.83g 67.83g                  A      
 /dev/sdg1  raid_sanity lvm2 a--  67.83g 67.83g                  B      
 /dev/sdg2  raid_sanity lvm2 a--  67.83g 67.83g                  C      

[root@taft-01 ~]# lvs -a -o +devices
 LV                    Attr       LSize   Cpy%Sync Devices
 cling_raid            rwi-a-r--- 122.09g     2.76 cling_raid_rimage_0(0),cling_raid_rimage_1(0),cling_raid_rimage_2(0)
 [cling_raid_rimage_0] Iwi-aor---  61.04g          /dev/sde2(1)
 [cling_raid_rimage_1] Iwi-aor---  61.04g          /dev/sdc1(1)
 [cling_raid_rimage_1] Iwi-aor---  61.04g          /dev/sde1(7815)
 [cling_raid_rimage_2] Iwi-aor---  61.04g          /dev/sde1(1)
 [cling_raid_rimage_2] Iwi-aor---  61.04g          /dev/sdb2(0)
 [cling_raid_rmeta_0]  ewi-aor---   4.00m          /dev/sde2(0)
 [cling_raid_rmeta_1]  ewi-aor---   4.00m          /dev/sdc1(0)
 [cling_raid_rmeta_2]  ewi-aor---   4.00m          /dev/sde1(0)
Comment 2 Jonathan Earl Brassow 2013-09-24 22:36:38 EDT
Fix committed upstream (2 commits):
commit c37c59e155813545c2e674eb370a4609e97aa769
Author: Jonathan Brassow <jbrassow@redhat.com>
Date:   Tue Sep 24 21:32:53 2013 -0500

    Test/clean-up: Indent clean-up and additional RAID resize test
    
    Better indenting and a test for bug 1005434 (parity RAID should
    extend in a contiguous fashion).

commit 5ded7314ae00629da8d21d925c3fa091cce2a939
Author: Jonathan Brassow <jbrassow@redhat.com>
Date:   Tue Sep 24 21:32:10 2013 -0500

    RAID: Fix broken allocation policies for parity RAID types
    
    A previous commit (b6bfddcd0a830d0c9312bc3ab906cb3d1b7a6dd9) which
    was designed to prevent segfaults during lvextend when trying to
    extend striped logical volumes forgot to include calculations for
    RAID4/5/6 parity devices.  This was causing the 'contiguous' and
    'cling_by_tags' allocation policies to fail for RAID 4/5/6.
    
    The solution is to remember that while we can compare
        ah->area_count == prev_lvseg->area_count
    for non-RAID, we should compare
        (ah->area_count + ah->parity_count) == prev_lvseg->area_count
    for a general solution.
Comment 5 Corey Marthaler 2013-09-30 09:20:02 EDT
Fix verified in the latest official rpms.

2.6.32-410.el6.x86_64
lvm2-2.02.100-4.el6    BUILT: Fri Sep 27 09:05:32 CDT 2013
lvm2-libs-2.02.100-4.el6    BUILT: Fri Sep 27 09:05:32 CDT 2013
lvm2-cluster-2.02.100-4.el6    BUILT: Fri Sep 27 09:05:32 CDT 2013
udev-147-2.48.el6    BUILT: Fri Aug  9 06:09:50 CDT 2013
device-mapper-1.02.79-4.el6    BUILT: Fri Sep 27 09:05:32 CDT 2013
device-mapper-libs-1.02.79-4.el6    BUILT: Fri Sep 27 09:05:32 CDT 2013
device-mapper-event-1.02.79-4.el6    BUILT: Fri Sep 27 09:05:32 CDT 2013
device-mapper-event-libs-1.02.79-4.el6    BUILT: Fri Sep 27 09:05:32 CDT 2013
cmirror-2.02.100-4.el6    BUILT: Fri Sep 27 09:05:32 CDT 2013


SCENARIO (raid4) - [cling_extend_avail_tagged_extents]
Verify that mirror extends honor the cling by tags allocation policy when
there are enough PVs with tags present for extension to work
Add tags to random PVs
A's /dev/sdf2 /dev/sdb2 /dev/sdd1
B's /dev/sdf1 /dev/sde1 /dev/sdb1
C's /dev/sde2 /dev/sdc2 /dev/sdc1
Create a raid using the tagged PVs
taft-01: lvcreate --type raid4 -i 2 -n cling_raid -l 15627 raid_sanity /dev/sdf2 /dev/sdf1 /dev/sde2
Extend using the cling_by_tags policy:
taft-01: lvextend -l 31254 --alloc cling_by_tags raid_sanity/cling_raid
Verify rimage_0 is made up of the proper "A" tagged devices
Verify rimage_1 is made up of the proper "B" tagged devices
Verify rimage_2 is made up of the proper "C" tagged devices


Deactivating raid cling_raid... and removing
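
For reference, the cling_by_tags policy groups PVs by the tags listed in the allocation section of lvm.conf; a configuration fragment along these lines (a sketch, with tag names matching the A/B/C tags used in the scenario above) is what lets each rimage extend only onto PVs carrying its own tag:

```
# lvm.conf fragment (sketch): PVs sharing any tag listed here are treated
# as equivalent by cling/cling_by_tags allocation, so rimage_0/1/2 stay on
# A-, B-, and C-tagged PVs respectively when the LV is extended.
allocation {
    cling_tag_list = [ "@A", "@B", "@C" ]
}
```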
Comment 6 errata-xmlrpc 2013-11-21 18:28:32 EST
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2013-1704.html
