Bug 1217605

Summary: [RFE] make contiguous allocation with cling tags work for raid10 volumes
Product: [Community] LVM and device-mapper
Component: lvm2
lvm2 sub component: Mirroring and RAID
Reporter: Corey Marthaler <cmarthal>
Assignee: LVM and device-mapper development team <lvm-team>
QA Contact: cluster-qe <cluster-qe>
Status: CLOSED NOTABUG
Severity: medium
Priority: unspecified
CC: agk, heinzm, jbrassow, msnitzer, prajnoha, zkabelac
Version: unspecified
Keywords: FutureFeature
Target Milestone: ---
Target Release: ---
Flags: pm-rhel: lvm-technical-solution?, pm-rhel: lvm-test-coverage?
Hardware: x86_64
OS: Linux
Doc Type: Enhancement
Type: Bug
Last Closed: 2020-03-02 22:31:25 UTC

Description Corey Marthaler 2015-04-30 19:12:54 UTC
Description of problem:
This is the left over issue from bug 983600.

[root@host-082 ~]# pvs -a -o +devices,pv_tags | grep raid_sanity
  /dev/sda1   raid_sanity lvm2 a--  548.00m 548.00m   A      
  /dev/sda2   raid_sanity lvm2 a--  548.00m 548.00m   A      
  /dev/sdc1   raid_sanity lvm2 a--  548.00m 548.00m   A      
  /dev/sdc2   raid_sanity lvm2 a--  548.00m 548.00m   B      
  /dev/sdd1   raid_sanity lvm2 a--  548.00m 548.00m   B      
  /dev/sdd2   raid_sanity lvm2 a--  548.00m 548.00m   B      
  /dev/sde1   raid_sanity lvm2 a--  548.00m 548.00m   A      
  /dev/sde2   raid_sanity lvm2 a--  548.00m 548.00m   B      

# cling allocation doesn't work
[root@host-082 ~]# lvcreate  --alloc cling_by_tags --type raid10 -i 2 -n cling_raid -L 600M raid_sanity
  Using default stripesize 64.00 KiB.
  Insufficient suitable allocatable extents for logical volume cling_raid: 304 more required
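For reference, the two-tag layout shown in the pvs output above could have been set up with something like the following. This is a sketch, not taken from the report: the pvchange invocations and the lvm.conf cling_tag_list setting are assumed, while the device names, tags, and VG name come from the transcript.

```shell
# Assumed setup commands: spread two tags, A and B, across the eight
# PVs of VG raid_sanity (tag layout taken from the pvs output above).
pvchange --addtag A /dev/sda1 /dev/sda2 /dev/sdc1 /dev/sde1
pvchange --addtag B /dev/sdc2 /dev/sdd1 /dev/sdd2 /dev/sde2

# cling_by_tags consults allocation/cling_tag_list, e.g. in
# /etc/lvm/lvm.conf:
#   allocation {
#       cling_tag_list = [ "@A", "@B" ]
#   }

# A raid10 with -i 2 needs four legs, but only two tag groups exist
# here, so the allocation above fails on this version:
lvcreate --alloc cling_by_tags --type raid10 -i 2 -n cling_raid -L 600M raid_sanity
```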

# default allocation does work
[root@host-082 ~]# lvcreate  --type raid10 -i 2 -n cling_raid -L 600M raid_sanity
  Using default stripesize 64.00 KiB.
  Logical volume "cling_raid" created.

[root@host-082 ~]# lvs -a -o +devices
  LV                    Attr       LSize   Cpy%Sync Devices
  cling_raid            rwi-a-r--- 600.00m 100.00   cling_raid_rimage_0(0),cling_raid_rimage_1(0),cling_raid_rimage_2(0),cling_raid_rimage_3(0)
  [cling_raid_rimage_0] iwi-aor--- 300.00m          /dev/sda1(1)
  [cling_raid_rimage_1] iwi-aor--- 300.00m          /dev/sda2(1)
  [cling_raid_rimage_2] iwi-aor--- 300.00m          /dev/sdc1(1)
  [cling_raid_rimage_3] iwi-aor--- 300.00m          /dev/sdc2(1)
  [cling_raid_rmeta_0]  ewi-aor---   4.00m          /dev/sda1(0)
  [cling_raid_rmeta_1]  ewi-aor---   4.00m          /dev/sda2(0)
  [cling_raid_rmeta_2]  ewi-aor---   4.00m          /dev/sdc1(0)
  [cling_raid_rmeta_3]  ewi-aor---   4.00m          /dev/sdc2(0)


Version-Release number of selected component (if applicable):
2.6.32-554.el6.x86_64

lvm2-2.02.118-2.el6    BUILT: Wed Apr 15 06:34:08 CDT 2015
lvm2-libs-2.02.118-2.el6    BUILT: Wed Apr 15 06:34:08 CDT 2015
lvm2-cluster-2.02.118-2.el6    BUILT: Wed Apr 15 06:34:08 CDT 2015
udev-147-2.61.el6    BUILT: Mon Mar  2 05:08:11 CST 2015
device-mapper-1.02.95-2.el6    BUILT: Wed Apr 15 06:34:08 CDT 2015
device-mapper-libs-1.02.95-2.el6    BUILT: Wed Apr 15 06:34:08 CDT 2015
device-mapper-event-1.02.95-2.el6    BUILT: Wed Apr 15 06:34:08 CDT 2015
device-mapper-event-libs-1.02.95-2.el6    BUILT: Wed Apr 15 06:34:08 CDT 2015
device-mapper-persistent-data-0.3.2-1.el6    BUILT: Fri Apr  4 08:43:06 CDT 2014
cmirror-2.02.118-2.el6    BUILT: Wed Apr 15 06:34:08 CDT 2015

Comment 3 Corey Marthaler 2020-03-02 22:31:25 UTC
I'm closing this since cling allocation tags do technically work for raid10 when four distinct tags are used instead of just two. If being limited to two tags ever becomes a restriction for a raid10 customer, we can reopen this.

[root@hayes-02 ~]# pvs -a -o +devices,pv_tags
  PV         VG          Fmt  Attr PSize   PFree   Devices PV Tags
  /dev/sdb1  raid_sanity lvm2 a--  548.00m 548.00m         A      
  /dev/sdc1  raid_sanity lvm2 a--  548.00m 548.00m         A      
  /dev/sde1  raid_sanity lvm2 a--  548.00m 548.00m         C      
  /dev/sdf1  raid_sanity lvm2 a--  548.00m 548.00m         B      
  /dev/sdh1  raid_sanity lvm2 a--  548.00m 548.00m         B      
  /dev/sdi1  raid_sanity lvm2 a--  548.00m 548.00m         D      

[root@hayes-02 ~]# lvcreate  --alloc cling_by_tags --type raid10 -i 2 -n cling_raid -L 600M raid_sanity
  Using default stripesize 64.00 KiB.
  Logical volume "cling_raid" created.

[root@hayes-02 ~]# lvs -a -o +devices
  LV                    VG          Attr       LSize   Cpy%Sync Devices
  cling_raid            raid_sanity rwl-a-r--- 600.00m 62.60    cling_raid_rimage_0(0),cling_raid_rimage_1(0),cling_raid_rimage_2(0),cling_raid_rimage_3(0)
  [cling_raid_rimage_0] raid_sanity Iwl-aor--- 300.00m          /dev/sdb1(1)
  [cling_raid_rimage_1] raid_sanity Iwl-aor--- 300.00m          /dev/sde1(1)
  [cling_raid_rimage_2] raid_sanity Iwl-aor--- 300.00m          /dev/sdf1(1)
  [cling_raid_rimage_3] raid_sanity Iwl-aor--- 300.00m          /dev/sdi1(1)
  [cling_raid_rmeta_0]  raid_sanity ewl-aor---   4.00m          /dev/sdb1(0)
  [cling_raid_rmeta_1]  raid_sanity ewl-aor---   4.00m          /dev/sde1(0)
  [cling_raid_rmeta_2]  raid_sanity ewl-aor---   4.00m          /dev/sdf1(0)
  [cling_raid_rmeta_3]  raid_sanity ewl-aor---   4.00m          /dev/sdi1(0)
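The four-tag layout that lets cling_by_tags succeed above could be produced with something like the following sketch. The pvchange invocations and the cling_tag_list setting are assumed; the device names and tag assignments are taken from the pvs output in this comment.

```shell
# Assumed setup commands: one distinct tag per raid10 leg, across the
# six PVs of VG raid_sanity (layout from the pvs output above).
pvchange --addtag A /dev/sdb1 /dev/sdc1
pvchange --addtag B /dev/sdf1 /dev/sdh1
pvchange --addtag C /dev/sde1
pvchange --addtag D /dev/sdi1

# Tell the allocator which tags mark interchangeable PVs, e.g. in
# /etc/lvm/lvm.conf:
#   allocation {
#       cling_tag_list = [ "@A", "@B", "@C", "@D" ]
#   }

# With four tag groups available for the four legs (-i 2 stripes,
# 2 mirrors each), cling_by_tags allocation now succeeds:
lvcreate --alloc cling_by_tags --type raid10 -i 2 -n cling_raid -L 600M raid_sanity
```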


lvm2-2.03.08-2.el8    BUILT: Mon Feb 24 11:21:38 CST 2020
lvm2-libs-2.03.08-2.el8    BUILT: Mon Feb 24 11:21:38 CST 2020
device-mapper-1.02.169-2.el8    BUILT: Mon Feb 24 11:21:38 CST 2020
device-mapper-libs-1.02.169-2.el8    BUILT: Mon Feb 24 11:21:38 CST 2020
device-mapper-event-1.02.169-2.el8    BUILT: Mon Feb 24 11:21:38 CST 2020
device-mapper-event-libs-1.02.169-2.el8    BUILT: Mon Feb 24 11:21:38 CST 2020