Bug 1158172 - lvconvert splits allocation unnecessarily for legacy "mirror" segment type - take 2
Status: CLOSED ERRATA
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: lvm2
Version: 7.1
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: rc
Assignee: Alasdair Kergon
QA Contact: cluster-qe@redhat.com
Reported: 2014-10-28 19:11 UTC by Corey Marthaler
Modified: 2016-11-04 04:08 UTC

Fixed In Version: lvm2-2.02.164-4.el7
Doc Type: Bug Fix
Last Closed: 2016-11-04 04:08:13 UTC


Attachments
-vvvv of the lvconvert to mirror segment type (199.42 KB, text/plain)
2014-10-29 17:01 UTC, Corey Marthaler


Links
System ID Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2016:1445 normal SHIPPED_LIVE lvm2 bug fix and enhancement update 2016-11-03 13:46:41 UTC

Description Corey Marthaler 2014-10-28 19:11:03 UTC
Description of problem:
This appears to be a regression of bug 204136.

./mirror_sanity

SCENARIO - [check_proper_lvconvert_allocation]

Create a linear spanning multiple devices allocated in the middle of two PVs and convert to mirror
Recreating PVs/VG with smaller sizes
host-110.virt.lab.msp.redhat.com: pvcreate --setphysicalvolumesize 1G /dev/sdb1 /dev/sde1 /dev/sdc1 /dev/sda1 /dev/sde2 /dev/sdd2 /dev/sdb2 /dev/sdc2 /dev/sda2 /dev/sdd1
host-110.virt.lab.msp.redhat.com: vgcreate mirror_sanity /dev/sdb1 /dev/sde1 /dev/sdc1 /dev/sda1 /dev/sde2 /dev/sdd2 /dev/sdb2 /dev/sdc2 /dev/sda2 /dev/sdd1

create spacer linears on /dev/sdb1 and /dev/sde1
host-110.virt.lab.msp.redhat.com: lvcreate -L 400m mirror_sanity /dev/sdb1
host-110.virt.lab.msp.redhat.com: lvcreate -L 400m mirror_sanity /dev/sde1
create linear spanning both /dev/sdb1 and /dev/sde1
host-110.virt.lab.msp.redhat.com: lvcreate -n span -L 400m mirror_sanity /dev/sdb1:0-150 /dev/sde1:0-150

remove spacer linears on /dev/sdb1 and /dev/sde1
host-110.virt.lab.msp.redhat.com: lvremove -f mirror_sanity/lvol0
host-110.virt.lab.msp.redhat.com: lvremove -f mirror_sanity/lvol1

up convert to mirror and check device allocation

# There should be plenty of space here to allocate the additional leg on sdb1, sde1, or both, but the conversion fails.
[root@host-110 ~]# lvconvert -m1 --type mirror mirror_sanity/span --alloc cling
  Insufficient suitable allocatable extents for logical volume : 49 more required
  Unable to allocate extents for mirror(s).


# Now, without '--alloc cling', it does exactly what I'd expect '--alloc cling' *to do*. This also appears to be a regression of bug 204136: the conversion should have allocated the entire new leg on a new PV.
[root@host-110 ~]# lvconvert -m1 --type mirror mirror_sanity/span
  mirror_sanity/span: Converted: 1.0%
  mirror_sanity/span: Converted: 100.0%
[root@host-110 ~]# lvs -a -o +devices
  LV              VG            Attr       LSize   Log       Cpy%Sync Devices
  span            mirror_sanity mwi-a-m--- 400.00m span_mlog 100.00   span_mimage_0(0),span_mimage_1(0)
  [span_mimage_0] mirror_sanity iwi-aom--- 400.00m                    /dev/sdb1(100)
  [span_mimage_0] mirror_sanity iwi-aom--- 400.00m                    /dev/sde1(100)
  [span_mimage_1] mirror_sanity iwi-aom--- 400.00m                    /dev/sde1(149)
  [span_mimage_1] mirror_sanity iwi-aom--- 400.00m                    /dev/sdb1(151)
  [span_mlog]     mirror_sanity lwi-aom---   4.00m                    /dev/sdd1(0)


Version-Release number of selected component (if applicable):
3.10.0-189.el7.x86_64

lvm2-2.02.111-1.el7    BUILT: Mon Sep 29 09:18:07 CDT 2014
lvm2-libs-2.02.111-1.el7    BUILT: Mon Sep 29 09:18:07 CDT 2014
lvm2-cluster-2.02.111-1.el7    BUILT: Mon Sep 29 09:18:07 CDT 2014
device-mapper-1.02.90-1.el7    BUILT: Mon Sep 29 09:18:07 CDT 2014
device-mapper-libs-1.02.90-1.el7    BUILT: Mon Sep 29 09:18:07 CDT 2014
device-mapper-event-1.02.90-1.el7    BUILT: Mon Sep 29 09:18:07 CDT 2014
device-mapper-event-libs-1.02.90-1.el7    BUILT: Mon Sep 29 09:18:07 CDT 2014
device-mapper-persistent-data-0.3.2-1.el7    BUILT: Thu Apr  3 09:58:51 CDT 2014
cmirror-2.02.111-1.el7    BUILT: Mon Sep 29 09:18:07 CDT 2014

Comment 2 Corey Marthaler 2014-10-29 16:54:57 UTC
As suspected, this does work properly with a raid1 conversion.

 [root@host-110 ~]# lvs -a -o +devices
  LV   VG            Attr       LSize    Devices       
  span raid_sanity   -wi-a----- 400.00m  /dev/sde2(100)
  span raid_sanity   -wi-a----- 400.00m  /dev/sde1(100)

[root@host-110 ~]# lvconvert -m1 --type raid1 raid_sanity/span

[root@host-110 ~]# lvs -a -o +devices
  LV              VG            Attr       LSize   Cpy%Sync Devices                          
  span            raid_sanity   rwi-a-r--- 400.00m 56.00    span_rimage_0(0),span_rimage_1(0)
  [span_rimage_0] raid_sanity   Iwi-aor--- 400.00m          /dev/sde2(100)
  [span_rimage_0] raid_sanity   Iwi-aor--- 400.00m          /dev/sde1(100)
  [span_rimage_1] raid_sanity   Iwi-aor--- 400.00m          /dev/sda2(1)
  [span_rmeta_0]  raid_sanity   ewi-aor---   4.00m          /dev/sde2(0)
  [span_rmeta_1]  raid_sanity   ewi-aor---   4.00m          /dev/sda2(0)

Comment 3 Corey Marthaler 2014-10-29 17:01:22 UTC
Created attachment 951850 [details]
-vvvv of the lvconvert to mirror segment type

Comment 9 Alasdair Kergon 2015-09-01 18:49:46 UTC
The attached trace looks very concerning - but then it's from quite an old version and I thought some of this code got patched.

Let's see how up-to-date code behaves.

Comment 10 Alasdair Kergon 2015-09-01 19:37:30 UTC
Please revisit this one.  It seems to be behaving differently from that trace for me with the current builds.  (You'll probably need to use cling_by_tags to get the code to recognise that partitions are on the same underlying device.  That's probably an unusual enough setup that it isn't worth detecting automatically.)

Comment 11 Corey Marthaler 2015-09-02 20:35:50 UTC
This is a pretty unusual setup so feel free to close wontfix. However, it appears the same behavior remains in the latest rpms, even w/ cling_by_tags.

[root@host-109 ~]# pvcreate --setphysicalvolumesize 1G /dev/sdd1 /dev/sdb1 /dev/sdd2 /dev/sda2 /dev/sda1 /dev/sdc1 /dev/sdg1 /dev/sdg2 /dev/sdb2 /dev/sdc2
  Physical volume "/dev/sdd1" successfully created
  Physical volume "/dev/sdb1" successfully created
  Physical volume "/dev/sdd2" successfully created
  Physical volume "/dev/sda2" successfully created
  Physical volume "/dev/sda1" successfully created
  Physical volume "/dev/sdc1" successfully created
  Physical volume "/dev/sdg1" successfully created
  Physical volume "/dev/sdg2" successfully created
  Physical volume "/dev/sdb2" successfully created
  Physical volume "/dev/sdc2" successfully created
[root@host-109 ~]# vgcreate mirror_sanity /dev/sdd1 /dev/sdb1 /dev/sdd2 /dev/sda2 /dev/sda1 /dev/sdc1 /dev/sdg1 /dev/sdg2 /dev/sdb2 /dev/sdc2
  Volume group "mirror_sanity" successfully created

# create spacer linears on /dev/sdd1 and /dev/sdb1
[root@host-109 ~]# lvcreate -L 400m mirror_sanity /dev/sdd1
  Logical volume "lvol0" created.
[root@host-109 ~]# lvcreate -L 400m mirror_sanity /dev/sdb1
  Logical volume "lvol1" created.
[root@host-109 ~]# lvs -a -o +devices
  LV    VG            Attr       LSize    Log Cpy%Sync Devices
  lvol0 mirror_sanity -wi-a----- 400.00m               /dev/sdd1(0)
  lvol1 mirror_sanity -wi-a----- 400.00m               /dev/sdb1(0)

# create linear spanning both /dev/sdd1 and /dev/sdb1
[root@host-109 ~]# lvcreate -n span -L 400m mirror_sanity /dev/sdd1:0-150 /dev/sdb1:0-150
  Logical volume "span" created.
[root@host-109 ~]# lvs -a -o +devices
  LV    VG            Attr       LSize    Log Cpy%Sync Devices
  lvol0 mirror_sanity -wi-a----- 400.00m               /dev/sdd1(0)
  lvol1 mirror_sanity -wi-a----- 400.00m               /dev/sdb1(0)
  span  mirror_sanity -wi-a----- 400.00m               /dev/sdd1(100)
  span  mirror_sanity -wi-a----- 400.00m               /dev/sdb1(100)

# remove spacer linears on /dev/sdd1 and /dev/sdb1
[root@host-109 ~]# lvremove -f mirror_sanity/lvol0
  Logical volume "lvol0" successfully removed
[root@host-109 ~]# lvremove -f mirror_sanity/lvol1
  Logical volume "lvol1" successfully removed
[root@host-109 ~]# lvs -a -o +devices
  LV   VG            Attr       LSize    Log Cpy%Sync Devices
  span mirror_sanity -wi-a----- 400.00m               /dev/sdd1(100)
  span mirror_sanity -wi-a----- 400.00m               /dev/sdb1(100)

# up convert to mirror and check device allocation

[root@host-109 ~]# lvconvert --alloc cling_by_tags -m1 --type mirror mirror_sanity/span
  Insufficient suitable allocatable extents for logical volume : 49 more required
  Unable to allocate extents for mirror(s).

[root@host-109 ~]# lvconvert --alloc cling -m1 --type mirror mirror_sanity/span
  Insufficient suitable allocatable extents for logical volume : 49 more required
  Unable to allocate extents for mirror(s).


# Now attempt by adding in actual tags...
[root@host-109 ~]# grep cling_tag_list /etc/lvm/lvm.conf
    cling_tag_list = [ "@A", "@B" ]

[root@host-109 ~]# pvchange --addtag A /dev/sdd1
  Physical volume "/dev/sdd1" changed
  1 physical volume changed / 0 physical volumes not changed
[root@host-109 ~]# pvchange --addtag A /dev/sdb1
  Physical volume "/dev/sdb1" changed
  1 physical volume changed / 0 physical volumes not changed
[root@host-109 ~]# pvchange --addtag B /dev/sdc1
  Physical volume "/dev/sdc1" changed

[root@host-109 ~]# pvs -a -o +pv_tags
  PV            VG            Fmt  Attr PSize    PFree    PV Tags
  /dev/sda1     mirror_sanity lvm2 a--  1020.00m 1020.00m
  /dev/sda2     mirror_sanity lvm2 a--  1020.00m 1020.00m
  /dev/sdb1     mirror_sanity lvm2 a--  1020.00m  824.00m A
  /dev/sdb2     mirror_sanity lvm2 a--  1020.00m 1020.00m
  /dev/sdc1     mirror_sanity lvm2 a--  1020.00m 1020.00m B
  /dev/sdc2     mirror_sanity lvm2 a--  1020.00m 1020.00m
  /dev/sdd1     mirror_sanity lvm2 a--  1020.00m  816.00m A
  /dev/sdd2     mirror_sanity lvm2 a--  1020.00m 1020.00m
  /dev/sdg1     mirror_sanity lvm2 a--  1020.00m 1020.00m
  /dev/sdg2     mirror_sanity lvm2 a--  1020.00m 1020.00m

[root@host-109 ~]# lvconvert --alloc cling_by_tags -m1 --type mirror mirror_sanity/span
[root@host-109 ~]# lvs -a -o +devices
  LV              VG            Attr       LSize    Log       Cpy%Sync Devices
  span            mirror_sanity mwi-a-m--- 400.00m  span_mlog 100.00   span_mimage_0(0),span_mimage_1(0)
  [span_mimage_0] mirror_sanity iwi-aom--- 400.00m                     /dev/sdd1(100)
  [span_mimage_0] mirror_sanity iwi-aom--- 400.00m                     /dev/sdb1(100)
  [span_mimage_1] mirror_sanity iwi-aom--- 400.00m                     /dev/sdb1(149)
  [span_mimage_1] mirror_sanity iwi-aom--- 400.00m                     /dev/sdd1(151)
  [span_mlog]     mirror_sanity lwl-aom---   4.00m                     /dev/sdc2(0)



# Exact same setup as before but w/o using cling_by_tags
[root@host-109 ~]# lvconvert -m1 --type mirror mirror_sanity/span
[root@host-109 ~]# lvs -a -o +devices
  LV              VG            Attr       LSize    Log       Cpy%Sync Devices
  span            mirror_sanity mwi-a-m--- 400.00m  span_mlog 100.00   span_mimage_0(0),span_mimage_1(0)
  [span_mimage_0] mirror_sanity iwi-aom--- 400.00m                     /dev/sdd1(100)
  [span_mimage_0] mirror_sanity iwi-aom--- 400.00m                     /dev/sdb1(100)
  [span_mimage_1] mirror_sanity iwi-aom--- 400.00m                     /dev/sdb1(149)
  [span_mimage_1] mirror_sanity iwi-aom--- 400.00m                     /dev/sdd1(151)
  [span_mlog]     mirror_sanity lwi-aom---   4.00m                     /dev/sdc2(0)


# You would expect span_mimage_0 to remain spanned on /dev/sdd1 and /dev/sdb1, but for span_mimage_1 to be on *one* new PV


3.10.0-313.el7.x86_64
lvm2-2.02.129-2.el7    BUILT: Wed Sep  2 02:51:56 CDT 2015
lvm2-libs-2.02.129-2.el7    BUILT: Wed Sep  2 02:51:56 CDT 2015
lvm2-cluster-2.02.129-2.el7    BUILT: Wed Sep  2 02:51:56 CDT 2015
device-mapper-1.02.106-2.el7    BUILT: Wed Sep  2 02:51:56 CDT 2015
device-mapper-libs-1.02.106-2.el7    BUILT: Wed Sep  2 02:51:56 CDT 2015
device-mapper-event-1.02.106-2.el7    BUILT: Wed Sep  2 02:51:56 CDT 2015
device-mapper-event-libs-1.02.106-2.el7    BUILT: Wed Sep  2 02:51:56 CDT 2015
device-mapper-persistent-data-0.5.5-1.el7    BUILT: Thu Aug 13 09:58:10 CDT 2015
cmirror-2.02.129-2.el7    BUILT: Wed Sep  2 02:51:56 CDT 2015
sanlock-3.2.4-1.el7    BUILT: Fri Jun 19 12:48:49 CDT 2015
sanlock-lib-3.2.4-1.el7    BUILT: Fri Jun 19 12:48:49 CDT 2015
lvm2-lockd-2.02.129-2.el7    BUILT: Wed Sep  2 02:51:56 CDT 2015

Comment 15 Alasdair Kergon 2016-08-23 23:25:00 UTC
As ever, requires -vvvv from the failing command to see why it's doing what it is doing.

Comment 16 Alasdair Kergon 2016-08-23 23:31:41 UTC
The issue is not that it's using two disks - that's fine, although the normal algorithm is supposed to avoid it (as a side-effect of how it works).  It's that it's picking a disk with a tag that it should be avoiding.  

Since the lvconvert allocation code is going to be amended for the raid work in 7.4, unless the -vvvv shows it to be something simple, it will have to wait.

Comment 17 Alasdair Kergon 2016-08-24 10:09:15 UTC
Easily reproduced and -vvvv obtained.

Leads to 3 questions (that I think got answered before, but perhaps the answers have got lost on this particular code path):

1) How should untagged PVs be treated when some PVs are tagged and others are not?

2) When adding new parallel areas, should the code prefer PVs that have not already been used by the existing ones?

3) When adding new parallel areas with cling_by_tags, should the code prefer PVs that do not share any PV tags with existing ones?

Comment 18 Alasdair Kergon 2016-08-24 10:12:40 UTC
Setting allocation/cling_tag_list to cling_tag_list = [ "@t1", "@t2" ]
Setting allocation/maximise_cling to 1
Allowing allocation on /dev/vde start PE 0 length 8
Allowing allocation on /dev/vde start PE 16 length 60
Allowing allocation on /dev/vdf start PE 0 length 8
Allowing allocation on /dev/vdf start PE 16 length 9 
Allowing allocation on /dev/vdg start PE 0 length 127 
Allowing allocation on /dev/vdh start PE 0 length 255
Parallel PVs at LE 0 length 8: /dev/vde  
Parallel PVs at LE 8 length 8: /dev/vdf 
Trying allocation using contiguous policy. 
Areas to be sorted and filled sequentially.
Still need 17 total extents from 467 remaining (0 positional slots):
  1 (1 data/0 parity) parallel areas of 16 extents each
  1 mirror log of 1 extents each
Considering allocation area 0 as /dev/vdf start PE 16 length 8 leaving 1 with PV tags: t1.
Considering allocation area 1 as /dev/vdg start PE 0 length 8 leaving 119 with PV tags: t2.
Considering allocation area 2 as /dev/vdh start PE 0 length 8 leaving 247 with PV tags: .
Sorting 3 areas
Allocating parallel area 0 on /dev/vdf start PE 16 length 8.
Allocating parallel area 1 on /dev/vdh start PE 0 length 1.
Trying allocation using cling policy.
Cling_to_allocated is set
1 preferred area(s) to be filled positionally.
Still need 8 total extents from 458 remaining (1 positional slots):
  1 (1 data/0 parity) parallel areas of 8 extents each
  0 mirror logs of 1 extents each 
Trying allocation using cling_by_tags policy.
Cling_to_allocated is set
1 preferred area(s) to be filled positionally.
Still need 8 total extents from 458 remaining (1 positional slots):
  1 (1 data/0 parity) parallel areas of 8 extents each
  0 mirror logs of 1 extents each 
Matched allocation PV tag t1 on existing /dev/vde with free space on /dev/vdf.
Considering allocation area 0 as /dev/vde start PE 16 length 60 leaving 0 with PV tags: t1.
Allocating parallel area 0 on /dev/vde start PE 16 length 8.

Comment 19 Alasdair Kergon 2016-08-24 10:17:11 UTC
So it appears that it's the 'contiguous' policy that failed to check the tags against the parallel areas.

Comment 20 Alasdair Kergon 2016-08-24 18:44:49 UTC
New version:

Setting allocation/cling_tag_list to cling_tag_list = [ "@t1", "@t2" ]
Setting allocation/maximise_cling to 1
  Allowing allocation on /dev/vde start PE 0 length 8
  Allowing allocation on /dev/vde start PE 16 length 60
  Allowing allocation on /dev/vdf start PE 0 length 8
  Allowing allocation on /dev/vdf start PE 16 length 9
  Allowing allocation on /dev/vdg start PE 0 length 127
  Allowing allocation on /dev/vdh start PE 0 length 255
  Parallel PVs at LE 0 length 8: /dev/vde(t1)
  Parallel PVs at LE 8 length 8: /dev/vdf(t1)
  Trying allocation using contiguous policy.
  Areas to be sorted and filled sequentially.
  Still need 17 total extents from 467 remaining (0 positional slots):
    1 (1 data/0 parity) parallel areas of 16 extents each
    1 mirror log of 1 extents each
  Not using free space on existing parallel PV /dev/vde.
  Not using free space on /dev/vdf: Matched allocation PV tag t1 on existing parallel PV /dev/vde.
  Considering allocation area 0 as /dev/vdg start PE 0 length 8 leaving 119 with PV tags: t2.
  Considering allocation area 1 as /dev/vdh start PE 0 length 8 leaving 247 with PV tags: .
  Sorting 2 areas
  Allocating parallel area 0 on /dev/vdg start PE 0 length 8.
  Allocating parallel area 1 on /dev/vdh start PE 0 length 1.
  Trying allocation using cling policy.
  Cling_to_allocated is set
  1 preferred area(s) to be filled positionally.
  Still need 8 total extents from 458 remaining (1 positional slots):
    1 (1 data/0 parity) parallel areas of 8 extents each
    0 mirror logs of 1 extents each
  Not using free space on /dev/vde: Matched allocation PV tag t1 on existing parallel PV /dev/vdf.
  Not using free space on existing parallel PV /dev/vdf.
  Considering allocation area 0 as /dev/vdg start PE 8 length 119 leaving 0 with PV tags: t2.
  Allocating parallel area 0 on /dev/vdg start PE 8 length 8.

Comment 21 Alasdair Kergon 2016-08-24 18:51:18 UTC
Upstream for next release.

Improved the debugging messages to make the tags and the ignoring of parallel PVs more obvious.

Changed all policies except anywhere to ignore PVs with tags matching parallel PVs up front in each cycle.

It does however still only consider the matching portion of the parallel PVs in this part of the code, not their whole length as happens later.  This still needs further consideration.
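The rule described above (every policy except "anywhere" ignores candidate PVs whose tags match a parallel PV's tags) can be sketched as a standalone filter. This is an illustrative simulation only, not lvm's actual code; the PV names and tags are taken from the -vvvv trace in comment 20.

```shell
#!/bin/sh
# Illustrative sketch, not lvm source: a candidate PV is rejected when any
# of its tags matches a tag already on a parallel (existing-leg) PV.

# Tags on the PVs already used by the existing mirror leg
# (in the trace, /dev/vde and /dev/vdf both carry t1).
parallel_tags="t1"

# candidate_ok PV TAGS -> prints PV only if no tag clashes with parallel_tags
candidate_ok() {
    pv=$1; tags=$2
    for t in $tags; do
        for p in $parallel_tags; do
            [ "$t" = "$p" ] && return 1   # shared tag: ignore this PV
        done
    done
    echo "$pv"
}

# Candidates as seen in the comment 20 trace:
candidate_ok /dev/vdf "t1" || echo "(/dev/vdf ignored: shared tag t1)"
candidate_ok /dev/vdg "t2"   # allowed: t2 is not on a parallel PV
candidate_ok /dev/vdh ""     # allowed: untagged
```

With this filter applied up front, only /dev/vdg and /dev/vdh survive as allocation areas, matching the "Not using free space on /dev/vdf" line in the new trace.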

Comment 24 Alasdair Kergon 2016-09-14 13:49:50 UTC
The tags case:

  PV          VG            Fmt  Attr PSize    PFree    PV Tags
  /dev/sdb1   raid_sanity   lvm2 a--  1020.00m 1020.00m A            
  /dev/sdc1   raid_sanity   lvm2 a--  1020.00m 1020.00m B              
  /dev/sdd1   raid_sanity   lvm2 a--  1020.00m 1020.00m A 

Then:
  LV   VG            Attr       LSize    Log Cpy%Sync Convert Devices       
  span raid_sanity   -wi-a----- 400.00m                       /dev/sdb1(100)
  span raid_sanity   -wi-a----- 400.00m                       /dev/sdc1(100)

So the linear LV 'span' contains sdb1 and sdc1 which have tags A and B.  There is no other tag available, so if you restrict the lvconvert to cling_by_tags, it should fail as it now does because there are no PVs available with different tags for the other leg of the mirror to use.  Previously it succeeded, violating the 'cling_by_tags' rule.

Comment 25 Alasdair Kergon 2016-09-14 13:54:06 UTC
To make it succeed, I think you should add another tag, C, and place it in lvm.conf and on different (currently untagged) PVs.  Then the lvconvert should select from only those PVs for the new mirror leg.  In other words the first mirror leg uses only PVs with A and B, and the second one uses only PVs with C.
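That suggestion could be outlined as follows. This is a sketch only: the device names and the tag name "C" are assumptions, and the commands need real PVs in the VG, so they are not meant to run as-is.

```shell
# Illustrative outline only -- device names and tag "C" are assumptions.
# 1. Add "@C" to allocation/cling_tag_list in /etc/lvm/lvm.conf:
#      cling_tag_list = [ "@A", "@B", "@C" ]
# 2. Tag some currently untagged PVs with C:
pvchange --addtag C /dev/sde1
pvchange --addtag C /dev/sde2
# 3. The conversion should now place the whole new leg on C-tagged PVs:
lvconvert --alloc cling_by_tags -m1 --type mirror mirror_sanity/span
lvs -a -o +devices   # expect span_mimage_1 only on the C-tagged PVs
```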

Comment 26 Corey Marthaler 2016-09-14 21:32:11 UTC
Marking this verified with the caveat that a couple issues still exist. 

3.10.0-501.el7.x86_64

lvm2-2.02.165-2.el7    BUILT: Wed Sep 14 09:01:43 CDT 2016
lvm2-libs-2.02.165-2.el7    BUILT: Wed Sep 14 09:01:43 CDT 2016
lvm2-cluster-2.02.165-2.el7    BUILT: Wed Sep 14 09:01:43 CDT 2016
device-mapper-1.02.134-2.el7    BUILT: Wed Sep 14 09:01:43 CDT 2016
device-mapper-libs-1.02.134-2.el7    BUILT: Wed Sep 14 09:01:43 CDT 2016
device-mapper-event-1.02.134-2.el7    BUILT: Wed Sep 14 09:01:43 CDT 2016
device-mapper-event-libs-1.02.134-2.el7    BUILT: Wed Sep 14 09:01:43 CDT 2016
device-mapper-persistent-data-0.6.3-1.el7    BUILT: Fri Jul 22 05:29:13 CDT 2016


Of the following mirror allocation scenarios, Scenarios 3, 5, 6, and 7 appear to be fixed now.



### Common setup in all scenarios:
[root@host-117 ~]# pvcreate --setphysicalvolumesize 1G /dev/sdf1 /dev/sdd2 /dev/sdb1 /dev/sde2 /dev/sdf2 /dev/sda2 /dev/sdd1 /dev/sde1 /dev/sdb2 /dev/sda1
  Physical volume "/dev/sdf1" successfully created.
  Physical volume "/dev/sdd2" successfully created.
  Physical volume "/dev/sdb1" successfully created.
  Physical volume "/dev/sde2" successfully created.
  Physical volume "/dev/sdf2" successfully created.
  Physical volume "/dev/sda2" successfully created.
  Physical volume "/dev/sdd1" successfully created.
  Physical volume "/dev/sde1" successfully created.
  Physical volume "/dev/sdb2" successfully created.
  Physical volume "/dev/sda1" successfully created.
[root@host-117 ~]# vgcreate mirror_sanity /dev/sdf1 /dev/sdd2 /dev/sdb1 /dev/sde2 /dev/sdf2 /dev/sda2 /dev/sdd1 /dev/sde1 /dev/sdb2 /dev/sda1
  Volume group "mirror_sanity" successfully created
[root@host-117 ~]# lvcreate -L 400m mirror_sanity /dev/sdf1
  Logical volume "lvol0" created.
[root@host-117 ~]# lvcreate -L 400m mirror_sanity /dev/sdd2
  Logical volume "lvol1" created.
[root@host-117 ~]# lvcreate -n span -L 400m mirror_sanity /dev/sdf1:0-150 /dev/sdd2:0-150
  Logical volume "span" created.
[root@host-117 ~]# lvremove -f mirror_sanity/lvol0
  Logical volume "lvol0" successfully removed
[root@host-117 ~]# lvremove -f mirror_sanity/lvol1
  Logical volume "lvol1" successfully removed



### Scenario 1: No tags in lvm.conf, No alloc flags used in convert
[root@host-117 ~]# lvconvert -m1 --type mirror mirror_sanity/span
  Logical volume mirror_sanity/span being converted.
  mirror_sanity/span: Converted: 1.00%
  mirror_sanity/span: Converted: 100.00%

[root@host-117 ~]# lvs -a -o +devices
  LV              VG            Attr       LSize    Log         Cpy%Sync Devices                          
  span            mirror_sanity mwi-a-m--- 400.00m  [span_mlog] 100.00   span_mimage_0(0),span_mimage_1(0)
  [span_mimage_0] mirror_sanity iwi-aom--- 400.00m                       /dev/sdf1(100)                   
  [span_mimage_0] mirror_sanity iwi-aom--- 400.00m                       /dev/sdd2(100)                   
  [span_mimage_1] mirror_sanity iwi-aom--- 400.00m                       /dev/sdd2(149)                   
  [span_mimage_1] mirror_sanity iwi-aom--- 400.00m                       /dev/sdf1(151)                   
  [span_mlog]     mirror_sanity lwi-aom---   4.00m                       /dev/sda1(0)                     
# This appears wrong as it's not redundant



### Scenario 2: No tags in lvm.conf, However alloc flags (both cling_by_tags and cling) attempted in convert
[root@host-117 ~]# lvconvert --alloc cling_by_tags -m1 --type mirror mirror_sanity/span
  Insufficient suitable allocatable extents for logical volume : 49 more required
  Unable to allocate extents for mirror(s).
[root@host-117 ~]# lvconvert --alloc cling -m1 --type mirror mirror_sanity/span
  Insufficient suitable allocatable extents for logical volume : 49 more required
  Unable to allocate extents for mirror(s).
# This appears correct since there are no tags present on any PVs to use




### Scenarios 3-5: Tags in lvm.conf, tags present on particular PVs, *both* PVs in mimage_0 having tag "A"
[root@host-117 ~]# pvchange --addtag A /dev/sdf1
  Physical volume "/dev/sdf1" changed
  1 physical volume changed / 0 physical volumes not changed
[root@host-117 ~]# pvchange --addtag A /dev/sdd2
  Physical volume "/dev/sdd2" changed
  1 physical volume changed / 0 physical volumes not changed
[root@host-117 ~]# pvchange --addtag B /dev/sdf2
  Physical volume "/dev/sdf2" changed
  1 physical volume changed / 0 physical volumes not changed

[root@host-117 ~]# grep cling_tag_list /etc/lvm/lvm.conf
        cling_tag_list = [ "@A", "@B" ]

[root@host-117 ~]# pvs -a -o +pv_tags
  PV                      VG            Fmt  Attr PSize    PFree    PV Tags
  /dev/mirror_sanity/span                    ---        0        0         
  /dev/sda1               mirror_sanity lvm2 a--  1020.00m 1020.00m        
  /dev/sda2               mirror_sanity lvm2 a--  1020.00m 1020.00m        
  /dev/sdb1               mirror_sanity lvm2 a--  1020.00m 1020.00m        
  /dev/sdb2               mirror_sanity lvm2 a--  1020.00m 1020.00m        
  /dev/sdc1                                  ---        0        0         
  /dev/sdc2                                  ---        0        0         
  /dev/sdd1               mirror_sanity lvm2 a--  1020.00m 1020.00m        
  /dev/sdd2               mirror_sanity lvm2 a--  1020.00m  824.00m A      
  /dev/sde1               mirror_sanity lvm2 a--  1020.00m 1020.00m        
  /dev/sde2               mirror_sanity lvm2 a--  1020.00m 1020.00m        
  /dev/sdf1               mirror_sanity lvm2 a--  1020.00m  816.00m A      
  /dev/sdf2               mirror_sanity lvm2 a--  1020.00m 1020.00m B      


### Scenario 3: cling_by_tags, single PV used for span_mimage_1; however it is odd that the B-tagged PV (/dev/sdf2) wasn't used
[root@host-117 ~]# lvconvert --alloc cling_by_tags -m1 --type mirror mirror_sanity/span
  Logical volume mirror_sanity/span being converted.
  mirror_sanity/span: Converted: 1.00%
  mirror_sanity/span: Converted: 100.00%
[root@host-117 ~]# lvs -a -o +devices
  LV              VG            Attr       LSize    Log         Cpy%Sync Devices                          
  span            mirror_sanity mwi-a-m--- 400.00m  [span_mlog] 100.00   span_mimage_0(0),span_mimage_1(0)
  [span_mimage_0] mirror_sanity iwi-aom--- 400.00m                       /dev/sdf1(100)                   
  [span_mimage_0] mirror_sanity iwi-aom--- 400.00m                       /dev/sdd2(100)                   
  [span_mimage_1] mirror_sanity iwi-aom--- 400.00m                       /dev/sdb1(0)                     
  [span_mlog]     mirror_sanity lwl-aom---   4.00m                       /dev/sda1(0)                     
# This appears "fixed" when compared to what happened in comment #11:
[root@host-109 ~]# lvconvert --alloc cling_by_tags -m1 --type mirror mirror_sanity/span
[root@host-109 ~]# lvs -a -o +devices
  LV              VG            Attr       LSize    Log       Cpy%Sync Devices
  span            mirror_sanity mwi-a-m--- 400.00m  span_mlog 100.00   span_mimage_0(0),span_mimage_1(0)
  [span_mimage_0] mirror_sanity iwi-aom--- 400.00m                     /dev/sdd1(100)
  [span_mimage_0] mirror_sanity iwi-aom--- 400.00m                     /dev/sdb1(100)
  [span_mimage_1] mirror_sanity iwi-aom--- 400.00m                     /dev/sdb1(149)
  [span_mimage_1] mirror_sanity iwi-aom--- 400.00m                     /dev/sdd1(151)
  [span_mlog]     mirror_sanity lwl-aom---   4.00m                     /dev/sdc2(0)




### Scenario 4: cling, single PV used for span_mimage_1; however it is odd that the B-tagged PV (/dev/sdf2) wasn't used
[root@host-117 ~]# lvconvert --alloc cling -m1 --type mirror mirror_sanity/span
  Logical volume mirror_sanity/span being converted.
  mirror_sanity/span: Converted: 1.00%
  mirror_sanity/span: Converted: 100.00%
[root@host-117 ~]# lvs -a -o +devices
  LV              VG            Attr       LSize    Log         Cpy%Sync Devices                          
  span            mirror_sanity mwi-a-m--- 400.00m  [span_mlog] 100.00   span_mimage_0(0),span_mimage_1(0)
  [span_mimage_0] mirror_sanity iwi-aom--- 400.00m                       /dev/sdf1(100)                   
  [span_mimage_0] mirror_sanity iwi-aom--- 400.00m                       /dev/sdd2(100)                   
  [span_mimage_1] mirror_sanity iwi-aom--- 400.00m                       /dev/sdb1(0)                     
  [span_mlog]     mirror_sanity lwl-aom---   4.00m                       /dev/sda1(0)                     
# This was never attempted in comment #11




### Scenario 5: no alloc, single PV used for span_mimage_1, this seems fine since there was no request to use tags (and is what I would have expected in Scenario 1) 
[root@host-117 ~]# lvconvert -m1 --type mirror mirror_sanity/span
  Logical volume mirror_sanity/span being converted.
  mirror_sanity/span: Converted: 1.00%
  mirror_sanity/span: Converted: 100.00%
[root@host-117 ~]# lvs -a -o +devices
  LV              VG            Attr       LSize    Log         Cpy%Sync Devices                          
  span            mirror_sanity mwi-a-m--- 400.00m  [span_mlog] 100.00   span_mimage_0(0),span_mimage_1(0)
  [span_mimage_0] mirror_sanity iwi-aom--- 400.00m                       /dev/sdf1(100)                   
  [span_mimage_0] mirror_sanity iwi-aom--- 400.00m                       /dev/sdd2(100)                   
  [span_mimage_1] mirror_sanity iwi-aom--- 400.00m                       /dev/sdb1(0)                     
  [span_mlog]     mirror_sanity lwi-aom---   4.00m                       /dev/sda1(0)                     
# This appears "fixed" when compared to what happened in comment #11 (and continues to happen in Scenario 1 when no tags exist):
[root@host-109 ~]# lvconvert -m1 --type mirror mirror_sanity/span
[root@host-109 ~]# lvs -a -o +devices
  LV              VG            Attr       LSize    Log       Cpy%Sync Devices
  span            mirror_sanity mwi-a-m--- 400.00m  span_mlog 100.00   span_mimage_0(0),span_mimage_1(0)
  [span_mimage_0] mirror_sanity iwi-aom--- 400.00m                     /dev/sdd1(100)
  [span_mimage_0] mirror_sanity iwi-aom--- 400.00m                     /dev/sdb1(100)
  [span_mimage_1] mirror_sanity iwi-aom--- 400.00m                     /dev/sdb1(149)
  [span_mimage_1] mirror_sanity iwi-aom--- 400.00m                     /dev/sdd1(151)
  [span_mlog]     mirror_sanity lwi-aom---   4.00m                     /dev/sdc2(0)



### Scenarios 6-8: These appear to be "your" scenario in comment #24, where the PVs of the span linear have tags A and B respectively, rather than both having tag A
[root@host-117 ~]# lvs -a -o +devices
  LV   VG            Attr       LSize    Devices       
  span mirror_sanity -wi-a----- 400.00m  /dev/sdf1(100)
  span mirror_sanity -wi-a----- 400.00m  /dev/sdd2(100)

[root@host-117 ~]# grep cling_tag_list /etc/lvm/lvm.conf
        cling_tag_list = [ "@A", "@B" ]

[root@host-117 ~]# pvchange --addtag A /dev/sdf1
  Physical volume "/dev/sdf1" changed
  1 physical volume changed / 0 physical volumes not changed
[root@host-117 ~]# pvchange --addtag B /dev/sdd2
  Physical volume "/dev/sdd2" changed
  1 physical volume changed / 0 physical volumes not changed
[root@host-117 ~]#  pvchange --addtag A /dev/sdd1
  Physical volume "/dev/sdd1" changed
  1 physical volume changed / 0 physical volumes not changed
[root@host-117 ~]# pvs -a -o +pv_tags
  PV                      VG            Fmt  Attr PSize  PFree  PV Tags
  /dev/mirror_sanity/span                    ---      0      0         
  /dev/sda1               mirror_sanity lvm2 a--   9.99g  9.99g        
  /dev/sda2               mirror_sanity lvm2 a--   9.99g  9.99g        
  /dev/sdb1               mirror_sanity lvm2 a--   9.99g  9.99g        
  /dev/sdb2               mirror_sanity lvm2 a--   9.99g  9.99g        
  /dev/sdc1                             lvm2 ---  10.00g 10.00g        
  /dev/sdc2                                  ---      0      0         
  /dev/sdd1               mirror_sanity lvm2 a--   9.99g  9.99g A      
  /dev/sdd2               mirror_sanity lvm2 a--   9.99g  9.80g B      
  /dev/sde1               mirror_sanity lvm2 a--   9.99g  9.99g        
  /dev/sde2               mirror_sanity lvm2 a--   9.99g  9.99g        
  /dev/sdf1               mirror_sanity lvm2 a--   9.99g  9.79g A      
  /dev/sdf2               mirror_sanity lvm2 a--   9.99g  9.99g        
  /dev/sdg1                             lvm2 ---  10.00g 10.00g        
  /dev/sdg2                                  ---      0      0         
  /dev/sdh1                             lvm2 ---  10.00g 10.00g        
  /dev/sdh2                                  ---      0      0         

[root@host-117 ~]# lvs -a -o +devices
  LV   VG            Attr       LSize    Devices       
  span mirror_sanity -wi-a----- 400.00m  /dev/sdf1(100)
  span mirror_sanity -wi-a----- 400.00m  /dev/sdd2(100)

# Scenario 6:
[root@host-117 ~]# lvconvert --alloc cling_by_tags -m1 --type mirror mirror_sanity/span
  Insufficient suitable allocatable extents for logical volume : 49 more required
  Unable to allocate extents for mirror(s).
# Seems correct since no other tagged PVs exist

# Scenario 7:
[root@host-117 ~]# lvconvert --alloc cling -m1 --type mirror mirror_sanity/span
  Insufficient suitable allocatable extents for logical volume : 49 more required
  Unable to allocate extents for mirror(s).
# Seems correct since no other tagged PVs exist

# Scenario 8:
[root@host-117 ~]# lvconvert -m1 --type mirror mirror_sanity/span
  Logical volume mirror_sanity/span being converted.
  mirror_sanity/span: Converted: 1.00%
  mirror_sanity/span: Converted: 100.00%
[root@host-117 ~]# lvs -a -o +devices
  LV              VG            Attr       LSize    Log         Cpy%Sync Devices                          
  span            mirror_sanity mwi-a-m--- 400.00m  [span_mlog] 100.00   span_mimage_0(0),span_mimage_1(0)
  [span_mimage_0] mirror_sanity iwi-aom--- 400.00m                       /dev/sdf1(100)                   
  [span_mimage_0] mirror_sanity iwi-aom--- 400.00m                       /dev/sdd2(100)                   
  [span_mimage_1] mirror_sanity iwi-aom--- 400.00m                       /dev/sdd2(0)                     
  [span_mimage_1] mirror_sanity iwi-aom--- 400.00m                       /dev/sdf1(0)                     
  [span_mlog]     mirror_sanity lwi-aom---   4.00m                       /dev/sda1(0)                     
# Seems wrong, same as scenario 1

Comment 28 errata-xmlrpc 2016-11-04 04:08:13 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-1445.html

