Bug 523324 - RFE: lvm should better attempt to use whole devices when adding mirror legs
Summary: RFE: lvm should better attempt to use whole devices when adding mirror legs
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 5
Classification: Red Hat
Component: lvm2
Version: 5.4
Hardware: All
OS: Linux
Priority: low
Severity: low
Target Milestone: rc
Target Release: ---
Assignee: Alasdair Kergon
QA Contact: Cluster QE
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2009-09-14 21:21 UTC by Corey Marthaler
Modified: 2012-02-21 06:02 UTC (History)
8 users

Fixed In Version: lvm2-2.02.88-1.el5
Doc Type: Enhancement
Doc Text:
The updated allocation policy now better handles allocation of new segments for multi-segment mirrors (mirrors that were repeatedly extended).
Clone Of:
Environment:
Last Closed: 2012-02-21 06:02:30 UTC


Attachments
verbose output from lvconvert cmd (126.20 KB, text/plain)
2009-10-21 18:52 UTC, Corey Marthaler


Links
System ID Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2012:0161 normal SHIPPED_LIVE lvm2 bug fix and enhancement update 2012-02-20 15:07:59 UTC

Description Corey Marthaler 2009-09-14 21:21:33 UTC
Description of problem:
This isn't that big a deal, but I noticed when I was testing mirror device failures on 2 similarly segmented mirrors (mirrors that had been extended in the past and whose mimages were contained on multiple devices) that when new legs were added (i.e. an upconvert was attempted) after a successful device failure, the first mirror's new 3rd mimage leg was a whole device, yet the second mirror's new 3rd mimage leg was segmented onto two different devices.

Granted, I'm partial because this ended up breaking my test's logic, but with 140G PVs and mirror legs that are only 3G, why is lvm segmenting the new leg on only the one mirror? I would think they should each be on a single device, or at least be consistent.

# Test output:
Recreating PVs /dev/sdd1
  WARNING: Volume group taft is not consistent
  WARNING: Volume Group taft is not consistent
  WARNING: Volume group taft is not consistent
Extending the recreated PVs back into VG taft 
Up converting linear(s) back to mirror(s) on taft-04-bond...
taft-04-bond: lvconvert -m 2 -b taft/mirrorA /dev/sdd1:0-10000 /dev/sdb1:0-10000 /dev/sdc1:0-10000 /dev/sdg1:0-150
taft-04-bond: lvconvert -m 2 -b taft/mirrorB /dev/sdd1:0-10000 /dev/sdb1:0-10000 /dev/sdc1:0-10000 /dev/sdg1:0-150


# Here's what the mirrors look like now. Note that mimage_5 on the first mirror contains only /dev/sdd1, while on the second mirror it contains both /dev/sdd1 and /dev/sdb1.
[root@taft-01 ~]# lvs -a -o +devices
mirrorA            taft       mwi-ao  3.00G mirrorA_mlog 100.00 mirrorA_mimage_3(0),mirrorA_mimage_4(0),mirrorA_mimage_5(0)
[mirrorA_mimage_3] taft       iwi-ao  3.00G                     /dev/sdb1(0)
[mirrorA_mimage_3] taft       iwi-ao  3.00G                     /dev/sde1(0)
[mirrorA_mimage_4] taft       iwi-ao  3.00G                     /dev/sdc1(0)
[mirrorA_mimage_4] taft       iwi-ao  3.00G                     /dev/sdf1(0)
[mirrorA_mimage_5] taft       iwi-ao  3.00G                     /dev/sdd1(0)
[mirrorA_mlog]     taft       lwi-ao  4.00M                     /dev/sdg1(0)
mirrorB            taft       mwi-ao  3.00G mirrorB_mlog 100.00 mirrorB_mimage_3(0),mirrorB_mimage_4(0),mirrorB_mimage_5(0)
[mirrorB_mimage_3] taft       iwi-ao  3.00G                     /dev/sdb1(512)
[mirrorB_mimage_3] taft       iwi-ao  3.00G                     /dev/sde1(256)
[mirrorB_mimage_4] taft       iwi-ao  3.00G                     /dev/sdc1(512)
[mirrorB_mimage_4] taft       iwi-ao  3.00G                     /dev/sdf1(256)
[mirrorB_mimage_5] taft       iwi-ao  3.00G                     /dev/sdd1(768)
[mirrorB_mimage_5] taft       iwi-ao  3.00G                     /dev/sdb1(1024)
[mirrorB_mlog]     taft       lwi-ao  4.00M                     /dev/sdg1(1)
                                  
[root@taft-01 ~]# pvscan
  PV /dev/sdh1   VG taft         lvm2 [135.66 GB / 135.66 GB free]
  PV /dev/sdf1   VG taft         lvm2 [135.66 GB / 133.66 GB free]
  PV /dev/sde1   VG taft         lvm2 [135.66 GB / 133.66 GB free]
  PV /dev/sdg1   VG taft         lvm2 [135.66 GB / 135.66 GB free]
  PV /dev/sdb1   VG taft         lvm2 [135.66 GB / 130.66 GB free]
  PV /dev/sdc1   VG taft         lvm2 [135.66 GB / 131.66 GB free]
  PV /dev/sdd1   VG taft         lvm2 [135.66 GB / 130.66 GB free]
  PV /dev/sda2   VG VolGroup00   lvm2 [68.12 GB / 0    free]
  Total: 8 [1017.77 GB] / in use: 8 [1017.77 GB] / in no VG: 0 [0   ]


Version-Release number of selected component (if applicable):
2.6.18-160.el5

lvm2-2.02.46-8.el5    BUILT: Thu Jun 18 08:06:12 CDT 2009
lvm2-cluster-2.02.46-8.el5    BUILT: Thu Jun 18 08:05:27 CDT 2009
cmirror-1.1.39-2.el5    BUILT: Mon Jul 27 15:39:05 CDT 2009
kmod-cmirror-0.1.22-1.el5    BUILT: Mon Jul 27 15:28:46 CDT 2009


How reproducible:
Often

Comment 1 Jonathan Earl Brassow 2009-10-14 15:50:01 UTC
Allocation code is hugely complex.  I don't think it's wise to muddle around in it in the middle of a RHEL5 series.

I'll take a quick look, and if it is an oversight, then I will fix it.  However, if it requires allocation code changes, it would be much safer to leave this to a future release.

(Probably 'devel_nack')

Comment 2 Alasdair Kergon 2009-10-14 16:22:40 UTC
Post the actual layouts before and after the changes you're concerned about.  If you run with -vvvv you'll see sections like this in the output, which is what we need to explain how the code is behaving (grep for ^metadata):

metadata/pv_map.c:49   Allowing allocation on /dev/loop0 start PE 3 length 21
metadata/pv_manip.c:272   /dev/loop0 0:      0      3: lvol0(0:0)
metadata/pv_manip.c:272   /dev/loop0 1:      3      3: lvol1(0:0)
metadata/pv_manip.c:272   /dev/loop0 2:      6     18: NULL(0:0)

Comment 3 Corey Marthaler 2009-10-19 19:40:55 UTC
In this example, the final convert of mirror mB from 2-way to 3-way resulted in a leg made up of 3 segments, on only 2 disks. That doesn't make any sense at all. Why start on sdc1 at 375, switch to sdf1, and then go back to sdc1 at 500? I'd think the entire mimage should just be completely on sdc1.

  [mB_mimage_2] taft   iwi-ao  1.46G                  /dev/sdc1(375)
  [mB_mimage_2] taft   iwi-ao  1.46G                  /dev/sdf1(375)
  [mB_mimage_2] taft   iwi-ao  1.46G                  /dev/sdc1(500)


[root@taft-01 ~]# lvs -a -o +devices
  LV            VG     Attr   LSize   Log     Copy%   Devices                      
  mA            taft   mwi-a-  1.46G  mA_mlog 100.00  mA_mimage_0(0),mA_mimage_1(0)
  [mA_mimage_0] taft   iwi-ao  1.46G                  /dev/sdb1(0)                 
  [mA_mimage_0] taft   iwi-ao  1.46G                  /dev/sdb1(250)               
  [mA_mimage_0] taft   iwi-ao  1.46G                  /dev/sdb1(500)               
  [mA_mimage_1] taft   iwi-ao  1.46G                  /dev/sdc1(0)                 
  [mA_mlog]     taft   lwi-ao  4.00M                  /dev/sde1(0)                 
  mB            taft   mwi-a-  1.46G  mB_mlog  69.07  mB_mimage_0(0),mB_mimage_1(0)
  [mB_mimage_0] taft   Iwi-ao  1.46G                  /dev/sdd1(0)                 
  [mB_mimage_1] taft   Iwi-ao  1.46G                  /dev/sdb1(125)               
  [mB_mimage_1] taft   Iwi-ao  1.46G                  /dev/sdb1(375)               
  [mB_mimage_1] taft   Iwi-ao  1.46G                  /dev/sdb1(625)               
  [mB_mlog]     taft   lwi-ao  4.00M                  /dev/sde1(1)                 

[root@taft-01 ~]# lvconvert -m 2 taft/mA
  taft/mA: Converted: 21.3%             
  taft/mA: Converted: 31.7%             
  taft/mA: Converted: 55.2%             
  taft/mA: Converted: 78.1%             
  taft/mA: Converted: 85.1%             
  taft/mA: Converted: 98.4%             
  taft/mA: Converted: 100.0%            
  Logical volume mA converted.          

[root@taft-01 ~]# lvs -a -o +devices
  LV            VG     Attr   LSize   Log     Copy%   Devices                                     
  mA            taft   mwi-a-  1.46G  mA_mlog 100.00  mA_mimage_0(0),mA_mimage_1(0),mA_mimage_2(0)
  [mA_mimage_0] taft   iwi-ao  1.46G                  /dev/sdb1(0)                                
  [mA_mimage_0] taft   iwi-ao  1.46G                  /dev/sdb1(250)                              
  [mA_mimage_0] taft   iwi-ao  1.46G                  /dev/sdb1(500)                              
  [mA_mimage_1] taft   iwi-ao  1.46G                  /dev/sdc1(0)                                
  [mA_mimage_2] taft   iwi-ao  1.46G                  /dev/sdf1(0)                                
  [mA_mlog]     taft   lwi-ao  4.00M                  /dev/sde1(0)                                
  mB            taft   mwi-a-  1.46G  mB_mlog 100.00  mB_mimage_0(0),mB_mimage_1(0)
  [mB_mimage_0] taft   iwi-ao  1.46G                  /dev/sdd1(0)
  [mB_mimage_1] taft   iwi-ao  1.46G                  /dev/sdb1(125)
  [mB_mimage_1] taft   iwi-ao  1.46G                  /dev/sdb1(375)
  [mB_mimage_1] taft   iwi-ao  1.46G                  /dev/sdb1(625)
  [mB_mlog]     taft   lwi-ao  4.00M                  /dev/sde1(1)

[root@taft-01 ~]# lvconvert -m 2 taft/mB
  taft/mB: Converted: 18.7%
  taft/mB: Converted: 40.3%
  taft/mB: Converted: 65.9%
  taft/mB: Converted: 85.3%
  taft/mB: Converted: 100.0%
  Logical volume mB converted.

[root@taft-01 ~]# lvs -a -o +devices
  LV            VG     Attr   LSize   Log     Copy%   Devices
  mA            taft   mwi-a-  1.46G  mA_mlog 100.00  mA_mimage_0(0),mA_mimage_1(0),mA_mimage_2(0)
  [mA_mimage_0] taft   iwi-ao  1.46G                  /dev/sdb1(0)
  [mA_mimage_0] taft   iwi-ao  1.46G                  /dev/sdb1(250)
  [mA_mimage_0] taft   iwi-ao  1.46G                  /dev/sdb1(500)
  [mA_mimage_1] taft   iwi-ao  1.46G                  /dev/sdc1(0)
  [mA_mimage_2] taft   iwi-ao  1.46G                  /dev/sdf1(0)
  [mA_mlog]     taft   lwi-ao  4.00M                  /dev/sde1(0)
  mB            taft   mwi-a-  1.46G  mB_mlog 100.00  mB_mimage_0(0),mB_mimage_1(0),mB_mimage_2(0)
  [mB_mimage_0] taft   iwi-ao  1.46G                  /dev/sdd1(0)
  [mB_mimage_1] taft   iwi-ao  1.46G                  /dev/sdb1(125)
  [mB_mimage_1] taft   iwi-ao  1.46G                  /dev/sdb1(375)
  [mB_mimage_1] taft   iwi-ao  1.46G                  /dev/sdb1(625)
  [mB_mimage_2] taft   iwi-ao  1.46G                  /dev/sdc1(375)
  [mB_mimage_2] taft   iwi-ao  1.46G                  /dev/sdf1(375)
  [mB_mimage_2] taft   iwi-ao  1.46G                  /dev/sdc1(500)
  [mB_mlog]     taft   lwi-ao  4.00M                  /dev/sde1(1)

Comment 5 Corey Marthaler 2009-10-21 18:52:06 UTC
Created attachment 365577 [details]
verbose output from lvconvert cmd

This is the -vvvv output from the lvconvert cmd that created the 3rd mimage:

 lvconvert -vvvv -m 2 taft/mB > out 2>&1

[mB_mimage_2] taft iwi-ao  1.46G  /dev/sdc1(375)                              
[mB_mimage_2] taft iwi-ao  1.46G  /dev/sdf1(375)                              
[mB_mimage_2] taft iwi-ao  1.46G  /dev/sdc1(500)

Comment 11 Alasdair Kergon 2010-11-30 01:19:16 UTC
The reason it's doing this: the leg is divided into 3 segments and it allocates them in order, taking the largest remaining free area each time, and that largest area bounces between the two available disks, so you get c then f then c. To fix this, it needs to understand that it should preferentially allocate contiguously with the space it just allocated, I suppose (even if that causes a new split). I'll think about it.
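To illustrate the greedy choice described above, here is a toy model. This is not the actual lvm2 allocator; the PV names, chunk sizes, and free-extent counts are invented for illustration:

```python
# Toy model of the behaviour described above: each fixed-size segment is
# taken from the PV with the largest remaining free area, so the choice
# can bounce between two disks (c, then f, then c).  NOT the real lvm2
# allocator; PV names and free-extent counts are made up.

def greedy_allocate(chunks, free):
    """Place each chunk on the PV with the largest remaining free area."""
    layout = []
    for chunk in chunks:
        pv = max(free, key=free.get)  # largest remaining area wins
        free[pv] -= chunk
        layout.append(pv)
    return layout

def cling_allocate(chunks, free):
    """Same, but prefer the PV used for the previous chunk if it still
    fits -- roughly the 'cling to what we just allocated' idea above."""
    layout, last = [], None
    for chunk in chunks:
        if last is not None and free[last] >= chunk:
            pv = last  # stay on the disk we just used
        else:
            pv = max(free, key=free.get)
        free[pv] -= chunk
        layout.append(pv)
        last = pv
    return layout

# Three 125-extent segments; sdc1 starts out slightly larger than sdf1.
print(greedy_allocate([125] * 3, {"sdc1": 260, "sdf1": 250}))
# bounces: ['sdc1', 'sdf1', 'sdc1']
print(cling_allocate([125] * 3, {"sdc1": 260, "sdf1": 250}))
# stays put while it can: ['sdc1', 'sdc1', 'sdf1']
```

With the greedy rule, after the first 125 extents come off sdc1 its remaining area (135) drops below sdf1's (250), so the second segment jumps disks, and the third jumps back, reproducing the c/f/c pattern in the layout above.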

Comment 12 RHEL Product and Program Management 2011-01-11 20:53:42 UTC
This request was evaluated by Red Hat Product Management for
inclusion in the current release of Red Hat Enterprise Linux.
Because the affected component is not scheduled to be updated in the
current release, Red Hat is unfortunately unable to address this
request at this time. Red Hat invites you to ask your support
representative to propose this request, if appropriate and relevant,
in the next release of Red Hat Enterprise Linux.

Comment 13 RHEL Product and Program Management 2011-01-11 23:04:24 UTC
This request was erroneously denied for the current release of
Red Hat Enterprise Linux.  The error has been fixed and this
request has been re-proposed for the current release.

Comment 14 Alasdair Kergon 2011-02-22 16:22:53 UTC
I'm still struggling to specify the precise requirement here.

Elsewhere, we have requests to maximise the use of available disks.

We try policies contiguous, cling and normal in turn.

Cling only takes account of space already part of the LV so doesn't apply in this example.  But I'm wondering if we need to consider 'cling' within the 'normal' policy too, checking against devices used by not-yet-committed extents within the same allocation transaction.

Comment 15 Alasdair Kergon 2011-02-27 02:00:09 UTC
I checked in some code.  I extended part of the cling policy to take effect within the normal policy.  I also extended it to take account of already-reserved-but-not-yet-committed extents.  This way, it can fill some parallel areas using the cling policy and the remaining ones using the (old) normal policy.  I also adjusted some of the -vvvv log messages to make some of the decisions it's taking a bit clearer.

There's an lvm.conf setting to (largely) disable the new behaviour, in case new strange cases are discovered.
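For reference, I believe the switch in question is allocation/maximise_cling (an assumption on my part; the commit on the lvm-devel list has the authoritative name):

```
# lvm.conf sketch -- assuming the setting referred to above is
# allocation/maximise_cling; set it to 0 to largely restore the
# old normal-policy behaviour.
allocation {
    maximise_cling = 1
}
```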

http://www.redhat.com/archives/lvm-devel/2011-February/msg00143.html

This is still a work-in-progress as people find more configurations where the layout the tools select could be improved.

Comment 17 Milan Broz 2011-04-26 09:46:12 UTC
Changes to the allocation policy need to be tested in upstream code first, because no upstream release yet includes them; making the change in 5.7 is too risky.

Moving this to 5.8 timeframe (it will appear in rebased lvm2 code in 5.8).

Comment 19 Brian Gollaher 2011-08-17 17:06:54 UTC
This functionality is expected in the rebased LVM.

Comment 21 Milan Broz 2011-10-17 21:36:23 UTC
Fixed in lvm2-2.02.88-1.el5.

Comment 23 Corey Marthaler 2011-11-29 21:49:03 UTC
This appears to be fixed in the latest rpms based on one iteration of segmented mirror device failure. That said, I'm unable to run more than one iteration of this test due to bug 751135.

2.6.18-274.el5

lvm2-2.02.88-4.el5    BUILT: Wed Nov 16 09:40:55 CST 2011
lvm2-cluster-2.02.88-4.el5    BUILT: Wed Nov 16 09:46:51 CST 2011
device-mapper-1.02.67-2.el5    BUILT: Mon Oct 17 08:31:56 CDT 2011
device-mapper-event-1.02.67-2.el5    BUILT: Mon Oct 17 08:31:56 CDT 2011
cmirror-1.1.39-10.el5    BUILT: Wed Sep  8 16:32:05 CDT 2010
kmod-cmirror-0.1.22-3.el5    BUILT: Tue Dec 22 13:39:47 CST 2009


================================================================================
Iteration 0.1 started at Mon Nov 28 18:24:01 CST 2011
================================================================================
warning: mirrorA_mimage_0 is segmented, returning only the first device in this mimage
warning: mirrorA_mimage_1 is segmented, returning only the first device in this mimage
warning: mirrorA_mimage_2 is segmented, returning only the first device in this mimage

Scenario kill_random_legs: Kill random legs

********* Mirror info for this scenario *********
* mirrors:            mirrorA mirrorB
* leg devices:        /dev/sdb1 /dev/sdc1 /dev/sdd1
* log devices:        /dev/sde1
* failpv(s):          /dev/sdd1
* failnode(s):        taft-01 taft-02 taft-03 taft-04
* leg fault policy:   allocate
* log fault policy:   allocate
*************************************************

Mirror Structure(s):
  LV                 Attr   LSize  Copy%  Devices                                                    
  mirrorA            mwi-ao  1.00G 100.00 mirrorA_mimage_0(0),mirrorA_mimage_1(0),mirrorA_mimage_2(0)
  [mirrorA_mimage_0] iwi-ao  1.00G        /dev/sdb1(0)                                               
  [mirrorA_mimage_0] iwi-ao  1.00G        /dev/sdb1(250)                                             
  [mirrorA_mimage_1] iwi-ao  1.00G        /dev/sdc1(0)                                               
  [mirrorA_mimage_1] iwi-ao  1.00G        /dev/sdc1(250)                                             
  [mirrorA_mimage_2] iwi-ao  1.00G        /dev/sdd1(0)                                               
  [mirrorA_mimage_2] iwi-ao  1.00G        /dev/sdd1(250)                                             
  [mirrorA_mlog]     lwi-ao  4.00M        /dev/sde1(0)                                               
  mirrorB            mwi-ao  1.00G 100.00 mirrorB_mimage_0(0),mirrorB_mimage_1(0),mirrorB_mimage_2(0)
  [mirrorB_mimage_0] iwi-ao  1.00G        /dev/sdb1(125)                                             
  [mirrorB_mimage_0] iwi-ao  1.00G        /dev/sdb1(381)                                             
  [mirrorB_mimage_1] iwi-ao  1.00G        /dev/sdc1(125)                                             
  [mirrorB_mimage_1] iwi-ao  1.00G        /dev/sdc1(381)                                             
  [mirrorB_mimage_2] iwi-ao  1.00G        /dev/sdd1(125)                                             
  [mirrorB_mimage_2] iwi-ao  1.00G        /dev/sdd1(381)                                             
  [mirrorB_mlog]     lwi-ao  4.00M        /dev/sde1(1)                                               

PV=/dev/sdd1
        mirrorA_mimage_2: 3.1
        mirrorA_mimage_2: 3.1
        mirrorB_mimage_2: 3.1
        mirrorB_mimage_2: 3.1
PV=/dev/sdd1
        mirrorA_mimage_2: 3.1
        mirrorA_mimage_2: 3.1
        mirrorB_mimage_2: 3.1
        mirrorB_mimage_2: 3.1

Writing verification files (checkit) to mirror(s) on...
        ---- taft-01 ----
        ---- taft-02 ----
        ---- taft-03 ----
        ---- taft-04 ----

Sleeping 10 seconds to get some outstanding GFS I/O locks before the failure 
Verifying files (checkit) on mirror(s) on...
        ---- taft-01 ----
        ---- taft-02 ----
        ---- taft-03 ----
        ---- taft-04 ----

Disabling device sdd on taft-01
Disabling device sdd on taft-02
Disabling device sdd on taft-03
Disabling device sdd on taft-04

Attempting I/O to cause mirror down conversion(s) on taft-01
10+0 records in
10+0 records out
41943040 bytes (42 MB) copied, 0.387624 seconds, 108 MB/s
10+0 records in
10+0 records out
41943040 bytes (42 MB) copied, 0.289258 seconds, 145 MB/s

Verifying current sanity of lvm after the failure
  Couldn't find device with uuid zhgtC4-B7pd-z7vz-N7VN-glzU-uvUz-DZAmcX.

Mirror Structure(s):
  Couldn't find device with uuid zhgtC4-B7pd-z7vz-N7VN-glzU-uvUz-DZAmcX.
  LV                    Attr   LSize  Copy%  Devices                                                    
  mirrorA               mwi-ao  1.00G 100.00 mirrorA_mimage_0(0),mirrorA_mimage_1(0),mirrorA_mimage_2(0)
  [mirrorA_mimage_0]    iwi-ao  1.00G        /dev/sdb1(0)                                               
  [mirrorA_mimage_0]    iwi-ao  1.00G        /dev/sdb1(250)                                             
  [mirrorA_mimage_1]    iwi-ao  1.00G        /dev/sdc1(0)                                               
  [mirrorA_mimage_1]    iwi-ao  1.00G        /dev/sdc1(250)                                             
  [mirrorA_mimage_2]    iwi-ao  1.00G        /dev/sdf1(0)                                               
  [mirrorA_mlog]        lwi-ao  4.00M        /dev/sde1(0)                                               
  mirrorB               cwi-ao  1.00G 100.00 mirrorB_mimagetmp_2(0),mirrorB_mimage_2(0)                 
  [mirrorB_mimage_0]    iwi-ao  1.00G        /dev/sdb1(125)                                             
  [mirrorB_mimage_0]    iwi-ao  1.00G        /dev/sdb1(381)                                             
  [mirrorB_mimage_1]    iwi-ao  1.00G        /dev/sdc1(125)                                             
  [mirrorB_mimage_1]    iwi-ao  1.00G        /dev/sdc1(381)                                             
  [mirrorB_mimage_2]    iwi-ao  1.00G        /dev/sdf1(256)                                             
  [mirrorB_mimagetmp_2] mwi-ao  1.00G 100.00 mirrorB_mimage_0(0),mirrorB_mimage_1(0)                    
  [mirrorB_mlog]        lwi-ao  4.00M        /dev/sde1(1)                                               

Verify that each of the mirror repairs finished successfully

Verifying FAILED device /dev/sdd1 is *NOT* in the volume(s)
  Couldn't find device with uuid zhgtC4-B7pd-z7vz-N7VN-glzU-uvUz-DZAmcX.
olog: 1
Verifying LOG device(s) /dev/sde1 *ARE* in the mirror(s)
  Couldn't find device with uuid zhgtC4-B7pd-z7vz-N7VN-glzU-uvUz-DZAmcX.
Verifying LEG device /dev/sdb1 *IS* in the volume(s)
  Couldn't find device with uuid zhgtC4-B7pd-z7vz-N7VN-glzU-uvUz-DZAmcX.
Verifying LEG device /dev/sdc1 *IS* in the volume(s)
  Couldn't find device with uuid zhgtC4-B7pd-z7vz-N7VN-glzU-uvUz-DZAmcX.
verify the newly allocated dm devices were added as a result of the failures
Checking EXISTENCE of mirrorA_mimage_2 on:  taft-01 taft-02 taft-03 taft-04
Checking EXISTENCE of mirrorA_mimage_2 on:  taft-01 taft-02 taft-03 taft-04
Checking EXISTENCE of mirrorB_mimage_2 on:  taft-01 taft-02 taft-03 taft-04
Checking EXISTENCE of mirrorB_mimage_2 on:  taft-01 taft-02 taft-03 taft-04

Verify that the mirror image order remains the same after the down conversion
  Couldn't find device with uuid zhgtC4-B7pd-z7vz-N7VN-glzU-uvUz-DZAmcX.
  Couldn't find device with uuid zhgtC4-B7pd-z7vz-N7VN-glzU-uvUz-DZAmcX.
warning: mirrorA_mimage_0 is segmented, returning only the first device in this mimage
  Couldn't find device with uuid zhgtC4-B7pd-z7vz-N7VN-glzU-uvUz-DZAmcX.
warning: mirrorA_mimage_1 is segmented, returning only the first device in this mimage
  Couldn't find device with uuid zhgtC4-B7pd-z7vz-N7VN-glzU-uvUz-DZAmcX.
  Couldn't find device with uuid zhgtC4-B7pd-z7vz-N7VN-glzU-uvUz-DZAmcX.
  Couldn't find device with uuid zhgtC4-B7pd-z7vz-N7VN-glzU-uvUz-DZAmcX.
warning: mirrorB_mimage_0 is segmented, returning only the first device in this mimage
  Couldn't find device with uuid zhgtC4-B7pd-z7vz-N7VN-glzU-uvUz-DZAmcX.
warning: mirrorB_mimage_1 is segmented, returning only the first device in this mimage
  Couldn't find device with uuid zhgtC4-B7pd-z7vz-N7VN-glzU-uvUz-DZAmcX.
Verifying files (checkit) on mirror(s) on...
        ---- taft-01 ----
        ---- taft-02 ----
        ---- taft-03 ----
        ---- taft-04 ----

Enabling device sdd on taft-01
Enabling device sdd on taft-02
Enabling device sdd on taft-03
Enabling device sdd on taft-04

  WARNING: Inconsistent metadata found for VG taft - updating to use version 23
Recreating PVs /dev/sdd1
  WARNING: Volume group taft is not consistent
  Writing physical volume data to disk "/dev/sdd1"
Extending the recreated PVs back into VG taft

Waiting until all mirrors become fully syncd...
   2/2 mirror(s) are fully synced: ( 100.00% 100.00% )

Verifying files (checkit) on mirror(s) on...
        ---- taft-01 ----
        ---- taft-02 ----
        ---- taft-03 ----
        ---- taft-04 ----

Stopping the io load (collie/xdoio) on mirror(s)

Comment 24 Milan Broz 2011-12-06 23:00:24 UTC
    Technical note added. If any revisions are required, please edit the "Technical Notes" field
    accordingly. All revisions will be proofread by the Engineering Content Services team.
    
    New Contents:
The updated allocation policy now better handles allocation of new segments for multi-segment mirrors (mirrors that were repeatedly extended).

Comment 25 errata-xmlrpc 2012-02-21 06:02:30 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2012-0161.html

