Bug 1141386 - unable to change cluster VG attribute when it contains exclusively activated raid + snapshot
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: lvm2
Version: 6.6
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: low
Target Milestone: rc
Target Release: ---
Assignee: Zdenek Kabelac
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2014-09-12 22:11 UTC by Corey Marthaler
Modified: 2015-07-22 07:35 UTC
CC List: 8 users

Fixed In Version: lvm2-2.02.117-1.el6
Doc Type: Bug Fix
Doc Text:
When clustered locking was selected, the command improperly passed locks around the cluster, which caused changing of the volume group clustering attribute to malfunction. The code now correctly checks and propagates locks even for non-clustered VGs when clustered locking is used.
Clone Of:
Environment:
Last Closed: 2015-07-22 07:35:31 UTC


Attachments
-vvvv of the vgchange (48.17 KB, text/plain)
2014-09-15 19:12 UTC, Corey Marthaler


Links
System ID Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2015:1411 normal SHIPPED_LIVE lvm2 bug fix and enhancement update 2015-07-20 18:06:52 UTC

Description Corey Marthaler 2014-09-12 22:11:41 UTC
Description of problem:
In this scenario, the snapshot volumes are already inactive before attempting to change the cluster attribute.

[root@grant-01 ~]# vgs
  VG          #PV #LV #SN Attr   VSize  VFree
  raid_2_9109   8   6   5 wz--nc  1.95t 1.95t

[root@grant-01 ~]# lvs -a -o +devices
  LV                      VG          Attr       LSize   Origin       Devices
  raid_2_91090            raid_2_9109 owi---r---   1.00g              raid_2_91090_rimage_0(0),raid_2_91090_rimage_1(0),raid_2_91090_rimage_2(0)
  [raid_2_91090_rimage_0] raid_2_9109 Iwi---r---   1.00g              /dev/mapper/mpathap1(1)
  [raid_2_91090_rimage_1] raid_2_9109 Iwi---r---   1.00g              /dev/mapper/mpathbp1(1)
  [raid_2_91090_rimage_2] raid_2_9109 Iwi---r---   1.00g              /dev/mapper/mpathcp1(1)
  [raid_2_91090_rmeta_0]  raid_2_9109 ewi---r---   4.00m              /dev/mapper/mpathap1(0)
  [raid_2_91090_rmeta_1]  raid_2_9109 ewi---r---   4.00m              /dev/mapper/mpathbp1(0)
  [raid_2_91090_rmeta_2]  raid_2_9109 ewi---r---   4.00m              /dev/mapper/mpathcp1(0)
  snap0                   raid_2_9109 swi---s--- 100.00m raid_2_91090 /dev/mapper/mpathap1(257)
  snap1                   raid_2_9109 swi---s--- 100.00m raid_2_91090 /dev/mapper/mpathap1(282)
  snap2                   raid_2_9109 swi---s--- 100.00m raid_2_91090 /dev/mapper/mpathap1(307)
  snap3                   raid_2_9109 swi---s--- 100.00m raid_2_91090 /dev/mapper/mpathap1(332)
  snap4                   raid_2_9109 swi---s--- 100.00m raid_2_91090 /dev/mapper/mpathap1(357)

[root@grant-01 ~]# vgchange -cn raid_2_9109
  Snapshot logical volumes must be inactive when changing the cluster attribute.

Version-Release number of selected component (if applicable):
2.6.32-502.el6.x86_64

lvm2-2.02.111-2.el6    BUILT: Mon Sep  1 06:46:43 CDT 2014
lvm2-libs-2.02.111-2.el6    BUILT: Mon Sep  1 06:46:43 CDT 2014
lvm2-cluster-2.02.111-2.el6    BUILT: Mon Sep  1 06:46:43 CDT 2014
udev-147-2.57.el6    BUILT: Thu Jul 24 08:48:47 CDT 2014
device-mapper-1.02.90-2.el6    BUILT: Mon Sep  1 06:46:43 CDT 2014
device-mapper-libs-1.02.90-2.el6    BUILT: Mon Sep  1 06:46:43 CDT 2014
device-mapper-event-1.02.90-2.el6    BUILT: Mon Sep  1 06:46:43 CDT 2014
device-mapper-event-libs-1.02.90-2.el6    BUILT: Mon Sep  1 06:46:43 CDT 2014
device-mapper-persistent-data-0.3.2-1.el6    BUILT: Fri Apr  4 08:43:06 CDT 2014
cmirror-2.02.111-2.el6    BUILT: Mon Sep  1 06:46:43 CDT 2014

Comment 1 Peter Rajnoha 2014-09-15 08:35:57 UTC
This seems to be working on my machine:

[root@rhel6-b ~]# vgcreate vg /dev/sd[a-p]
  Clustered volume group "vg" successfully created

[root@rhel6-b ~]# lvcreate -l1 -m1 --type raid1 vg
  Logical volume "lvol0" created

[root@rhel6-b ~]# lvcreate -l1 -s vg/lvol0
  Logical volume "lvol1" created

[root@rhel6-b ~]# lvcreate -l1 -s vg/lvol0
  Logical volume "lvol2" created

[root@rhel6-b ~]# lvcreate -l1 -s vg/lvol0
  Logical volume "lvol3" created

[root@rhel6-b ~]# vgchange -an vg
  0 logical volume(s) in volume group "vg" now active

[root@rhel6-b ~]# lvs -a -o name,vg_name,attr,size,origin,layout,role vg
  LV               VG   Attr       LSize Origin Layout     Role                                      
  lvol0            vg   owi---r--- 4.00m        raid,raid1 public,origin,thickorigin,multithickorigin
  [lvol0_rimage_0] vg   Iwi---r--- 4.00m        linear     private,raid,image                        
  [lvol0_rimage_1] vg   Iwi---r--- 4.00m        linear     private,raid,image                        
  [lvol0_rmeta_0]  vg   ewi---r--- 4.00m        linear     private,raid,metadata                     
  [lvol0_rmeta_1]  vg   ewi---r--- 4.00m        linear     private,raid,metadata                     
  lvol1            vg   swi---s--- 4.00m lvol0  linear     public,snapshot,thicksnapshot             
  lvol2            vg   swi---s--- 4.00m lvol0  linear     public,snapshot,thicksnapshot             
  lvol3            vg   swi---s--- 4.00m lvol0  linear     public,snapshot,thicksnapshot             

[root@rhel6-b ~]# vgchange -cn vg
  Volume group "vg" successfully changed

[root@rhel6-b ~]# vgs
  VG       #PV #LV #SN Attr   VSize VFree
  VolGroup   1   2   0 wz--n- 9.51g    0 
  vg        16   4   3 wz--n- 1.94g 1.91g


Can you post the -vvvv of the vgchange -cn?

Comment 2 Peter Rajnoha 2014-09-15 08:53:05 UTC
(In reply to Peter Rajnoha from comment #1)
> Can you post the -vvvv of the vgchange -cn?

+ dmsetup info -c and dmsetup table when the problem appears

Comment 3 Peter Rajnoha 2014-09-15 09:11:09 UTC
(In reply to Peter Rajnoha from comment #1)
> This seems to be working on my machine:
> 
> [root@rhel6-b ~]# vgcreate vg /dev/sd[a-p]
>   Clustered volume group "vg" successfully created
> 
> [root@rhel6-b ~]# lvcreate -l1 -m1 --type raid1 vg
>   Logical volume "lvol0" created
> 
> [root@rhel6-b ~]# lvcreate -l1 -s vg/lvol0
>   Logical volume "lvol1" created
> 
> [root@rhel6-b ~]# lvcreate -l1 -s vg/lvol0
>   Logical volume "lvol2" created
> 
> [root@rhel6-b ~]# lvcreate -l1 -s vg/lvol0
>   Logical volume "lvol3" created
> 
> [root@rhel6-b ~]# vgchange -an vg
>   0 logical volume(s) in volume group "vg" now active

(...this deactivation always leaves dangling symlinks in /dev/vg/ for all snapshot LVs in my case - I've filed a new udev bug #1141690 for that)

Comment 4 Corey Marthaler 2014-09-15 19:12:24 UTC
Created attachment 937705 [details]
-vvvv of the vgchange

Comment 5 Corey Marthaler 2014-09-15 19:13:10 UTC
[root@taft-01 ~]# vgchange -an raid_1_609
  0 logical volume(s) in volume group "raid_1_609" now active

[root@taft-01 ~]# lvs -a -o name,vg_name,attr,size,origin,layout,role raid_1_609
  LV                     VG         Attr       LSize   Origin      Layout     Role                                      
  raid_1_6090            raid_1_609 owi---r---   1.00g             raid,raid1 public,origin,thickorigin,multithickorigin
  [raid_1_6090_rimage_0] raid_1_609 Iwi---r---   1.00g             linear     private,raid,image                        
  [raid_1_6090_rimage_1] raid_1_609 Iwi---r---   1.00g             linear     private,raid,image                        
  [raid_1_6090_rmeta_0]  raid_1_609 ewi---r---   4.00m             linear     private,raid,metadata                     
  [raid_1_6090_rmeta_1]  raid_1_609 ewi---r---   4.00m             linear     private,raid,metadata                     
  snap0                  raid_1_609 swi---s--- 100.00m raid_1_6090 linear     public,snapshot,thicksnapshot             
  snap1                  raid_1_609 swi---s--- 100.00m raid_1_6090 linear     public,snapshot,thicksnapshot             
  snap2                  raid_1_609 swi---s--- 100.00m raid_1_6090 linear     public,snapshot,thicksnapshot             

[root@taft-01 ~]# vgchange -cn raid_1_609
  Snapshot logical volumes must be inactive when changing the cluster attribute.
[root@taft-01 ~]# vgchange -vvvv -cn raid_1_609 > /tmp/vgchange 2>&1


[root@taft-01 ~]# lvchange -an raid_1_609/snap1
Change of snapshot raid_1_609/snap1 will also change its origin raid_1_609/raid_1_6090 and 2 other snapshot(s). Proceed? [y/n]: y
[root@taft-01 ~]# vgchange -cn raid_1_609
  Snapshot logical volumes must be inactive when changing the cluster attribute.

[root@taft-01 ~]# lvchange -an raid_1_609/snap0
Change of snapshot raid_1_609/snap0 will also change its origin raid_1_609/raid_1_6090 and 2 other snapshot(s). Proceed? [y/n]: y
[root@taft-01 ~]# lvchange -an raid_1_609/snap2
Change of snapshot raid_1_609/snap2 will also change its origin raid_1_609/raid_1_6090 and 2 other snapshot(s). Proceed? [y/n]: y

[root@taft-01 ~]# vgchange -cn raid_1_609
  Snapshot logical volumes must be inactive when changing the cluster attribute.

Comment 6 Zdenek Kabelac 2014-09-15 19:44:09 UTC
The test for active volumes is not correct - it queries snap1, while the only lock-holder is raid_1_6090.

I had already suspected there is a bug somewhere in the code where some code paths may take the wrong lock.

Comment 7 Zdenek Kabelac 2014-09-16 09:47:51 UTC
The upstream commit disables changing the clustered attribute when any LV in the VG is active:

http://www.redhat.com/archives/lvm-devel/2014-September/msg00084.html

Comment 8 Alasdair Kergon 2014-09-16 10:40:56 UTC
Please spell out in more detail why this change can no longer be applied to active linear and striped volumes like it used to be.

Comment 9 Corey Marthaler 2014-09-16 21:21:25 UTC
I forgot a step the test was doing that is needed to cause this state: a rename of one of the snap volumes (and then a rename back).

[root@hayes-01 ~]# vgcreate vg /dev/mapper/mpath[abcdefg]p*
  Clustered volume group "vg" successfully created

[root@hayes-01 ~]# lvcreate -l1 -m1 --type raid1 vg
  Logical volume "lvol0" created
[root@hayes-01 ~]# lvcreate -l1 -s vg/lvol0
  Logical volume "lvol1" created
[root@hayes-01 ~]# lvcreate -l1 -s vg/lvol0
  Logical volume "lvol2" created
[root@hayes-01 ~]# lvcreate -l1 -s vg/lvol0
  Logical volume "lvol3" created

[root@hayes-01 ~]# lvrename vg/lvol1 vg/lvol1B
  Renamed "lvol1" to "lvol1B" in volume group "vg"

[root@hayes-01 ~]# lvs -a -o name,vg_name,attr,size,origin,layout,role vg
  LV               VG   Attr       LSize Origin Layout     Role                                      
  lvol0            vg   owi-a-r--- 4.00m        raid,raid1 public,origin,thickorigin,multithickorigin
  [lvol0_rimage_0] vg   iwi-aor--- 4.00m        linear     private,raid,image                        
  [lvol0_rimage_1] vg   iwi-aor--- 4.00m        linear     private,raid,image                        
  [lvol0_rmeta_0]  vg   ewi-aor--- 4.00m        linear     private,raid,metadata                     
  [lvol0_rmeta_1]  vg   ewi-aor--- 4.00m        linear     private,raid,metadata                     
  lvol1B           vg   swi-a-s--- 4.00m lvol0  linear     public,snapshot,thicksnapshot             
  lvol2            vg   swi-a-s--- 4.00m lvol0  linear     public,snapshot,thicksnapshot             
  lvol3            vg   swi-a-s--- 4.00m lvol0  linear     public,snapshot,thicksnapshot
             
[root@hayes-01 ~]# lvrename vg/lvol1B vg/lvol1
  Renamed "lvol1B" to "lvol1" in volume group "vg"

[root@hayes-01 ~]# lvs -a -o name,vg_name,attr,size,origin,layout,role vg
  LV               VG   Attr       LSize Origin Layout     Role                                      
  lvol0            vg   owi-a-r--- 4.00m        raid,raid1 public,origin,thickorigin,multithickorigin
  [lvol0_rimage_0] vg   iwi-aor--- 4.00m        linear     private,raid,image                        
  [lvol0_rimage_1] vg   iwi-aor--- 4.00m        linear     private,raid,image                        
  [lvol0_rmeta_0]  vg   ewi-aor--- 4.00m        linear     private,raid,metadata                     
  [lvol0_rmeta_1]  vg   ewi-aor--- 4.00m        linear     private,raid,metadata                     
  lvol1            vg   swi-a-s--- 4.00m lvol0  linear     public,snapshot,thicksnapshot             
  lvol2            vg   swi-a-s--- 4.00m lvol0  linear     public,snapshot,thicksnapshot             
  lvol3            vg   swi-a-s--- 4.00m lvol0  linear     public,snapshot,thicksnapshot     
        
[root@hayes-01 ~]# vgchange -an vg
  0 logical volume(s) in volume group "vg" now active

[root@hayes-01 ~]# lvs -a -o name,vg_name,attr,size,origin,layout,role vg
  LV               VG   Attr       LSize Origin Layout     Role                                      
  lvol0            vg   owi---r--- 4.00m        raid,raid1 public,origin,thickorigin,multithickorigin
  [lvol0_rimage_0] vg   Iwi---r--- 4.00m        linear     private,raid,image                        
  [lvol0_rimage_1] vg   Iwi---r--- 4.00m        linear     private,raid,image                        
  [lvol0_rmeta_0]  vg   ewi---r--- 4.00m        linear     private,raid,metadata                     
  [lvol0_rmeta_1]  vg   ewi---r--- 4.00m        linear     private,raid,metadata                     
  lvol1            vg   swi---s--- 4.00m lvol0  linear     public,snapshot,thicksnapshot             
  lvol2            vg   swi---s--- 4.00m lvol0  linear     public,snapshot,thicksnapshot             
  lvol3            vg   swi---s--- 4.00m lvol0  linear     public,snapshot,thicksnapshot

[root@hayes-01 ~]# vgchange -cn vg
  Snapshot logical volumes must be inactive when changing the cluster attribute.
[root@hayes-01 ~]# vgs
  VG         #PV #LV #SN Attr   VSize  VFree
  vg          14   4   3 wz--nc  1.71t 1.71t
  vg_hayes01   1   3   0 wz--n- 74.01g    0 

[root@hayes-01 ~]# dmsetup info -c
Name               Maj Min Stat Open Targ Event  UUID                                                                
mpathe             253  16 L--w    2    1      1 mpath-36006016028a03400c76094780827e411                             
mpathap2           253   7 L--w    0    1      0 part2-mpath-36006016028a03400bf6094780827e411                       
mpathbp2           253  11 L--w    0    1      0 part2-mpath-36006016028a03400c16094780827e411                       
mpathd             253   9 L--w    2    1      1 mpath-36006016028a03400c56094780827e411                             
mpathap1           253   5 L--w    0    1      0 part1-mpath-36006016028a03400bf6094780827e411                       
mpathcp2           253  12 L--w    0    1      0 part2-mpath-36006016028a03400c36094780827e411                       
mpathbp1           253   8 L--w    0    1      0 part1-mpath-36006016028a03400c16094780827e411                       
mpathc             253   6 L--w    2    1      1 mpath-36006016028a03400c36094780827e411                             
mpathdp2           253  15 L--w    0    1      0 part2-mpath-36006016028a03400c56094780827e411                       
mpathcp1           253  10 L--w    0    1      0 part1-mpath-36006016028a03400c36094780827e411                       
mpathep2           253  22 L--w    0    1      0 part2-mpath-36006016028a03400c76094780827e411                       
mpathb             253   4 L--w    2    1      1 mpath-36006016028a03400c16094780827e411                             
mpathdp1           253  14 L--w    0    1      0 part1-mpath-36006016028a03400c56094780827e411                       
mpathfp2           253  19 L--w    0    1      0 part2-mpath-36006016028a03400c96094780827e411                       
mpathep1           253  21 L--w    0    1      0 part1-mpath-36006016028a03400c76094780827e411                       
mpathgp2           253  24 L--w    0    1      0 part2-mpath-36006016028a03400cb6094780827e411                       
mpatha             253   3 L--w    2    1      1 mpath-36006016028a03400bf6094780827e411                             
mpathfp1           253  17 L--w    0    1      0 part1-mpath-36006016028a03400c96094780827e411                       
mpathhp2           253  26 L--w    0    1      0 part2-mpath-36006016028a03400cd6094780827e411                       
mpathgp1           253  23 L--w    0    1      0 part1-mpath-36006016028a03400cb6094780827e411                       
mpathhp1           253  25 L--w    0    1      0 part1-mpath-36006016028a03400cd6094780827e411                       
vg_hayes01-lv_home 253   2 L--w    1    1      0 LVM-JXuzXGyMyJfjxHUImrwyY1rXdepCYw0Cp5bO2dXPcijwjPVWejETBXtqigXApTAZ
mpathh             253  20 L--w    2    1      1 mpath-36006016028a03400cd6094780827e411                             
mpathg             253  18 L--w    2    1      1 mpath-36006016028a03400cb6094780827e411                             
mpathf             253  13 L--w    2    1      1 mpath-36006016028a03400c96094780827e411                             
vg_hayes01-lv_swap 253   1 L--w    1    1      0 LVM-JXuzXGyMyJfjxHUImrwyY1rXdepCYw0C4pcuz7b3dRriw3qqtmNu45mYh85lgaK5
vg_hayes01-lv_root 253   0 L--w    1    1      0 LVM-JXuzXGyMyJfjxHUImrwyY1rXdepCYw0CDzmwDqWXLBl7kOnQOKl43xDGcUu30Wkw

[root@hayes-01 ~]# dmsetup table
mpathe: 0 524288000 multipath 1 queue_if_no_path 1 emc 2 1 round-robin 0 2 1 8:80 1 8:208 1 round-robin 0 2 1 65:80 1 65:208 1 
mpathap2: 0 262132605 linear 253:3 262148670
mpathbp2: 0 262132605 linear 253:4 262148670
mpathd: 0 524288000 multipath 1 queue_if_no_path 1 emc 2 1 round-robin 0 2 1 65:64 1 65:192 1 round-robin 0 2 1 8:64 1 8:192 1 
mpathap1: 0 262148607 linear 253:3 63
mpathcp2: 0 262132605 linear 253:6 262148670
mpathbp1: 0 262148607 linear 253:4 63
mpathc: 0 524288000 multipath 1 queue_if_no_path 1 emc 2 1 round-robin 0 2 1 8:48 1 8:176 1 round-robin 0 2 1 65:48 1 65:176 1 
mpathdp2: 0 262132605 linear 253:9 262148670
mpathcp1: 0 262148607 linear 253:6 63
mpathep2: 0 262132605 linear 253:16 262148670
mpathb: 0 524288000 multipath 1 queue_if_no_path 1 emc 2 1 round-robin 0 2 1 65:160 1 65:32 1 round-robin 0 2 1 8:32 1 8:160 1 
mpathdp1: 0 262148607 linear 253:9 63
mpathfp2: 0 262132605 linear 253:13 262148670
mpathep1: 0 262148607 linear 253:16 63
mpathgp2: 0 262132605 linear 253:18 262148670
mpatha: 0 524288000 multipath 1 queue_if_no_path 1 emc 2 1 round-robin 0 2 1 8:16 1 8:144 1 round-robin 0 2 1 65:144 1 65:16 1 
mpathfp1: 0 262148607 linear 253:13 63
mpathhp2: 0 262132605 linear 253:20 262148670
mpathgp1: 0 262148607 linear 253:18 63
mpathhp1: 0 262148607 linear 253:20 63
vg_hayes01-lv_home: 0 34734080 linear 8:2 104859648
mpathh: 0 524288000 multipath 1 queue_if_no_path 1 emc 2 1 round-robin 0 2 1 65:128 1 66:0 1 round-robin 0 2 1 8:128 1 65:0 1 
mpathg: 0 524288000 multipath 1 queue_if_no_path 1 emc 2 1 round-robin 0 2 1 8:112 1 8:240 1 round-robin 0 2 1 65:112 1 65:240 1 
mpathf: 0 524288000 multipath 1 queue_if_no_path 1 emc 2 1 round-robin 0 2 1 65:96 1 65:224 1 round-robin 0 2 1 8:96 1 8:224 1 
vg_hayes01-lv_swap: 0 15622144 linear 8:2 139593728
vg_hayes01-lv_root: 0 104857600 linear 8:2 2048

Comment 10 Zdenek Kabelac 2014-09-17 06:50:09 UTC
Ahh ok - then we have the explanation for the lock being reported for the snapshot volume.

This has already been fixed with:

https://www.redhat.com/archives/lvm-devel/2014-September/msg00019.html

Another patch will follow to re-enable conversion to clustered volumes while LVs are active.

Comment 11 Zdenek Kabelac 2014-09-17 13:32:40 UTC
So back to comment 8 - the problem stems from the fact that clvmd currently does not maintain locks for non-clustered LVs - that code path is simply skipped.

This led to a situation where a converted VG with active volumes appeared on the local node with 'active' LVs, but they were active without holding any lock - so remote nodes did not see the proper lock status for those LVs and basically nothing worked.

Two upstream commits should re-enable support for converting a VG with active LVs:

https://www.redhat.com/archives/lvm-devel/2014-September/msg00102.html
https://www.redhat.com/archives/lvm-devel/2014-September/msg00103.html

They check whether each LV is active and reactivate it with a local exclusive lock. If something fails, it is reported as an error and the user has to fix the problems manually.

Setting 'locking_type = 0' bypasses all the safety checks and allows the user to change the clustering attribute no matter how broken the cluster is....
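As a minimal sketch of that last-resort escape hatch (the VG name "vg" is illustrative, and this assumes a node where clvmd/cluster locking is unusable - locking_type 0 disables all locking safety checks, so use it only to repair a broken cluster):

```shell
# Override the configured locking type for this one command instead of
# editing /etc/lvm/lvm.conf; with locking disabled, the clustered
# attribute can be dropped regardless of cluster state:
vgchange --config 'global { locking_type = 0 }' -cn vg
```

Once the VG is non-clustered and the cluster is healthy again, the override should not be needed for further operations.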

Comment 13 Corey Marthaler 2015-04-27 23:03:57 UTC
Fix verified in the latest rpms.

2.6.32-554.el6.x86_64
lvm2-2.02.118-2.el6    BUILT: Wed Apr 15 06:34:08 CDT 2015
lvm2-libs-2.02.118-2.el6    BUILT: Wed Apr 15 06:34:08 CDT 2015
lvm2-cluster-2.02.118-2.el6    BUILT: Wed Apr 15 06:34:08 CDT 2015
udev-147-2.61.el6    BUILT: Mon Mar  2 05:08:11 CST 2015
device-mapper-1.02.95-2.el6    BUILT: Wed Apr 15 06:34:08 CDT 2015
device-mapper-libs-1.02.95-2.el6    BUILT: Wed Apr 15 06:34:08 CDT 2015
device-mapper-event-1.02.95-2.el6    BUILT: Wed Apr 15 06:34:08 CDT 2015
device-mapper-event-libs-1.02.95-2.el6    BUILT: Wed Apr 15 06:34:08 CDT 2015
device-mapper-persistent-data-0.3.2-1.el6    BUILT: Fri Apr  4 08:43:06 CDT 2014
cmirror-2.02.118-2.el6    BUILT: Wed Apr 15 06:34:08 CDT 2015


[root@host-110 ~]# vgcreate vg /dev/sd[abcdefgh]1
 Physical volume "/dev/sda1" successfully created
 Physical volume "/dev/sdb1" successfully created
 Physical volume "/dev/sdc1" successfully created
 Physical volume "/dev/sdd1" successfully created
 Physical volume "/dev/sde1" successfully created
 Physical volume "/dev/sdf1" successfully created
 Physical volume "/dev/sdg1" successfully created
 Physical volume "/dev/sdh1" successfully created
 Clustered volume group "vg" successfully created
[root@host-110 ~]# lvcreate -l1 -m1 --type raid1 vg
 Logical volume "lvol0" created.
[root@host-110 ~]# lvcreate -l1 -s vg/lvol0
 Logical volume "lvol1" created.
[root@host-110 ~]# lvcreate -l1 -s vg/lvol0
 Logical volume "lvol2" created.
[root@host-110 ~]# lvcreate -l1 -s vg/lvol0
 Logical volume "lvol3" created.
[root@host-110 ~]# lvrename vg/lvol1 vg/lvol1B
 Renamed "lvol1" to "lvol1B" in volume group "vg"

[root@host-110 ~]# lvs -a -o name,vg_name,attr,size,origin,layout,role vg
 LV               VG Attr       LSize Origin Layout     Role
 lvol0            vg owi-a-r--- 4.00m        raid,raid1 public,origin,thickorigin,multithickorigin
 [lvol0_rimage_0] vg iwi-aor--- 4.00m        linear     private,raid,image
 [lvol0_rimage_1] vg iwi-aor--- 4.00m        linear     private,raid,image
 [lvol0_rmeta_0]  vg ewi-aor--- 4.00m        linear     private,raid,metadata
 [lvol0_rmeta_1]  vg ewi-aor--- 4.00m        linear     private,raid,metadata
 lvol1B           vg swi-a-s--- 4.00m lvol0  linear     public,snapshot,thicksnapshot
 lvol2            vg swi-a-s--- 4.00m lvol0  linear     public,snapshot,thicksnapshot
 lvol3            vg swi-a-s--- 4.00m lvol0  linear     public,snapshot,thicksnapshot

[root@host-110 ~]# lvrename vg/lvol1B vg/lvol1
 Renamed "lvol1B" to "lvol1" in volume group "vg"

[root@host-110 ~]# lvs -a -o name,vg_name,attr,size,origin,layout,role vg
 LV               VG Attr       LSize Origin Layout     Role
 lvol0            vg owi-a-r--- 4.00m        raid,raid1 public,origin,thickorigin,multithickorigin
 [lvol0_rimage_0] vg iwi-aor--- 4.00m        linear     private,raid,image
 [lvol0_rimage_1] vg iwi-aor--- 4.00m        linear     private,raid,image
 [lvol0_rmeta_0]  vg ewi-aor--- 4.00m        linear     private,raid,metadata
 [lvol0_rmeta_1]  vg ewi-aor--- 4.00m        linear     private,raid,metadata
 lvol1            vg swi-a-s--- 4.00m lvol0  linear     public,snapshot,thicksnapshot
 lvol2            vg swi-a-s--- 4.00m lvol0  linear     public,snapshot,thicksnapshot
 lvol3            vg swi-a-s--- 4.00m lvol0  linear     public,snapshot,thicksnapshot
[root@host-110 ~]# vgchange -an vg
 0 logical volume(s) in volume group "vg" now active

[root@host-110 ~]# lvs -a -o name,vg_name,attr,size,origin,layout,role vg
 LV               VG Attr       LSize Origin Layout     Role
 lvol0            vg owi---r--- 4.00m        raid,raid1 public,origin,thickorigin,multithickorigin
 [lvol0_rimage_0] vg Iwi---r--- 4.00m        linear     private,raid,image
 [lvol0_rimage_1] vg Iwi---r--- 4.00m        linear     private,raid,image
 [lvol0_rmeta_0]  vg ewi---r--- 4.00m        linear     private,raid,metadata
 [lvol0_rmeta_1]  vg ewi---r--- 4.00m        linear     private,raid,metadata
 lvol1            vg swi---s--- 4.00m lvol0  linear     public,snapshot,thicksnapshot
 lvol2            vg swi---s--- 4.00m lvol0  linear     public,snapshot,thicksnapshot
 lvol3            vg swi---s--- 4.00m lvol0  linear     public,snapshot,thicksnapshot

[root@host-110 ~]# vgchange -cn vg
 Volume group "vg" successfully changed

Comment 14 errata-xmlrpc 2015-07-22 07:35:31 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2015-1411.html

