Bug 490298 - When setting max_lv for VG, LV creation can fail and produce invalid metadata
Summary: When setting max_lv for VG, LV creation can fail and produce invalid metadata
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 5
Classification: Red Hat
Component: lvm2
Version: 5.3
Hardware: All
OS: Linux
Priority: high
Severity: high
Target Milestone: rc
Assignee: Milan Broz
QA Contact: Cluster QE
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2009-03-14 22:48 UTC by Milan Broz
Modified: 2013-03-01 04:07 UTC
CC: 9 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
: 490417
Environment:
Last Closed: 2009-09-02 11:56:30 UTC
Target Upstream Version:
Embargoed:




Links:
Red Hat Product Errata RHBA-2009:1393 (SHIPPED_LIVE): lvm2 bug-fix and enhancement update, last updated 2009-09-01 12:00:22 UTC

Description Milan Broz 2009-03-14 22:48:07 UTC
Description of problem:

1) Internal VG lv_count is inconsistent in some cases and leads to problems.

# pvcreate /dev/sdc
  Physical volume "/dev/sdc" successfully created

# vgcreate -l 4 vg_test /dev/sdc
  Volume group "vg_test" successfully created

# lvcreate -L 12M -n lv1 vg_test
  Logical volume "lv1" created

# lvcreate -s -L 12M -n lv1s vg_test/lv1
  Logical volume "lv1s" created

# lvcreate -L 12M -n lv2 vg_test
  Maximum number of logical volumes (4) reached in volume group vg_test
  Couldn't read all logical volumes for volume group vg_test.
  Maximum number of logical volumes (4) reached in volume group vg_test
  Couldn't read all logical volumes for volume group vg_test.
  Maximum number of logical volumes (4) reached in volume group vg_test
  Couldn't read all logical volumes for volume group vg_test.
  Maximum number of logical volumes (4) reached in volume group vg_test
  Couldn't read all logical volumes for volume group vg_test.
  Volume group for uuid not found: SilYJpDvKDfx2PxlTQ7Ww0bF2Vrte0qRNRtE2LUe9OaHop7J9GPU5CpZ7L3dZ24A
  Aborting. Failed to activate new LV to wipe the start of it.
  Maximum number of logical volumes (4) reached in volume group vg_test
  Couldn't read all logical volumes for volume group vg_test.
  Maximum number of logical volumes (4) reached in volume group vg_test
  Couldn't read all logical volumes for volume group vg_test.
  Maximum number of logical volumes (4) reached in volume group vg_test
  Couldn't read all logical volumes for volume group vg_test.
  Maximum number of logical volumes (4) reached in volume group vg_test
  Couldn't read all logical volumes for volume group vg_test.
  Volume group for uuid not found: SilYJpDvKDfx2PxlTQ7Ww0bF2Vrte0qRNRtE2LUe9OaHop7J9GPU5CpZ7L3dZ24A
  Unable to deactivate failed new LV. Manual intervention required.

# vgs
  Maximum number of logical volumes (4) reached in volume group vg_test
  Couldn't read all logical volumes for volume group vg_test.
  Maximum number of logical volumes (4) reached in volume group vg_test
  Couldn't read all logical volumes for volume group vg_test.
  Volume group "vg_test" not found

vg_test is now lost!

2) Moreover, the system allows vgcfgrestore with max_lv set to a lower value than
the actual number of LVs; after that, it ignores max_lv completely (a sketch of the
missing check follows the listing below):

# vgcfgrestore -f vg_test vg_test # with max_lv set to 1
  Restored volume group vg_test

# vgs -o +lv_count,max_lv vg_test
  VG      #PV #LV #SN Attr   VSize   VFree   #LV MaxLV
  vg_test   1   2   0 wz--n- 298.09G 298.07G   2     1

# lvcreate -L 12M -n lv2 vg_test
  Logical volume "lv2" created
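
[Editorial sketch] A minimal C sketch of the validation that scenario 2 shows is
missing: setting or restoring max_lv below the number of LVs already defined in the
VG should be rejected rather than silently accepted. The structure and function
names below are illustrative assumptions, not the real lvm2 code or API.

/* Hypothetical sketch only -- not the actual lvm2 implementation. */
#include <stdint.h>
#include <stdio.h>

struct volume_group {
        uint32_t max_lv;    /* 0 means "unlimited" */
        uint32_t lv_count;  /* LVs currently defined in the metadata */
};

/* Return 0 on success, -1 if the requested limit is below the current LV count. */
static int vg_check_max_lv(const struct volume_group *vg, uint32_t new_max_lv)
{
        if (new_max_lv && new_max_lv < vg->lv_count) {
                fprintf(stderr,
                        "MaxLogicalVolumes %u is less than the current number of LVs (%u)\n",
                        new_max_lv, vg->lv_count);
                return -1;
        }
        return 0;
}

int main(void)
{
        /* Mirrors the vgcfgrestore example above: 2 LVs, restored max_lv = 1. */
        struct volume_group vg = { .max_lv = 4, .lv_count = 2 };
        return vg_check_max_lv(&vg, 1) ? 1 : 0;
}

With a check of this kind in place, restoring metadata that carries max_lv = 1
while two LVs exist would fail up front instead of leaving a VG whose limit is
silently ignored.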

Version-Release number of selected component (if applicable):
lvm2-2.02.40-6.el5.x86_64

Comment 2 Milan Broz 2009-05-19 11:00:55 UTC
The max_lv handling logic is fixed upstream now -> POST

A user is not allowed to create a new LV if the LV count is above the limit, but the tools can internally violate this limit temporarily (e.g. during mirror conversion).

(Actually, the max_lv limit does not make much sense with lvm2 metadata.)
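
[Editorial sketch] A minimal C sketch of the policy described in this comment,
assuming a split between user-visible LVs and temporary internal ones: the max_lv
limit is enforced only when the user requests a new LV, while LVs the tools create
internally (e.g. during a mirror conversion) may exceed it for a short time. All
names are illustrative assumptions, not the actual lvm2 API.

/* Hypothetical sketch only -- not the actual lvm2 implementation. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct volume_group {
        uint32_t max_lv;           /* 0 means "unlimited" */
        uint32_t visible_lv_count; /* user-visible LVs only */
};

static bool vg_can_add_lv(const struct volume_group *vg, bool internal)
{
        if (internal)  /* temporary/internal LVs may violate the limit */
                return true;
        if (vg->max_lv && vg->visible_lv_count >= vg->max_lv) {
                fprintf(stderr,
                        "Maximum number of logical volumes (%u) reached\n",
                        vg->max_lv);
                return false;
        }
        return true;
}

int main(void)
{
        struct volume_group vg = { .max_lv = 4, .visible_lv_count = 4 };

        /* A user lvcreate is refused once the limit is reached ... */
        printf("user lvcreate allowed: %d\n", vg_can_add_lv(&vg, false));
        /* ... but an internal temporary LV (mirror conversion) still is. */
        printf("internal LV allowed:   %d\n", vg_can_add_lv(&vg, true));
        return 0;
}

Counting only visible LVs also lines up with the verification in comment 5, where
the mirror with its internal image and log LVs still counts as a single LV against
the limit of 4.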

Comment 3 Milan Broz 2009-05-21 09:22:19 UTC
Fix in version lvm2-2.02.46-1.el5.

Comment 5 Corey Marthaler 2009-07-01 16:48:57 UTC
Fix verified in lvm2-2.02.46-8.el5.

[root@grant-01 ~]# vgcreate -l 4 grant /dev/sd[bc][123]
  Volume group "grant" successfully created
[root@grant-01 ~]# lvcreate -L 12M -n lv1 grant
  Logical volume "lv1" created
[root@grant-01 ~]# lvcreate -s -L 12M -n lv1s grant/lv1
  Logical volume "lv1s" created
[root@grant-01 ~]# lvs -a -o +devices
  LV       VG         Attr   LSize  Origin Snap%  Move Log Copy%  Convert Devices        
  LogVol00 VolGroup00 -wi-ao 64.56G                                       /dev/sda2(0)   
  LogVol01 VolGroup00 -wi-ao  9.81G                                       /dev/sda2(2066)
  lv1      grant      owi-a- 12.00M                                       /dev/sdb1(0)   
  lv1s     grant      swi-a- 12.00M lv1      0.07                         /dev/sdb2(0)   
[root@grant-01 ~]# lvcreate -L 12M -n lv2 grant
  Logical volume "lv2" created
[root@grant-01 ~]# vgs
  VG         #PV #LV #SN Attr   VSize   VFree  
  VolGroup00   1   2   0 wz--n-  74.38G      0 
  grant        6   3   1 wz--n- 544.92G 544.89G
[root@grant-01 ~]# lvcreate -m 1 -n mirror -L 10M grant
  Rounding up size to full physical extent 12.00 MB
  Logical volume "mirror" created
[root@grant-01 ~]# vgs
  VG         #PV #LV #SN Attr   VSize   VFree  
  VolGroup00   1   2   0 wz--n-  74.38G      0 
  grant        6   4   1 wz--n- 544.92G 544.86G
[root@grant-01 ~]# lvcreate -L 12M -n lv2 grant
  Logical volume "lv2" already exists in volume group "grant"
[root@grant-01 ~]# lvcreate -L 12M -n lv3 grant
  Maximum number of logical volumes (4) reached in volume group grant

Comment 7 errata-xmlrpc 2009-09-02 11:56:30 UTC
An advisory has been issued which should help the problem
described in this bug report. This report is therefore being
closed with a resolution of ERRATA. For more information
on the solution and/or where to find the updated files,
please follow the link below. You may reopen this bug report
if the solution does not work for you.

http://rhn.redhat.com/errata/RHBA-2009-1393.html

