Bug 490298 - When setting max_lv for VG, LV creation can fail and produce invalid metadata
Status: CLOSED ERRATA
Product: Red Hat Enterprise Linux 5
Classification: Red Hat
Component: lvm2
Version: 5.3
Hardware: All
OS: Linux
Priority: high
Severity: high
: rc
: ---
Assigned To: Milan Broz
Cluster QE
Depends On:
Blocks:
Reported: 2009-03-14 18:48 EDT by Milan Broz
Modified: 2013-02-28 23:07 EST
9 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
: 490417
Environment:
Last Closed: 2009-09-02 07:56:30 EDT
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Milan Broz 2009-03-14 18:48:07 EDT
Description of problem:

1) Internal VG lv_count is inconsistent in some cases and leads to problems.

# pvcreate /dev/sdc
  Physical volume "/dev/sdc" successfully created

# vgcreate -l 4 vg_test /dev/sdc
  Volume group "vg_test" successfully created

# lvcreate -L 12M -n lv1 vg_test
  Logical volume "lv1" created

# lvcreate -s -L 12M -n lv1s vg_test/lv1
  Logical volume "lv1s" created

# lvcreate -L 12M -n lv2 vg_test
  Maximum number of logical volumes (4) reached in volume group vg_test
  Couldn't read all logical volumes for volume group vg_test.
  Maximum number of logical volumes (4) reached in volume group vg_test
  Couldn't read all logical volumes for volume group vg_test.
  Maximum number of logical volumes (4) reached in volume group vg_test
  Couldn't read all logical volumes for volume group vg_test.
  Maximum number of logical volumes (4) reached in volume group vg_test
  Couldn't read all logical volumes for volume group vg_test.
  Volume group for uuid not found: SilYJpDvKDfx2PxlTQ7Ww0bF2Vrte0qRNRtE2LUe9OaHop7J9GPU5CpZ7L3dZ24A
  Aborting. Failed to activate new LV to wipe the start of it.
  Maximum number of logical volumes (4) reached in volume group vg_test
  Couldn't read all logical volumes for volume group vg_test.
  Maximum number of logical volumes (4) reached in volume group vg_test
  Couldn't read all logical volumes for volume group vg_test.
  Maximum number of logical volumes (4) reached in volume group vg_test
  Couldn't read all logical volumes for volume group vg_test.
  Maximum number of logical volumes (4) reached in volume group vg_test
  Couldn't read all logical volumes for volume group vg_test.
  Volume group for uuid not found: SilYJpDvKDfx2PxlTQ7Ww0bF2Vrte0qRNRtE2LUe9OaHop7J9GPU5CpZ7L3dZ24A
  Unable to deactivate failed new LV. Manual intervention required.

# vgs
  Maximum number of logical volumes (4) reached in volume group vg_test
  Couldn't read all logical volumes for volume group vg_test.
  Maximum number of logical volumes (4) reached in volume group vg_test
  Couldn't read all logical volumes for volume group vg_test.
  Volume group "vg_test" not found

vg_test is now lost!
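One plausible way to picture the failure above (purely illustrative; this is not lvm2's actual code) is a create path that bumps the in-memory LV count before all later steps succeed and never rolls it back on error, so the stored metadata ends up claiming more LVs than max_lv allows and every subsequent read trips the limit check:

```c
#include <stdbool.h>

/* Hypothetical, simplified view of VG metadata. */
struct vg {
    unsigned lv_count;   /* LVs currently present (incl. snapshots) */
    unsigned max_lv;     /* 0 = unlimited, as in lvm2 metadata      */
};

/* Buggy flavour: the count is bumped first; a later activation
 * failure leaves it inflated, with no rollback. */
static bool lv_create_buggy(struct vg *vg, bool activation_ok)
{
    vg->lv_count++;
    if (!activation_ok)
        return false;          /* count stays inflated! */
    return true;
}

/* Fixed flavour: refuse up front, commit the count only on success. */
static bool lv_create_fixed(struct vg *vg, bool activation_ok)
{
    if (vg->max_lv && vg->lv_count >= vg->max_lv)
        return false;          /* limit reached, do nothing */
    if (!activation_ok)
        return false;          /* nothing to undo */
    vg->lv_count++;
    return true;
}
```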

2) Moreover, the system allows vgcfgrestore with max_lv set to a lower value
than the actual count of LVs. After that, it ignores max_lv completely:

# vgcfgrestore -f vg_test vg_test # with max_lv set to 1
  Restored volume group vg_test

# vgs -o +lv_count,max_lv vg_test
  VG      #PV #LV #SN Attr   VSize   VFree   #LV MaxLV
  vg_test   1   2   0 wz--n- 298.09G 298.07G   2     1

# lvcreate -L 12M -n lv2 vg_test
  Logical volume "lv2" created
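A restore tool can reject such metadata up front. A minimal sketch of the kind of check involved, assuming hypothetical names (`vg_validate_max_lv` and this `struct vg` are illustrations, not lvm2's real source):

```c
#include <stdio.h>

/* Hypothetical, simplified view of VG metadata. */
struct vg {
    unsigned lv_count;   /* LVs currently present (incl. snapshots) */
    unsigned max_lv;     /* 0 means "no limit", as in lvm2 metadata */
};

/* Refuse a vgcfgrestore whose max_lv is below the existing LV count,
 * instead of silently accepting it and then ignoring the limit. */
static int vg_validate_max_lv(const struct vg *vg)
{
    if (vg->max_lv && vg->lv_count > vg->max_lv) {
        fprintf(stderr,
                "max_lv %u is below current LV count %u, refusing restore\n",
                vg->max_lv, vg->lv_count);
        return 0;
    }
    return 1;
}
```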

Version-Release number of selected component (if applicable):
lvm2-2.02.40-6.el5.x86_64
Comment 2 Milan Broz 2009-05-19 07:00:55 EDT
The max_lv handling logic is fixed upstream now -> POST

A user is not allowed to create a new LV once the LV count has reached the limit, but internally the tools can temporarily violate this limit (e.g. during mirror conversion).

(In fact, the max_lv limit makes little sense with lvm2 metadata.)
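That split (a hard limit on user-initiated creation, tolerated transient excess for internal operations) can be sketched as follows; the names and the `internal` flag are assumptions for illustration, not lvm2's actual implementation:

```c
#include <stdbool.h>

/* Hypothetical, simplified view of VG metadata. */
struct vg {
    unsigned lv_count;
    unsigned max_lv;      /* 0 = unlimited */
};

/* Decide whether adding one more LV is permitted.  The user-facing
 * lvcreate path enforces the limit; internal operations (e.g. mirror
 * conversion) may transiently push lv_count past max_lv. */
static bool lv_create_check(const struct vg *vg, bool internal)
{
    if (internal)
        return true;
    return !vg->max_lv || vg->lv_count < vg->max_lv;
}
```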
Comment 3 Milan Broz 2009-05-21 05:22:19 EDT
Fix in version lvm2-2.02.46-1.el5.
Comment 5 Corey Marthaler 2009-07-01 12:48:57 EDT
Fix verified in lvm2-2.02.46-8.el5.

[root@grant-01 ~]# vgcreate -l 4 grant /dev/sd[bc][123]
  Volume group "grant" successfully created
[root@grant-01 ~]# lvcreate -L 12M -n lv1 grant
  Logical volume "lv1" created
[root@grant-01 ~]# lvcreate -s -L 12M -n lv1s grant/lv1
  Logical volume "lv1s" created
[root@grant-01 ~]# lvs -a -o +devices
  LV       VG         Attr   LSize  Origin Snap%  Move Log Copy%  Convert Devices        
  LogVol00 VolGroup00 -wi-ao 64.56G                                       /dev/sda2(0)   
  LogVol01 VolGroup00 -wi-ao  9.81G                                       /dev/sda2(2066)
  lv1      grant      owi-a- 12.00M                                       /dev/sdb1(0)   
  lv1s     grant      swi-a- 12.00M lv1      0.07                         /dev/sdb2(0)   
[root@grant-01 ~]# lvcreate -L 12M -n lv2 grant
  Logical volume "lv2" created
[root@grant-01 ~]# vgs
  VG         #PV #LV #SN Attr   VSize   VFree  
  VolGroup00   1   2   0 wz--n-  74.38G      0 
  grant        6   3   1 wz--n- 544.92G 544.89G
[root@grant-01 ~]# lvcreate -m 1 -n mirror -L 10M grant
  Rounding up size to full physical extent 12.00 MB
  Logical volume "mirror" created
[root@grant-01 ~]# vgs
  VG         #PV #LV #SN Attr   VSize   VFree  
  VolGroup00   1   2   0 wz--n-  74.38G      0 
  grant        6   4   1 wz--n- 544.92G 544.86G
[root@grant-01 ~]# lvcreate -L 12M -n lv2 grant
  Logical volume "lv2" already exists in volume group "grant"
[root@grant-01 ~]# lvcreate -L 12M -n lv3 grant
  Maximum number of logical volumes (4) reached in volume group grant
Comment 7 errata-xmlrpc 2009-09-02 07:56:30 EDT
An advisory has been issued which should help the problem
described in this bug report. This report is therefore being
closed with a resolution of ERRATA. For more information
on the solution and/or where to find the updated files,
please follow the link below. You may reopen this bug report
if the solution does not work for you.

http://rhn.redhat.com/errata/RHBA-2009-1393.html
