Bug 490417

Summary: When setting max_lv for VG, LV creation can fail and produce invalid metadata
Product: Red Hat Enterprise Linux 4
Reporter: Milan Broz <mbroz>
Component: lvm2
Assignee: Milan Broz <mbroz>
Status: CLOSED ERRATA
QA Contact: Cluster QE <mspqa-list>
Severity: high
Priority: high
Version: 4.8
CC: agk, cmarthal, dwysocha, edamato, heinzm, jbrassow, mbroz, prockai, pvrabec
Target Milestone: rc
Hardware: All
OS: Linux
Doc Type: Bug Fix
Clone Of: 490298
Last Closed: 2009-05-18 20:10:26 UTC

Description Milan Broz 2009-03-16 09:52:52 UTC
+++ This bug was initially created as a clone of Bug #490298 +++
RHEL4 clone

Description of problem:

1) Internal VG lv_count is inconsistent in some cases and leads to problems.

# pvcreate /dev/sdc
  Physical volume "/dev/sdc" successfully created

# vgcreate -l 4 vg_test /dev/sdc
  Volume group "vg_test" successfully created

# lvcreate -L 12M -n lv1 vg_test
  Logical volume "lv1" created

# lvcreate -s -L 12M -n lv1s vg_test/lv1
  Logical volume "lv1s" created

# lvcreate -L 12M -n lv2 vg_test
  Maximum number of logical volumes (4) reached in volume group vg_test
  Couldn't read all logical volumes for volume group vg_test.
  Maximum number of logical volumes (4) reached in volume group vg_test
  Couldn't read all logical volumes for volume group vg_test.
  Maximum number of logical volumes (4) reached in volume group vg_test
  Couldn't read all logical volumes for volume group vg_test.
  Maximum number of logical volumes (4) reached in volume group vg_test
  Couldn't read all logical volumes for volume group vg_test.
  Volume group for uuid not found: SilYJpDvKDfx2PxlTQ7Ww0bF2Vrte0qRNRtE2LUe9OaHop7J9GPU5CpZ7L3dZ24A
  Aborting. Failed to activate new LV to wipe the start of it.
  Maximum number of logical volumes (4) reached in volume group vg_test
  Couldn't read all logical volumes for volume group vg_test.
  Maximum number of logical volumes (4) reached in volume group vg_test
  Couldn't read all logical volumes for volume group vg_test.
  Maximum number of logical volumes (4) reached in volume group vg_test
  Couldn't read all logical volumes for volume group vg_test.
  Maximum number of logical volumes (4) reached in volume group vg_test
  Couldn't read all logical volumes for volume group vg_test.
  Volume group for uuid not found: SilYJpDvKDfx2PxlTQ7Ww0bF2Vrte0qRNRtE2LUe9OaHop7J9GPU5CpZ7L3dZ24A
  Unable to deactivate failed new LV. Manual intervention required.

# vgs
  Maximum number of logical volumes (4) reached in volume group vg_test
  Couldn't read all logical volumes for volume group vg_test.
  Maximum number of logical volumes (4) reached in volume group vg_test
  Couldn't read all logical volumes for volume group vg_test.
  Volume group "vg_test" not found

vg_test is now lost!
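
For reference, a possible recovery path in a situation like this, assuming an automatic metadata backup still exists under /etc/lvm/backup and assuming max_lv in that backup is first raised (or set to 0 for unlimited), could look like the sketch below; the backup path and the need to edit max_lv are assumptions, not part of the original report:

# vgcfgrestore -f /etc/lvm/backup/vg_test vg_test # hypothetical: backup edited first so max_lv covers all LVs
# vgchange -ay vg_test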

2) Moreover, the system allows vgcfgrestore with max_lv set to a lower value than
the actual number of LVs. After that, max_lv is ignored completely:

# vgcfgrestore -f vg_test vg_test # with max_lv set to 1
  Restored volume group vg_test

# vgs -o +lv_count,max_lv vg_test
  VG      #PV #LV #SN Attr   VSize   VFree   #LV MaxLV
  vg_test   1   2   0 wz--n- 298.09G 298.07G   2     1

# lvcreate -L 12M -n lv2 vg_test
  Logical volume "lv2" created
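
For context, a minimal sketch of the behaviour one would expect once lv_count and max_lv handling are consistent, reusing the VG name and sizes from the transcripts above; the parenthesized notes are expectations, not quoted output:

# lvcreate -L 12M -n lv2 vg_test # with lv_count already at max_lv
  (expected: a single "Maximum number of logical volumes (4) reached in
   volume group vg_test" refusal, with no attempt to activate or wipe a
   half-created LV and no invalid metadata left on disk)

# vgcfgrestore -f vg_test vg_test # with max_lv in the file lower than the current LV count
  (expected: the restore is refused or max_lv is enforced afterwards,
   rather than being silently ignored)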

Comment 3 Milan Broz 2009-03-24 21:19:28 UTC
Fixed in lvm2-2.02.42-5.el4

Comment 6 Corey Marthaler 2009-04-21 15:02:54 UTC
Fix verified in lvm2-2.02.42-5.el4.

[root@grant-01 ~]# lvcreate -L 12M -n lv5 grant
  Maximum number of logical volumes (4) reached in volume group grant

Comment 8 errata-xmlrpc 2009-05-18 20:10:26 UTC
An advisory has been issued which should help the problem
described in this bug report. This report is therefore being
closed with a resolution of ERRATA. For more information
on the solution and/or where to find the updated files,
please follow the link below. You may reopen this bug report
if the solution does not work for you.

http://rhn.redhat.com/errata/RHBA-2009-0967.html