Bug 490417 - When setting max_lv for VG, LV creation can fail and produce invalid metadata
Summary: When setting max_lv for VG, LV creation can fail and produce invalid metadata
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 4
Classification: Red Hat
Component: lvm2
Version: 4.8
Hardware: All
OS: Linux
Priority: high
Severity: high
Target Milestone: rc
Assignee: Milan Broz
QA Contact: Cluster QE
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2009-03-16 09:52 UTC by Milan Broz
Modified: 2013-03-01 04:07 UTC
CC List: 9 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of: 490298
Environment:
Last Closed: 2009-05-18 20:10:26 UTC
Target Upstream Version:
Embargoed:




Links
Red Hat Product Errata RHBA-2009:0967 (normal, SHIPPED_LIVE): lvm2 bug-fix and enhancement update. Last updated 2009-05-18 13:33:39 UTC.

Description Milan Broz 2009-03-16 09:52:52 UTC
+++ This bug was initially created as a clone of Bug #490298 +++
RHEL4 clone

Description of problem:

1) The internal VG lv_count is inconsistent in some cases and leads to problems.

# pvcreate /dev/sdc
  Physical volume "/dev/sdc" successfully created

# vgcreate -l 4 vg_test /dev/sdc
  Volume group "vg_test" successfully created

# lvcreate -L 12M -n lv1 vg_test
  Logical volume "lv1" created

# lvcreate -s -L 12M -n lv1s vg_test/lv1
  Logical volume "lv1s" created

# lvcreate -L 12M -n lv2 vg_test
  Maximum number of logical volumes (4) reached in volume group vg_test
  Couldn't read all logical volumes for volume group vg_test.
  Maximum number of logical volumes (4) reached in volume group vg_test
  Couldn't read all logical volumes for volume group vg_test.
  Maximum number of logical volumes (4) reached in volume group vg_test
  Couldn't read all logical volumes for volume group vg_test.
  Maximum number of logical volumes (4) reached in volume group vg_test
  Couldn't read all logical volumes for volume group vg_test.
  Volume group for uuid not found: SilYJpDvKDfx2PxlTQ7Ww0bF2Vrte0qRNRtE2LUe9OaHop7J9GPU5CpZ7L3dZ24A
  Aborting. Failed to activate new LV to wipe the start of it.
  Maximum number of logical volumes (4) reached in volume group vg_test
  Couldn't read all logical volumes for volume group vg_test.
  Maximum number of logical volumes (4) reached in volume group vg_test
  Couldn't read all logical volumes for volume group vg_test.
  Maximum number of logical volumes (4) reached in volume group vg_test
  Couldn't read all logical volumes for volume group vg_test.
  Maximum number of logical volumes (4) reached in volume group vg_test
  Couldn't read all logical volumes for volume group vg_test.
  Volume group for uuid not found: SilYJpDvKDfx2PxlTQ7Ww0bF2Vrte0qRNRtE2LUe9OaHop7J9GPU5CpZ7L3dZ24A
  Unable to deactivate failed new LV. Manual intervention required.

# vgs
  Maximum number of logical volumes (4) reached in volume group vg_test
  Couldn't read all logical volumes for volume group vg_test.
  Maximum number of logical volumes (4) reached in volume group vg_test
  Couldn't read all logical volumes for volume group vg_test.
  Volume group "vg_test" not found

vg_test is now lost!
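
For reference, the failing sequence can be replayed as a single script. This is only a sketch of the steps above, assuming /dev/sdc is a scratch device that may be wiped; device name, VG name, and sizes are taken from the transcript:

  # Reproduction sketch: destructive, assumes /dev/sdc is a scratch device
  pvcreate /dev/sdc
  vgcreate -l 4 vg_test /dev/sdc           # cap the VG at 4 logical volumes
  lvcreate -L 12M -n lv1 vg_test           # first LV
  lvcreate -s -L 12M -n lv1s vg_test/lv1   # snapshot of lv1
  lvcreate -L 12M -n lv2 vg_test           # on the broken build this trips the max_lv
                                           # check and leaves invalid metadata behind
  vgs vg_test                              # broken build: "Volume group ... not found"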

2) Moreover, the system allows vgcfgrestore with max_lv set to a lower value than
the actual number of LVs. After that, it ignores max_lv completely:

# vgcfgrestore -f vg_test vg_test # with max_lv set to 1
  Restored volume group vg_test

# vgs -o +lv_count,max_lv vg_test
  VG      #PV #LV #SN Attr   VSize   VFree   #LV MaxLV
  vg_test   1   2   0 wz--n- 298.09G 298.07G   2     1

# lvcreate -L 12M -n lv2 vg_test
  Logical volume "lv2" created
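
One way to drive this path end to end is sketched below. It assumes a backup file at /tmp/vg_test.vg and a VG that already holds two LVs; the sed edit that lowers max_lv is purely illustrative and simply stands in for whatever produced the restored file in the report:

  # Sketch: restore metadata whose max_lv is lower than the current LV count
  vgcfgbackup -f /tmp/vg_test.vg vg_test              # dump current metadata to a file
  sed -i 's/max_lv = 4/max_lv = 1/' /tmp/vg_test.vg   # illustrative edit: lower max_lv below lv_count
  vgcfgrestore -f /tmp/vg_test.vg vg_test             # accepted even though two LVs exist
  vgs -o +lv_count,max_lv vg_test                     # shows #LV greater than MaxLV
  lvcreate -L 12M -n lv2 vg_test                      # the limit is then ignored entirely

Changing the limit on a live VG would normally go through vgchange -l rather than an edited backup; the point here is that the restore path does not apply the same max_lv check that lvcreate does.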

Comment 3 Milan Broz 2009-03-24 21:19:28 UTC
Fixed in lvm2-2.02.42-5.el4

Comment 6 Corey Marthaler 2009-04-21 15:02:54 UTC
Fix verified in lvm2-2.02.42-5.el4.

[root@grant-01 ~]# lvcreate -L 12M -n lv5 grant
  Maximum number of logical volumes (4) reached in volume group grant
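
A slightly fuller check along the same lines, as a sketch only; it assumes a fresh VG named grant on a scratch device /dev/sdc (the device is not named in the comment above). On the fixed build the fifth lvcreate should be rejected cleanly and the VG metadata should remain readable:

  # Verification sketch for the fixed build: the rejection must leave the VG intact
  vgcreate -l 4 grant /dev/sdc
  for i in 1 2 3 4; do lvcreate -L 12M -n lv$i grant; done
  lvcreate -L 12M -n lv5 grant             # expected: only the max_lv rejection message
  vgs -o +lv_count,max_lv grant            # the VG must still be readable afterwards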

Comment 8 errata-xmlrpc 2009-05-18 20:10:26 UTC
An advisory has been issued which should help the problem
described in this bug report. This report is therefore being
closed with a resolution of ERRATA. For more information
on the solution and/or where to find the updated files,
please follow the link below. You may reopen this bug report
if the solution does not work for you.

http://rhn.redhat.com/errata/RHBA-2009-0967.html

