Bug 164198 - LV activation issues when attempting large numbers
Status: CLOSED DUPLICATE of bug 164197
Product: Red Hat Enterprise Linux 4
Classification: Red Hat
Component: lvm2
Version: 4.0
Hardware: All
OS: Linux
Priority: low
Severity: medium
Assigned To: Alasdair Kergon
QA Contact: Cluster QE
Keywords: FutureFeature
Depends On:
Blocks:
 
Reported: 2005-07-25 17:15 EDT by Corey Marthaler
Modified: 2007-11-30 17:07 EST (History)
2 users

Doc Type: Enhancement
Last Closed: 2005-08-08 15:01:32 EDT


Attachments: None
Description Corey Marthaler 2005-07-25 17:15:51 EDT
Description of problem:
I have 1500 LVs on my 5-node cluster, and when I attempted to activate all of
them, only 432 actually became active; the rest stayed inactive. Also, the
vgchange and lvchange commands didn't report any errors as they should have.

[root@morph-01 ~]# lvchange -ay /dev/gfs/gfs1066
[root@morph-01 ~]# lvdisplay /dev/gfs/gfs1066
  --- Logical volume ---
  LV Name                /dev/gfs/gfs1066
  VG Name                gfs
  LV UUID                5wq2vk-n1H3-eflJ-PBYZ-ocWS-XhPQ-7vJYSF
  LV Write Access        read/write
  LV Status              NOT available
  LV Size                620.00 MB
  Current LE             155
  Segments               1
  Allocation             inherit
  Read ahead sectors     0
[root@morph-01 ~]# lvchange -ay /dev/gfs/gfs1066
[root@morph-01 ~]# echo $?
0
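Since the exit status clearly cannot be trusted here, the only reliable check is to parse the actual LV state afterwards. A minimal sketch, using a canned stand-in for `lvscan` output (the LV names below are illustrative, not from a real run):

```shell
# Canned stand-in for `lvscan` output on the affected node (hypothetical names);
# on a live system this would be: lvscan_output=$(lvscan)
lvscan_output=$(cat <<'EOF'
  ACTIVE            '/dev/gfs/gfs0431' [620.00 MB] inherit
  ACTIVE            '/dev/gfs/gfs0432' [620.00 MB] inherit
  inactive          '/dev/gfs/gfs1066' [620.00 MB] inherit
EOF
)
# Print every LV that is still inactive, regardless of what lvchange returned.
printf '%s\n' "$lvscan_output" | awk '$1 == "inactive" { print $2 }'
```

Counting the `ACTIVE` lines the same way would reproduce the 432-of-1500 figure reported above.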


I then attempted this without clvmd, and that caused the lvm process to
repeatedly grow so large that the oom-killer would start shooting processes.

Cpu(s): 21.7% us, 64.5% sy,  0.0% ni,  1.1% id, 12.7% wa,  0.0% hi,  0.0% si
Mem:   2074456k total,  2053192k used,    21264k free,      136k buffers
Swap:  4071852k total,     7776k used,  4064076k free,     7988k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 3544 root      25   0 1953m 1.9g 4224 R 96.1 96.4  10:01.61 lvm


Version-Release number of selected component (if applicable):
[root@morph-05 lvm]# rpm -qa | grep lvm
system-config-lvm-0.9.24-1.0
lvm2-2.01.13-1.0.RHEL4
lvm2-cluster-2.01.09-4.0.RHEL4

[root@morph-05 lvm]# lvchange --version
  LVM version:     2.01.13 (2005-07-13)
  Library version: 1.01.03 (2005-06-13)
  Driver version:  4.4.0

How reproducible:
Every time.

Steps to Reproduce:
1.  Create 1500 LVs.
2.  Attempt to activate them.
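A dry-run sketch of those steps follows; the VG name "gfs" and 620M LV size are taken from the report, and the echo prefixes mean nothing is executed as written:

```shell
# Dry run: print, rather than execute, the commands that reproduce the bug.
# Drop the "echo" prefixes only on a disposable test cluster.
vg=gfs
for i in $(seq 1 1500); do
    echo lvcreate -L 620M -n "gfs$i" "$vg"
done
echo vgchange -ay "$vg"    # the step where only 432 of the 1500 LVs came up
```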
Comment 1 Jonathan Earl Brassow 2005-07-28 16:29:17 EDT
A little more detail:

LVM2 was used in its cluster capacity to create 1500 LVs on a VG that was not active.  After doing a 
'vgchange -ay', only 432 LVs were available.  'lvchange -ay' was tried - still 432 LVs.  Trying to activate 
an individual LV (lvchange -ay /dev/gfs/gfs1066) failed to activate it, yet still reported success.

clvmd was shut down, and the locking_type was switched back to single-machine locking.  This is when 
the lvm process grew out of control.
Comment 2 Christine Caulfield 2005-08-01 06:08:07 EDT
Reassign to agk as this doesn't seem to be cluster-related.
Comment 4 Alasdair Kergon 2005-08-08 15:06:43 EDT
See also bug 164197: I suspect the '432' limit may be related.
Comment 5 Alasdair Kergon 2005-08-08 15:18:46 EDT
Yes, fixing bug 164197 will get you past your '432' limit.

*** This bug has been marked as a duplicate of 164197 ***
