Bug 493311 - pvdisplay of 1000 LUNs occupies large amounts of RAM, around 4GB, and a large percentage of CPU (60%)
Product: Red Hat Enterprise Linux 5
Classification: Red Hat
Component: lvm2
Hardware: All
OS: Linux
Priority: high
Severity: high
Target Milestone: rc
Assigned To: Milan Broz
QA Contact: Cluster QE
Reported: 2009-04-01 07:43 EDT by Milan Broz
Modified: 2013-02-28 23:07 EST (History)
CC: 11 users

Doc Type: Bug Fix
Clone Of: 488739
Last Closed: 2009-09-02 07:56:33 EDT

Attachments: None
Comment 1 Milan Broz 2009-04-01 07:47:49 EDT
Description of problem:

+++ This bug was initially created as a clone of Bug #488739 +++
RHEL5 clone

pvdisplay on a RHEL 4.6 host running lvm2-2.02.37-3 with 250 LUNs of 4 paths each
occupies around 4GB of memory and 60% CPU, thrashing applications and causing crashes.

--- Additional comment from tao@redhat.com on 2009-03-05 10:59:47 EDT ---

Uploading tarball containing requested info. 
FYI, pvdisplay started around 2:00 a.m. and top output was:

29751    1    0  175M    0    0:04    0:04    0:00    9.35%  /usr/sbin/pvdisplay -v /dev/cciss/c0d0p2 /dev/emcpowera /dev/emcpowerb
17381   18  191M  CP:05  root

and then at 2:05:

29751    1    0 3418M    0    3:14    2:41    0:33   50.45%  /usr/sbin/pvdisplay -v /dev/cciss/c0d0p2 /dev/emcpowera /dev/emcpowerb
17381   18 6177M  CP:04  root

and at 02:07 kswapd comes into the picture - usedswap is 3905:

29751    1    0 3614M    0    3:50    3:11    0:39   60.39%  /usr/sbin/pvdisplay -v /dev/cciss/c0d0p2 /dev/emcpowera /dev/emcpowerb
17381   18 7315M  CP:02  root
  135    0    0    0K    0  583:39    0:00  583:39  kswapd0  1  15  sleep  root
  133    0    0    0K    0  653:14    0:00  653:14  kswapd2  1  15  sleep  root

--- Additional comment from mbroz@redhat.com on 2009-03-05 11:11:58 EDT ---

OK, this is the remaining part of the lvm2 performance problems on big VGs -
manipulating pool memory, as I mentioned here
Comment 2 Milan Broz 2009-04-07 07:20:21 EDT
Two problems here cause the excessive memory consumption.
The first is looping in PV re-reads; a patch for review is posted here.

The second issue is the use of a global memory pool (proposed patches sent to the same
Comment 4 Milan Broz 2009-05-21 05:22:21 EDT
Fix in version lvm2-2.02.46-1.el5.
Comment 6 Corey Marthaler 2009-07-02 11:48:41 EDT
Just a note that this fix passes the lvm regression test suite; however, we don't have thousands of LUNs to test this actual issue.
Comment 7 Chris Ward 2009-07-03 14:28:37 EDT
~~ Attention - RHEL 5.4 Beta Released! ~~

RHEL 5.4 Beta has been released! There should be a fix present in the Beta release that addresses this particular request. Please test and report back results here, at your earliest convenience. RHEL 5.4 General Availability release is just around the corner!

If you encounter any issues while testing Beta, please describe the issues you have encountered and set the bug into NEED_INFO. If you encounter new issues, please clone this bug to open a new issue and request it be reviewed for inclusion in RHEL 5.4 or a later update, if it is not of urgent severity.

Please do not flip the bug status to VERIFIED. Only post your verification results, and if available, update Verified field with the appropriate value.

Questions can be posted to this bug or your customer or partner representative.
Comment 10 errata-xmlrpc 2009-09-02 07:56:33 EDT
An advisory has been issued which should help the problem
described in this bug report. This report is therefore being
closed with a resolution of ERRATA. For more information
on the solution and/or where to find the updated files,
please follow the link below. You may reopen this bug report
if the solution does not work for you.

