Description of problem:
If vgscan is run periodically in a cluster (without any other lock operation), memory consumption of clvmd increases and can eventually cause an OOM. Caused by a missing dm_pool_empty() call in the VG context refresh path. (A possible workaround is to [re]activate some clustered LV, which should flush the allocated pool data.)

Version-Release number of selected component (if applicable):
lvm2-cluster-2.02.40-7.el5

How reproducible:
Just run
  while :; do vgscan; done
and watch
  while :; do ps wax -o command,rss,vsz | grep [c]lv ; sleep 5 ; done
Created attachment 333853 [details] Proposed patch
Created attachment 334132 [details] Proposed patch

Also covers the remote metadata backup command, which has the same problem.
Patch is in upstream CVS (for lvm2 2.02.46) and will be included in RHEL 5.4.
Created attachment 335687 [details] Milan's patch backported to 2.02.37

2.02.37 doesn't have the vg_read_internal etc. changes. The patch is just a rediff for that version.
My customer needs this for 4.7.z. Is it possible to get this into 4.7.z or 4.8? Should I create a new BZ for that release? One of the customers requesting this is Hilti.
Fixed in version lvm2-cluster-2.02.46-1.el5.
Marking verified. Didn't notice an increase after countless vgscan ops.

clvmd -T20  431464  519336
clvmd -T20  431464  519336
clvmd -T20  431464  519336
clvmd -T20  431464  519336

lvm2-2.02.46-5.el5
An advisory has been issued which should help the problem described in this bug report. This report is therefore being closed with a resolution of ERRATA. For more information on the solution and/or where to find the updated files, please follow the link below. You may reopen this bug report if the solution does not work for you. http://rhn.redhat.com/errata/RHBA-2009-1394.html