Description of problem:
2-node cluster: run service cman restart on one node and watch corosync memory consumption (RSS) on the other. Eventually the OOM killer is invoked. It happened quite early on our virts, as they do not have much RAM.

Version-Release number of selected component (if applicable):
corosync-1.4.7-4.el6.x86_64

How reproducible:
always

Steps to Reproduce:
1. start a 2-node cluster
2. run service cman restart in a loop on one node
3. watch the RSS size of corosync on the other node (cat /proc/`pgrep corosync`/status | grep RSS)
4. in my setup the number jumps up by about 2M on every cycle

Actual results:
memory footprint keeps increasing, OOM killer invoked eventually

Feb 5 17:04:25 virt-002 kernel: Out of memory: Kill process 5699 (corosync) score 412 or sacrifice child
Feb 5 17:04:25 virt-002 kernel: Killed process 5699, UID 0, (corosync) total-vm:1088500kB, anon-rss:776288kB, file-rss:44144kB

Expected results:
the memory footprint should stay almost the same throughout many cycles of rejoining

Additional info:
Jaroslav, can you please attach the config file and corosync.log from both nodes? Is this behavior new in 1.4.7-4, or was it also present in 1.4.7-2? Can you try corosync without cman so we can reduce the scope to corosync only (and not cman)?
Created attachment 1122175 [details] cluster logs (crm_report)
I've attached a crm_report, which I hope has all the useful info in one package. The same behaviour can be observed with corosync-1.4.7-2.el6.x86_64 (RHEL 6.7). Diff after 10 iterations (1.4.7-2):
VmRSS: 59844 kB
VmRSS: 80404 kB
OK, so it looks like a pretty minimal two-node cluster. Can you please try corosync without cman so we can reduce the scope to corosync only (and not cman)?
Also, pcsd was hitting the following glibc bug: https://bugzilla.redhat.com/show_bug.cgi?id=1102739 So maybe it's the same problem.
Created attachment 1122820 [details]
Proposed patch

totempg: Fix memory leak

Previously there were two free lists: one for the operational state and one for the transitional state. Because every node starts in the transitional state and always ends in the operational state, an assembly was always put on the operational-state free list and never on the transitional-state free list, so a new assembly structure was allocated every time a new node connected. The solution is to have only one free list.
*** Bug 1309809 has been marked as a duplicate of this bug. ***
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://rhn.redhat.com/errata/RHBA-2016-0753.html