Bug 1305119

Summary: corosync memory footprint increases on every node rejoin
Product: Red Hat Enterprise Linux 6
Reporter: Jaroslav Kortus <jkortus>
Component: corosync
Assignee: Jan Friesse <jfriesse>
Status: CLOSED ERRATA
QA Contact: cluster-qe <cluster-qe>
Severity: high
Docs Contact:
Priority: medium
Version: 6.8
CC: ccaulfie, cluster-maint, jkortus, jruemker
Target Milestone: rc
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: corosync-1.4.7-5.el6
Doc Type: Bug Fix
Doc Text:
Cause: A node rejoins the cluster. Consequence: Some buffers in corosync are not freed, so memory consumption grows. Fix: Ensure all buffers are freed. Result: No memory is leaked.
Story Points: ---
Clone Of:
Clones: 1306349 (view as bug list)
Environment:
Last Closed: 2016-05-10 19:43:04 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Bug Depends On:    
Bug Blocks: 1306349    
Attachments:
cluster logs (crm_report) (flags: none)
Proposed patch (flags: none)

Description Jaroslav Kortus 2016-02-05 17:46:12 UTC
Description of problem:
In a 2-node cluster, run "service cman restart" on one node and watch corosync memory consumption (RSS) on the other.

Eventually, the OOM killer is invoked. It happened quite early on our virts, as they do not have much RAM.

Version-Release number of selected component (if applicable):
corosync-1.4.7-4.el6.x86_64

How reproducible:
always

Steps to Reproduce:
1. start 2-node cluster
2. run service cman restart in a loop on one node
3. watch the RSS of corosync on the other node (grep VmRSS /proc/`pgrep corosync`/status)
4. in my setup the number jumps up by about 2 MB on every cycle
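A small helper for step 3 (a sketch, not part of the original report; the function name rss_kb is made up) that reads the resident set size from /proc:

```shell
#!/bin/sh
# rss_kb: print the VmRSS (resident set size) of a PID in kB,
# taken from the VmRSS line of /proc/<pid>/status.
rss_kb() {
    awk '/^VmRSS:/ {print $2}' "/proc/$1/status"
}

# Demonstrate on the current shell; to track corosync instead you
# could run something like: watch -n 5 'rss_kb $(pgrep corosync)'
rss_kb "$$"
```

Sampling this once per "service cman restart" cycle makes the per-rejoin growth (~2 MB per cycle in the reproduction above) directly visible.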

Actual results:
memory footprint increasing, OOM invoked eventually
Feb  5 17:04:25 virt-002 kernel: Out of memory: Kill process 5699 (corosync) score 412 or sacrifice child
Feb  5 17:04:25 virt-002 kernel: Killed process 5699, UID 0, (corosync) total-vm:1088500kB, anon-rss:776288kB, file-rss:44144kB


Expected results:
the memory footprint should stay roughly constant across many rejoin cycles.

Additional info:

Comment 2 Jan Friesse 2016-02-08 07:00:20 UTC
Jaroslav,
can you please attach the config file and corosync.log from both nodes? Is this behavior new in 1.4.7-4, or was it also present in 1.4.7-2? Can you try corosync without cman, so we can reduce the scope to corosync only (and not cman)?

Comment 3 Jaroslav Kortus 2016-02-08 13:31:34 UTC
Created attachment 1122175 [details]
cluster logs (crm_report)

Comment 4 Jaroslav Kortus 2016-02-08 13:34:10 UTC
I've attached crm_report, which I hope has all useful info in one package.
The same behaviour can be observed using corosync-1.4.7-2.el6.x86_64 (RHEL 6.7).

Diff after 10 iterations (1.4.7-2):
VmRSS:     59844 kB
VmRSS:     80404 kB

Comment 5 Jan Friesse 2016-02-08 15:37:37 UTC
OK, so it looks like a pretty minimal two-node cluster.

Can you please try corosync without cman, so we can reduce the scope to corosync only (and not cman)?

Comment 6 Jan Friesse 2016-02-09 14:10:52 UTC
Also, pcsd was hitting the following glibc bug:
https://bugzilla.redhat.com/show_bug.cgi?id=1102739

So it may be the same problem.

Comment 7 Jan Friesse 2016-02-10 15:01:40 UTC
Created attachment 1122820 [details]
Proposed patch

totempg: Fix memory leak

Previously there were two free lists: one for the operational state and one for the transitional state. Because every node starts in the transitional state and always ends in the operational state, the assembly was always put on the operational-state free list and never on the transitional free list, so a new assembly structure was always allocated whenever a new node connected.

The solution is to have only one free list.

Comment 10 Jan Friesse 2016-02-19 08:01:19 UTC
*** Bug 1309809 has been marked as a duplicate of this bug. ***

Comment 13 errata-xmlrpc 2016-05-10 19:43:04 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-0753.html