Bug 1445001

Summary: corosync can get swapped out
Product: Red Hat Enterprise Linux 7
Component: corosync
Version: 7.4
Hardware: All
OS: Linux
Severity: high
Priority: high
Status: CLOSED ERRATA
Reporter: Andrew Price <anprice>
Assignee: Jan Friesse <jfriesse>
QA Contact: Chris Mackowski <cmackows>
CC: bugproxy, ccaulfie, cfeist, cluster-maint, cmackows, gfs2-maint, hannsj_uhl, jkachuck, jruemker, royoung, swhiteho
Target Milestone: rc
Target Release: 7.4
Fixed In Version: corosync-2.4.0-9.el7
Clones: 1448906
Bug Blocks: 1273401, 1299988, 1446211, 1448906
Type: Bug
Last Closed: 2017-08-01 18:40:01 UTC

Attachments:
Main: Call mlockall after fork

Description Andrew Price 2017-04-24 17:29:15 UTC
Description of problem:

Despite its use of mlockall(MCL_CURRENT|MCL_FUTURE), corosync's pages can be swapped out, potentially causing stability issues. This could be related to the issues we're seeing on s390x (bug 1347782), but I'm not certain that it's the whole picture, so I'm filing this separately.
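
For context, here is a minimal standalone sketch (an illustrative example, not corosync code) of the behaviour being relied on: after a successful mlockall(MCL_CURRENT|MCL_FUTURE), VmLck in /proc/<pid>/status should be non-zero and the process's pages should stay resident.

/* Illustrative example, not corosync code: lock all current and
 * future pages, then wait so VmLck can be inspected from another
 * shell with: grep VmLck /proc/<pid>/status
 * (Needs CAP_IPC_LOCK or a sufficient RLIMIT_MEMLOCK.) */
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
        if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0) {
                perror("mlockall");
                return EXIT_FAILURE;
        }
        printf("pid %d: address space locked\n", (int)getpid());
        pause(); /* keep the process alive for inspection */
        return EXIT_SUCCESS;
}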

Version-Release number of selected component (if applicable):

corosync-2.4.0-8.el7.s390x

Steps to Reproduce:
1. Bring up a cluster
2. Apply memory pressure
3. grep VmLck /proc/`pidof corosync`/task/*/status
4. grep VmSwap /proc/`pidof corosync`/task/*/status

Actual results:

[root@qe-c02-m01 ~]# grep VmLck /proc/`pidof corosync`/task/*/status
/proc/1786/task/1786/status:VmLck:             0 kB
/proc/1786/task/1787/status:VmLck:             0 kB
[root@qe-c02-m01 ~]# grep VmSwap /proc/`pidof corosync`/task/*/status
/proc/1786/task/1786/status:VmSwap:        13648 kB
/proc/1786/task/1787/status:VmSwap:        13648 kB

Expected results:

VmLck = (non-zero) kB
VmSwap = 0 kB

Additional info:

Using strace -f corosync I found that the process that calls mlockall() exits after the "main" corosync process is created, so it's possible that mlockall() needs to be called later, after forking.

Relevant snippets from strace:

23070 mlockall(MCL_CURRENT|MCL_FUTURE)  = 0
...
23070 clone( <unfinished ...>
23071 set_robust_list(0x3fffcedb7d0, 24) = 0
23070 <... clone resumed> child_stack=0, flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|SIGCHLD, child_tidptr=0x3fffcedb7c0) = 23071
23071 setsid()                          = 23071
23071 open("/dev/null", O_RDWR)         = 5
23070 exit_group(0)                     = ?
...
23071 clone(child_stack=0x3fffbeb7220, flags=CLONE_VM|CLONE_FS|CLONE_FILES|CLONE_SIGHAND|CLONE_THREAD|CLONE_SYSVSEM|CLONE_SETTLS|CLONE_PARENT_SETTID|CLONE_CHILD_CLEARTID, parent_tidptr=0x3fffbeb79e0, tls=0x3fffbeb7910, child_tidptr=0x3fffbeb79e0) = 23072
23071 futex(0x3fffd36beac, FUTEX_WAIT_PRIVATE, 0, NULL <unfinished ...>
23072 set_robust_list(0x3fffbeb79f0, 24) = 0
23072 futex(0x3fffd36beac, FUTEX_WAKE_PRIVATE, 1) = 1
23071 <... futex resumed> )             = 0
23072 futex(0x3fffd36be8c, FUTEX_WAIT_PRIVATE, 0, NULL <unfinished ...>
23071 sched_setscheduler(23072, SCHED_RR, [99]) = 0
23071 open("/var/run/corosync.pid", O_WRONLY|O_CREAT, 0640) = 8
23071 fcntl(8, F_SETLK, {l_type=F_WRLCK, l_whence=SEEK_SET, l_start=0, l_len=0}) = 0
23071 ftruncate(8, 0)                   = 0
23071 write(8, "23071\n", 6)            = 6
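
The sequence above boils down to the following shape (a simplified sketch, not corosync's actual code): the memory locks are taken in the parent, which then forks and exits, so the surviving daemon process holds no locks.

/* Simplified sketch of the daemonize sequence seen in the strace
 * above (not corosync's actual code). Per mlockall(2), the child
 * created by fork() does not inherit the parent's memory locks, so
 * the surviving daemon ends up with VmLck == 0. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
        pid_t pid;

        if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0)
                perror("mlockall");     /* locks taken here, in the parent */

        pid = fork();
        if (pid < 0) {
                perror("fork");
                return EXIT_FAILURE;
        }
        if (pid > 0)
                _exit(EXIT_SUCCESS);    /* ...and discarded when it exits */

        /* Child: this becomes the daemon's "main" process, but holds
         * no memory locks. mlockall() would have to be called again
         * here, after the fork, for the daemon's pages to stay
         * resident. */
        (void)setsid();
        pause();
        return EXIT_SUCCESS;
}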

Comment 1 Andrew Price 2017-04-24 18:23:51 UTC
Testing with the patch below, which calls mlockall after the first fork(), I'm seeing what I would expect:

[root@qe-c01-m01 ~]# grep VmLck /proc/`pidof corosync`/task/*/status
/proc/29170/task/29170/status:VmLck:      209688 kB
/proc/29170/task/29171/status:VmLck:      209688 kB
[root@qe-c01-m01 ~]# grep VmSwap /proc/`pidof corosync`/task/*/status
/proc/29170/task/29170/status:VmSwap:          0 kB
/proc/29170/task/29171/status:VmSwap:          0 kB

Note that the first fork() is done conditionally in corosync_tty_detach(), so corosync -f shouldn't have this problem, and -f should not be used when reproducing.


diff --git a/exec/main.c b/exec/main.c
index 0922044..645153c 100644
--- a/exec/main.c
+++ b/exec/main.c
@@ -400,6 +400,8 @@ static void priv_drop (void)
        return; /* TODO: we are still not dropping privs */
 }
 
+static void corosync_mlockall (void);
+
 static void corosync_tty_detach (void)
 {
        int devnull;
@@ -424,7 +426,7 @@ static void corosync_tty_detach (void)
 
        /* Create new session */
        (void)setsid();
-
+       corosync_mlockall();
        /*
         * Map stdin/out/err to /dev/null.
         */

Comment 2 Christine Caulfield 2017-04-25 08:43:08 UTC
You're right. The mlockall() man page states:

Memory locks are not inherited by a child created via fork(2) and are
automatically removed (unlocked) during an execve(2) or when the
process terminates.
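
This is easy to confirm empirically. In the small demonstration below (an illustrative example, not corosync code), the parent locks its memory, forks, and both processes print their own VmLck from /proc/self/status; the parent reports a non-zero value while the child reports 0 kB.

/* Demonstration of the mlockall(2) behaviour quoted above.
 * (Needs CAP_IPC_LOCK or a sufficient memlock rlimit.) */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

static void print_vmlck(const char *who)
{
        char line[256];
        FILE *f = fopen("/proc/self/status", "r");

        if (!f)
                return;
        while (fgets(line, sizeof(line), f))
                if (strncmp(line, "VmLck:", 6) == 0)
                        printf("%s: %s", who, line);
        fclose(f);
}

int main(void)
{
        if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0) {
                perror("mlockall");
                return 1;
        }
        print_vmlck("parent");
        if (fork() == 0) {
                print_vmlck("child");   /* locks were not inherited */
                return 0;
        }
        wait(NULL);
        return 0;
}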

Comment 3 Andrew Price 2017-04-25 09:38:05 UTC
I've been running the s390x d_io tests against my quick-and-dirty patch from comment #1, and I haven't seen a corosync scheduling warning in 15+ hours. It would be good to get this fixed for 7.4, since the stability gains with memory-intensive workloads (like heavy buffered I/O) are pretty clear.

Comment 4 Christine Caulfield 2017-04-25 10:45:21 UTC
I'm tempted to mark this as a regression, but it hasn't worked since RHEL-5!

Comment 5 Christine Caulfield 2017-04-25 11:00:58 UTC
I don't know if there's any particular reason why mlockall is called so early in startup (it was called later in the RHEL-5 aisexec), but this patch works for me (including on the master branch with knet):

diff --git a/exec/main.c b/exec/main.c
index 0922044..3ea901d 100644
--- a/exec/main.c
+++ b/exec/main.c
@@ -1171,8 +1171,6 @@ int main (int argc, char **argv, char **envp)
                corosync_setscheduler ();
        }
 
-       corosync_mlockall ();
-
        /*
         * Other signals are registered later via qb_loop_signal_add
         */
@@ -1304,6 +1302,7 @@ int main (int argc, char **argv, char **envp)
        if (background) {
                corosync_tty_detach ();
        }
+       corosync_mlockall ();
 
        corosync_poll_handle = qb_loop_create ();

Comment 6 Andrew Price 2017-04-25 12:38:02 UTC
That looks more sensible to me.

Comment 7 Jan Friesse 2017-04-25 12:54:54 UTC
The reason mlockall was moved is ed7d054e552b4cb2a0cb502b65f84310ce6da844 (Remove priority inversion in logsys.). I can fully agree with that patch - logsys should have the same priority as the main corosync process - but mlockall shouldn't have been moved; that was surely an accident.

So good catch, nice patch, ACK and merged as https://github.com/corosync/corosync/commit/86012ebb45c85df277eb8fe5dc5223e2fdebf25c .

I believe it shouldn't be a problem to get this into 7.4.

Comment 8 Jan Friesse 2017-04-25 12:56:23 UTC
Created attachment 1273899 [details]
Main: Call mlockall after fork

Main: Call mlockall after fork

The mlockall man page is clear:
Memory locks are not inherited by a child created via fork(2) and are
automatically removed (unlocked) during an execve(2) or when the
process terminates.

So calling mlockall before corosync_tty_detach is a no-op when corosync is
executed as a daemon (corosync -f was not affected).

This regression is caused by ed7d054e552b4cb2a0cb502b65f84310ce6da844
(setprio for logsys/qblog was correct, mlockall was not).

The solution is to move the corosync_mlockall call to the correct place.

Signed-off-by: Andrew Price <anprice>
Reviewed-by: Christine Caulfield <ccaulfie>
Reviewed-by: Jan Friesse <jfriesse>
(cherry picked from commit 86012ebb45c85df277eb8fe5dc5223e2fdebf25c)

Comment 9 Jan Friesse 2017-04-25 13:13:08 UTC
For QE: this bug is not reliably "reproducible" on demand, so the test procedure is described in Comment 1.

Comment 14 Chris Mackowski 2017-05-16 20:39:10 UTC
*** Bug 1347782 has been marked as a duplicate of this bug. ***

Comment 18 errata-xmlrpc 2017-08-01 18:40:01 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:1999