Bug 1375526

Summary: Kill rpc.statd on Linux machines
Product: [Community] GlusterFS
Reporter: Nigel Babu <nigelb>
Component: tests
Assignee: Nigel Babu <nigelb>
Status: CLOSED CURRENTRELEASE
QA Contact:
Severity: unspecified
Docs Contact:
Priority: unspecified
Version: mainline
CC: bugs, ndevos
Target Milestone: ---
Keywords: Triaged
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: glusterfs-3.10.0
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2017-03-06 17:26:11 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description Nigel Babu 2016-09-13 10:21:13 UTC
In our test harness we run this:

https://github.com/gluster/glusterfs/blob/master/tests/include.rc#L467
test x"$OSTYPE" = x"NetBSD" && pkill -9 perfused rpc.statd || true

Since rpc.statd is not killed on Linux machines, /var/log/messages fills up with entries like this:

Sep 11 04:20:42 slave33 sm-notify[16681]: Already notifying clients; Exiting!
Sep 11 04:20:42 slave33 sm-notify[16684]: Version 1.2.3 starting
Sep 11 04:20:42 slave33 sm-notify[16684]: Already notifying clients; Exiting!
Sep 11 04:20:42 slave33 sm-notify[16689]: Version 1.2.3 starting
Sep 11 04:20:42 slave33 sm-notify[16689]: Already notifying clients; Exiting!
Sep 11 04:20:42 slave33 sm-notify[16692]: Version 1.2.3 starting
Sep 11 04:20:42 slave33 sm-notify[16692]: Already notifying clients; Exiting!
Sep 11 04:20:42 slave33 sm-notify[16695]: Version 1.2.3 starting
Sep 11 04:20:42 slave33 sm-notify[16695]: Already notifying clients; Exiting!
Sep 11 04:20:42 slave33 sm-notify[16698]: Version 1.2.3 starting
Sep 11 04:20:42 slave33 sm-notify[16698]: Already notifying clients; Exiting!

Eventually the disk fills up and tests fail.

The line should be modified to be something like this:
pkill -9 perfused rpc.statd || true
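
The reason the original line never kills anything on Linux is plain shell semantics: in `test A && B || C`, the `B` command runs only when the test succeeds, so when $OSTYPE is not "NetBSD" the pkill is skipped and control falls through to `|| true`. A minimal sketch (the `simulate` helper and its echoed strings are illustrative, not part of the harness):

```shell
#!/bin/sh
# Sketch: `test ... && pkill ... || true` only reaches pkill when the
# OSTYPE check succeeds; on Linux the whole pkill is short-circuited.
simulate() {
    OSTYPE="$1"
    test x"$OSTYPE" = x"NetBSD" && echo "pkill runs" || echo "pkill skipped"
}
simulate NetBSD     # prints: pkill runs
simulate linux-gnu  # prints: pkill skipped
```

Dropping the `test` guard, as proposed above, makes the pkill unconditional on every platform; pkill exits non-zero when no matching process exists, which is why the trailing `|| true` is still needed so the harness does not abort.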

See bug 1375521 for an instance of this happening on our test nodes.

Comment 1 Worker Ant 2016-09-13 10:30:30 UTC
REVIEW: http://review.gluster.org/15485 (Kill rpc.statd on tests in Linux as well) posted (#1) for review on master by Nigel Babu (nigelb)

Comment 2 Worker Ant 2016-09-13 10:44:24 UTC
REVIEW: http://review.gluster.org/15485 (Kill rpc.statd on tests in Linux as well) posted (#2) for review on master by Nigel Babu (nigelb)

Comment 3 Worker Ant 2016-09-13 12:41:18 UTC
REVIEW: http://review.gluster.org/15485 (tests: Kill rpc.statd on tests in Linux as well) posted (#3) for review on master by Nigel Babu (nigelb)

Comment 4 Worker Ant 2016-09-13 12:59:26 UTC
REVIEW: http://review.gluster.org/15485 (tests: Kill rpc.statd on tests in Linux as well) posted (#4) for review on master by Nigel Babu (nigelb)

Comment 5 Worker Ant 2016-09-15 06:15:49 UTC
COMMIT: http://review.gluster.org/15485 committed in master by Niels de Vos (ndevos) 
------
commit a046e4d5bbd2ee756ff6fdb7aa1aca115002b133
Author: Nigel Babu <nigelb>
Date:   Tue Sep 13 15:54:48 2016 +0530

    tests: Kill rpc.statd on tests in Linux as well
    
    The lack of this causes the /var/messages file on Linux test nodes to be filled
    up and cause space issues.
    
    Change-Id: I4c741c34de7f584859d1c62bdfda44a3d79c7ecc
    BUG: 1375526
    Signed-off-by: Nigel Babu <nigelb>
    Reviewed-on: http://review.gluster.org/15485
    NetBSD-regression: NetBSD Build System <jenkins.org>
    Smoke: Gluster Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.org>
    Reviewed-by: Niels de Vos <ndevos>

Comment 6 Shyamsundar 2017-03-06 17:26:11 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.10.0, please open a new bug report.

glusterfs-3.10.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/gluster-users/2017-February/030119.html
[2] https://www.gluster.org/pipermail/gluster-users/