Bug 1459900 - Brick Multiplexing: Not cleaning up stale socket file results in spamming glusterd logs with "got disconnect from stale rpc" warnings
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: glusterd
Version: rhgs-3.3
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: RHGS 3.3.0
Assignee: Atin Mukherjee
QA Contact: Vinayak Papnoi
Depends On: 1460225
Blocks: 1417151
 
Reported: 2017-06-08 13:43 UTC by Nag Pavan Chilakam
Modified: 2017-09-21 04:59 UTC
CC List: 4 users

Fixed In Version: glusterfs-3.8.4-28
Cloned To: 1460225
Last Closed: 2017-09-21 04:59:42 UTC




Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2017:2774 0 normal SHIPPED_LIVE glusterfs bug fix and enhancement update 2017-09-21 08:16:29 UTC

Description Nag Pavan Chilakam 2017-06-08 13:43:12 UTC
Description of problem:
======================
On a brick mux setup, when a brick is killed and the volume is later started with volume start force, the stale socket file results in glusterd spamming the log with the message below:
[2017-06-08 13:36:09.699089] W [glusterd-handler.c:5678:__glusterd_brick_rpc_notify] 0-management: got disconnect from stale rpc on /rhs/brick31/test3_31

Version-Release number of selected component (if applicable):
========
3.8.4-27

How reproducible:
=========
always

Steps to Reproduce:
1. Have brick mux enabled; create 30 volumes, say v1..v30
2. Kill b1 (say the base volume for the glusterfsd is v1)
3. Now do a vol start force of, say, v25 (not the base volume v1)
4. Now do a vol start force of all the volumes: for i in $(gluster v list);do gluster v start $i force;done
5. The glusterd log is now spammed with the error below indefinitely, because the base volume's glusterfsd socket file is stale and still present (a scripted version of these steps is sketched after the log excerpt)
(in my case the stale socket file belongs to base volume test3_31)

[2017-06-08 13:37:45.712948] W [glusterd-handler.c:5678:__glusterd_brick_rpc_notify] 0-management: got disconnect from stale rpc on /rhs/brick31/test3_31
[2017-06-08 13:37:48.713752] W [glusterd-handler.c:5678:__glusterd_brick_rpc_notify] 0-management: got disconnect from stale rpc on /rhs/brick31/test3_31
[2017-06-08 13:37:51.713870] W [glusterd-handler.c:5678:__glusterd_brick_rpc_notify] 0-management: got disconnect from stale rpc on /rhs/brick31/test3_31
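
For convenience, the steps above can be scripted roughly as below. This is a minimal sketch assuming a fresh single-node setup; the volume names, brick paths, and use of $(hostname) are illustrative:

# enable brick multiplexing and create/start 30 volumes
gluster volume set all cluster.brick-multiplex on
for i in $(seq 1 30); do
    gluster volume create v$i $(hostname):/rhs/brick$i/v$i force
    gluster volume start v$i
done
# kill the multiplexed brick process with SIGKILL so its cleanup handler never runs
kill -9 $(pgrep glusterfsd)
# force-start a non-base volume, then all volumes
gluster volume start v25 force
for i in $(gluster v list); do gluster v start $i force; done
# glusterd.log now repeats the "got disconnect from stale rpc" warning every few seconds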


Workaround
=======
Delete the old stale socket file (see the sketch below).
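
A minimal cleanup sketch, assuming the default /var/run/gluster socket directory: compare the socket files on disk with the -S argument of the running gluster processes, and remove any file that no live process references:

# socket files on disk vs. sockets in use by live processes
ls /var/run/gluster/*.socket
ps -ef | grep '[g]luster' | grep -o -- '-S [^ ]*'
# a .socket file not claimed by any live process is stale; remove it, e.g.:
rm /var/run/gluster/244d1bc0f7f84308442802f1790e5265.socket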

#########################################################################
=======>taking note of glusterfsd before killing b1
[root@dhcp35-45 ~]# ps -ef|grep glusterfs
root      8676     1  0 18:32 ?        00:00:00 /usr/sbin/glusterfsd -s 10.70.35.45 --volfile-id test3_31.10.70.35.45.rhs-brick31-test3_31 -p /var/lib/glusterd/vols/test3_31/run/10.70.35.45-rhs-brick31-test3_31.pid -S /var/run/gluster/244d1bc0f7f84308442802f1790e5265.socket --brick-name /rhs/brick31/test3_31 -l /var/log/glusterfs/bricks/rhs-brick31-test3_31.log --xlator-option *-posix.glusterd-uuid=93894369-d7f0-4785-9f3e-5d5d1b62fd04 --brick-port 49152 --xlator-option test3_31-server.listen-port=49152
root      9830     1  0 18:33 ?        00:00:00 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/lib/glusterd/glustershd/run/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/a2927f9fe0e45a8fcbfedd20795017cd.socket --xlator-option *replicate*.node-uuid=93894369-d7f0-4785-9f3e-5d5d1b62fd04
root      9965 23301  0 18:36 pts/1    00:00:00 grep --color=auto glusterfs
[root@dhcp35-45 ~]# 
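
As an aside, the socket file a given brick uses can be read straight off its -S argument (a small sketch, using brick pid 8676 from the ps output above):

ps -o args= -p 8676 | grep -o -- '-S [^ ]*'
# -S /var/run/gluster/244d1bc0f7f84308442802f1790e5265.socket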

=========>listing socket files on n1 (one is the brick, one is the self-heal daemon)
[root@dhcp35-45 ~]# ls /var/run/gluster/*socket
/var/run/gluster/244d1bc0f7f84308442802f1790e5265.socket
/var/run/gluster/a2927f9fe0e45a8fcbfedd20795017cd.socket


====>killed the brick; below is the glusterd log
[2017-06-08 13:07:21.256731] W [socket.c:595:__socket_rwv] 0-management: readv on /var/run/gluster/244d1bc0f7f84308442802f1790e5265.socket failed (No data available)
[2017-06-08 13:07:21.257457] I [MSGID: 106005] [glusterd-handler.c:5685:__glusterd_brick_rpc_notify] 0-management: Brick 10.70.35.45:/rhs/brick31/test3_31 has disconnected from glusterd.

==> started volume test3_47 using force
[root@dhcp35-45 ~]# ls /var/run/gluster/*socket
/var/run/gluster/244d1bc0f7f84308442802f1790e5265.socket ===>old stale socket file
/var/run/gluster/8f02212ea4a6ad834a1851cc57834410.socket =====>new socket file
/var/run/gluster/a2927f9fe0e45a8fcbfedd20795017cd.socket
[root@dhcp35-45 ~]# 

[root@dhcp35-45 ~]# ps -ef|grep glusterfsd
root     10149     1  0 18:46 ?        00:00:00 /usr/sbin/glusterfsd -s 10.70.35.45 --volfile-id test3_47.10.70.35.45.rhs-brick47-test3_47 -p /var/lib/glusterd/vols/test3_47/run/10.70.35.45-rhs-brick47-test3_47.pid -S /var/run/gluster/8f02212ea4a6ad834a1851cc57834410.socket --brick-name /rhs/brick47/test3_47 -l /var/log/glusterfs/bricks/rhs-brick47-test3_47.log --xlator-option *-posix.glusterd-uuid=93894369-d7f0-4785-9f3e-5d5d1b62fd04 --brick-port 49153 --xlator-option test3_47-server.listen-port=49153
root     10359 23301  0 18:51 pts/1    00:00:00 grep --color=auto glusterfsd
[root@dhcp35-45 ~]# 


[root@dhcp35-45 ~]# gluster v status test3_47 
Status of volume: test3_47
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.35.45:/rhs/brick47/test3_47     49153     0          Y       10149
Brick 10.70.35.130:/rhs/brick47/test3_47    49152     0          Y       32613
Brick 10.70.35.122:/rhs/brick47/test3_47    49152     0          Y       17089
Self-heal Daemon on localhost               N/A       N/A        Y       10169
Self-heal Daemon on 10.70.35.23             N/A       N/A        Y       25357
Self-heal Daemon on 10.70.35.122            N/A       N/A        Y       18127
Self-heal Daemon on 10.70.35.130            N/A       N/A        Y       1625 
 
Task Status of Volume test3_47
------------------------------------------------------------------------------
There are no active volume tasks
 
[root@dhcp35-45 ~]# 
[root@dhcp35-45 ~]# ps -ef|grep glusterfsd
root     10149     1  0 18:46 ?        00:00:00 /usr/sbin/glusterfsd -s 10.70.35.45 --volfile-id test3_47.10.70.35.45.rhs-brick47-test3_47 -p /var/lib/glusterd/vols/test3_47/run/10.70.35.45-rhs-brick47-test3_47.pid -S /var/run/gluster/8f02212ea4a6ad834a1851cc57834410.socket --brick-name /rhs/brick47/test3_47 -l /var/log/glusterfs/bricks/rhs-brick47-test3_47.log --xlator-option *-posix.glusterd-uuid=93894369-d7f0-4785-9f3e-5d5d1b62fd04 --brick-port 49153 --xlator-option test3_47-server.listen-port=49153
root     10459 23301  0 19:00 pts/1    00:00:00 grep --color=auto glusterfsd

Comment 2 Atin Mukherjee 2017-06-09 06:28:36 UTC
This only happens when the brick process is killed with SIGKILL, not SIGTERM. Because the brick process was killed with SIGKILL, the signal handler was never invoked, so the cleanup that removes the socket file never ran; we therefore ended up with a stale socket file, which is why we see a constant series of stale disconnects. I can convert the gf_log instance to gf_log_occasionally to avoid this flood.
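
The difference is easy to see from the shell (a sketch; <pid> is a placeholder for the multiplexed glusterfsd pid):

# SIGTERM invokes the brick's signal handler, which unlinks its socket file on exit:
kill -TERM <pid>    # the .socket file under /var/run/gluster is cleaned up
# SIGKILL bypasses the handler, so no cleanup runs:
kill -KILL <pid>    # the stale .socket file remains and glusterd keeps hitting it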

Comment 3 Atin Mukherjee 2017-06-09 12:24:48 UTC
upstream patch : https://review.gluster.org/17499

Comment 5 Atin Mukherjee 2017-06-13 11:49:36 UTC
downstream patch : https://code.engineering.redhat.com/gerrit/#/c/108915/

Comment 8 Vinayak Papnoi 2017-06-22 11:09:00 UTC
Build : 3.8.4-28

Followed the steps in the description. The stale socket file still exists; however, the glusterd spam messages have been reduced (previously every 3 seconds, now every 2 minutes), which is the expected result of the fix.
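
A quick way to confirm the throttled rate is to compare the timestamps of the warning in the glusterd log (a sketch, assuming the default log path):

grep 'got disconnect from stale rpc' /var/log/glusterfs/glusterd.log | tail -5
# on 3.8.4-27 consecutive timestamps were ~3 seconds apart; on 3.8.4-28 they
# are roughly 2 minutes apart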

Hence, moving the bug to VERIFIED.

Comment 10 errata-xmlrpc 2017-09-21 04:59:42 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:2774

