Bug 1199944 - readv on /var/run/6b8f1f2526c6af8a87f1bb611ae5a86f.socket failed when NFS is disabled
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: glusterd
Version: mainline
Hardware: x86_64
OS: Linux
Target Milestone: ---
Assignee: krishnan parthasarathi
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks: 1199936
Reported: 2015-03-09 10:38 UTC by krishnan parthasarathi
Modified: 2015-11-03 23:06 UTC

Fixed In Version: glusterfs-3.7.0
Doc Type: Bug Fix
Doc Text:
Clone Of: 1199936
Environment:
Last Closed: 2015-05-14 17:29:18 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description krishnan parthasarathi 2015-03-09 10:38:31 UTC
+++ This bug was initially created as a clone of Bug #1199936 +++

Description of problem:
Every 3 seconds there is the following warning which gets logged:

[2015-03-08 13:22:36.383715] W [socket.c:611:__socket_rwv] 0-management: readv on /var/run/6b8f1f2526c6af8a87f1bb611ae5a86f.socket failed (Invalid argument)

Version-Release number of selected component (if applicable):
GlusterFS 3.6.2

How reproducible:
Simply disable NFS on all your volumes


Steps to Reproduce:
1. disable NFS on your volumes using "gluster volume set volname nfs.disable on"

Actual results:
[2015-03-08 13:22:36.383715] W [socket.c:611:__socket_rwv] 0-management: readv on /var/run/6b8f1f2526c6af8a87f1bb611ae5a86f.socket failed (Invalid argument)

Expected results:
No warning in the log file

Additional info:
see http://www.gluster.org/pipermail/gluster-users/2015-March/020964.html

--- Additional comment from  on 2015-03-09 06:16:03 EDT ---

Having the same issue; it fills up the log files quickly and adds quite a lot of verbosity, which makes it hard to detect the "real" problems.

Just my 2 Rappen

--- Additional comment from krishnan parthasarathi on 2015-03-09 06:37:30 EDT ---

Root cause analysis
-------------------

glusterd reconfigures node-level services like gluster-nfs when volume options are modified via the volume-set or volume-reset commands. It is important to know that glusterd maintains a unix domain socket connection with every daemon that it manages.

When a user disables NFS access to a GlusterFS volume using the "nfs.disable" option, the gluster-nfs process is reconfigured/restarted, and glusterd tries to connect to the unix domain socket corresponding to the gluster-nfs process. In this case, the user has likely just disabled NFS access to the last volume in the cluster, so the gluster-nfs daemon's volfile no longer contains any volume and the daemon shuts itself down. By then, however, glusterd has already gone past the point where it could have avoided spawning the daemon or connecting to it. This results in the log messages mentioned in the bug synopsis. The messages repeat because our rpc implementation attempts a reconnect once every 3 seconds by default.
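
The failure mode above can be demonstrated with a minimal sketch (not glusterd code; the /tmp path and names are hypothetical stand-ins for /var/run/<hash>.socket). A daemon that shuts itself down leaves the socket file on disk with no listener behind it, so every reconnect attempt fails; the sketch shows the connect-time failure, while glusterd's warning comes from a later readv on the never-established connection:

```python
import os
import socket
import tempfile

# Hypothetical stand-in for /var/run/<hash>.socket; not a real gluster path.
path = os.path.join(tempfile.mkdtemp(), "gluster-nfs-demo.socket")

# Simulate a daemon that shut itself down: bind() creates the socket file,
# then the process exits without ever accepting connections. The file persists.
listener = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
listener.bind(path)
listener.close()

# Simulate one iteration of glusterd's reconnect loop: the socket file is
# still on disk, but no process is listening behind it.
client = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
error_message = None
try:
    client.connect(path)
except OSError as exc:
    error_message = exc.strerror  # "Connection refused" on Linux
finally:
    client.close()

print(f"connect on {path} failed ({error_message})")
```

Because the reconnect timer fires every 3 seconds, this one-shot failure becomes the steady stream of warnings seen in the logs.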

This problem may also be observed when a user restarts glusterd, for example after a software upgrade.

Comment 1 Anand Avati 2015-03-09 10:41:18 UTC
REVIEW: http://review.gluster.org/9835 (glusterd: don't start gluster-nfs when NFS is disabled) posted (#1) for review on master by Krishnan Parthasarathi (kparthas)

Comment 2 Anand Avati 2015-03-09 18:05:29 UTC
COMMIT: http://review.gluster.org/9835 committed in master by Vijay Bellur (vbellur) 
------
commit e99f9d3408e44c0ec12488662c9491be7da1f1fe
Author: Krishnan Parthasarathi <kparthas>
Date:   Mon Mar 9 15:53:53 2015 +0530

    glusterd: don't start gluster-nfs when NFS is disabled
    
    Change-Id: Ic4da2a467a95af7108ed67954f44341131b41c7b
    BUG: 1199944
    Signed-off-by: Krishnan Parthasarathi <kparthas>
    Reviewed-on: http://review.gluster.org/9835
    Reviewed-by: Niels de Vos <ndevos>
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Vijay Bellur <vbellur>

Comment 3 Anand Avati 2015-03-10 14:08:32 UTC
REVIEW: http://review.gluster.org/9851 (glusterd: create nfs volfile even when NFS is disabled on all volumes) posted (#1) for review on master by Krishnan Parthasarathi (kparthas)

Comment 4 Anand Avati 2015-03-10 14:47:02 UTC
REVIEW: http://review.gluster.org/9851 (glusterd: create nfs volfile even when NFS is disabled on all volumes) posted (#2) for review on master by Krishnan Parthasarathi (kparthas)

Comment 5 Anand Avati 2015-03-11 04:47:52 UTC
REVIEW: http://review.gluster.org/9851 (glusterd: create nfs volfile even when NFS is disabled on all volumes) posted (#3) for review on master by Krishnan Parthasarathi (kparthas)

Comment 6 Anand Avati 2015-03-11 13:59:17 UTC
COMMIT: http://review.gluster.org/9851 committed in master by Vijay Bellur (vbellur) 
------
commit 381abb5bd2b09a4c40b20ddbe6d385f9a849e384
Author: Krishnan Parthasarathi <kparthas>
Date:   Tue Mar 10 19:27:50 2015 +0530

    glusterd: create nfs volfile even when NFS is disabled on all volumes
    
    This is required to determine if gluster-nfs daemon needs to be
    restarted. With http://review.gluster.org/9835 gluster-nfs volfile
    wouldn't be created if all volumes had nfs disabled before they were
    started even once. With the existing code, we wouldn't be able to
    determine if gluster-nfs needs to be restarted or reconfigured without
    the gluster-nfs volfile. This fix ensures that we generate the
    gluster-nfs volfile even if the daemon wouldn't be started, to honour
    the above requirement.
    
    Change-Id: I86c6707870d838b03dd4d14b91b984cb43c33006
    BUG: 1199944
    Signed-off-by: Krishnan Parthasarathi <kparthas>
    Reviewed-on: http://review.gluster.org/9851
    Reviewed-by: Niels de Vos <ndevos>
    Reviewed-by: Atin Mukherjee <amukherj>
    Reviewed-by: Kaushal M <kaushal>
    Tested-by: Kaushal M <kaushal>
    Tested-by: Gluster Build System <jenkins.com>
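
Taken together, the two patches amount to a small decision rule: always regenerate the gluster-nfs volfile (so a later reconfigure can compare old and new volfiles), but only spawn the daemon when at least one started volume still has NFS enabled. A hypothetical sketch of that rule (function and field names are illustrative, not glusterd's actual code):

```python
def plan_nfs_service(volumes):
    """Decide what glusterd should do for gluster-nfs after a volume-set.

    `volumes` is a list of dicts with illustrative fields:
      started     -- whether the volume has been started
      nfs_disable -- the volume's nfs.disable option
    """
    # Per review 9851: generate the volfile unconditionally, so a later
    # reconfigure can diff old vs. new volfiles to decide on a restart.
    actions = ["generate-volfile"]

    # Per review 9835: only start the daemon if it would actually serve
    # at least one started volume with NFS enabled.
    if any(v["started"] and not v["nfs_disable"] for v in volumes):
        actions.append("start-daemon")
    else:
        actions.append("stop-daemon-if-running")
    return actions


# NFS disabled on every volume: no daemon, hence no reconnect loop to log.
print(plan_nfs_service([{"started": True, "nfs_disable": True}]))
```

With no daemon spawned, there is no dangling socket for glusterd to reconnect to, which is what silences the repeated readv warnings.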

Comment 7 Niels de Vos 2015-05-14 17:29:18 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.7.0, please open a new bug report.

glusterfs-3.7.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/10939
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user


