Bug 1199936 - readv on /var/run/6b8f1f2526c6af8a87f1bb611ae5a86f.socket failed when NFS is disabled
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: glusterd
Version: 3.6.2
Hardware: x86_64
OS: Linux
Target Milestone: ---
Assignee: Nagaprasad Sathyanarayana
QA Contact:
URL:
Whiteboard:
Depends On: 1199944
Blocks: glusterfs-3.6.3
 
Reported: 2015-03-09 10:10 UTC by uli
Modified: 2016-02-18 00:21 UTC
7 users

Fixed In Version: glusterfs-v3.6.3
Clone Of:
: 1199944
Environment:
Last Closed: 2016-02-04 15:20:35 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description uli 2015-03-09 10:10:14 UTC
Description of problem:
Every 3 seconds there is the following warning which gets logged:

[2015-03-08 13:22:36.383715] W [socket.c:611:__socket_rwv] 0-management: readv on /var/run/6b8f1f2526c6af8a87f1bb611ae5a86f.socket failed (Invalid argument)

Version-Release number of selected component (if applicable):
GlusterFS 3.6.2

How reproducible:
Simply disable NFS on all your volumes


Steps to Reproduce:
1. disable NFS on your volumes using "gluster volume set volname nfs.disable on"

Actual results:
[2015-03-08 13:22:36.383715] W [socket.c:611:__socket_rwv] 0-management: readv on /var/run/6b8f1f2526c6af8a87f1bb611ae5a86f.socket failed (Invalid argument)

Expected results:
No warning in the log file

Additional info:
see http://www.gluster.org/pipermail/gluster-users/2015-March/020964.html

Comment 1 nico-redhat-bugzilla 2015-03-09 10:16:03 UTC
Having the same issue; it fills up the log files quickly and adds quite a lot of verbosity, which makes it hard to spot the "real" problems.

Just my 2 Rappen

Comment 2 krishnan parthasarathi 2015-03-09 10:37:30 UTC
Root cause analysis
-------------------

glusterd reconfigures node-level services such as gluster-nfs when volume options are modified via the volume-set or volume-reset commands. It is important to know that glusterd maintains a unix domain socket connection with every daemon it manages.

When a user disables NFS access to a GlusterFS volume using the "nfs.disable" option, the gluster-nfs process is reconfigured/restarted, and glusterd tries to connect to the unix domain socket corresponding to the gluster-nfs process. In this case it is likely that the user has just disabled NFS access on the last volume in the cluster, which means the gluster-nfs daemon shouldn't be running at all: its volfile no longer contains any volume, so the daemon shuts itself down. But glusterd has already gone past the point where it could have avoided spawning it or connecting to it. This results in the log messages mentioned in the bug synopsis. The messages repeat because our rpc implementation attempts a reconnect once every 3 seconds by default.

This problem may also be observed when a user restarts glusterd, for example after a software upgrade.
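The fix posted at http://review.gluster.org/9843 amounts to glusterd checking, before (re)spawning gluster-nfs, whether any volume still needs it. As a rough sketch only — the struct and function below are hypothetical stand-ins, not glusterd's actual data structures or API — the decision boils down to a scan over the volumes:

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical, simplified model of glusterd's per-volume state;
 * the real glusterd_volinfo_t carries far more fields. */
struct volinfo {
    const char *name;
    bool started;
    bool nfs_disabled;   /* the "nfs.disable" volume option */
};

/* Sketch of the check: gluster-nfs should only be (re)spawned if at
 * least one started volume still exports NFS. Otherwise glusterd must
 * neither start the daemon nor try to connect to its unix socket,
 * which is what triggered the repeating readv warning. */
bool nfs_service_needed(const struct volinfo *vols, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        if (vols[i].started && !vols[i].nfs_disabled)
            return true;
    }
    return false;
}
```

With a check like this in place, once every volume has nfs.disable set, glusterd skips both the spawn and the socket connect, so the 3-second reconnect timer never starts and the warning no longer floods the log.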

Comment 3 Anand Avati 2015-03-10 05:02:19 UTC
REVIEW: http://review.gluster.org/9843 (glusterd: don't start gluster-nfs when NFS is disabled) posted (#1) for review on release-3.6 by Krishnan Parthasarathi (kparthas)

Comment 4 Anatoly Pugachev 2015-03-10 13:07:18 UTC
duplicate of #847821 ( https://bugzilla.redhat.com/show_bug.cgi?id=847821 ) ?

Comment 5 Anand Avati 2015-03-25 08:54:12 UTC
COMMIT: http://review.gluster.org/9843 committed in release-3.6 by Raghavendra Bhat (raghavendra) 
------
commit 54a725694c47ce59f700c57424e77f9c13244460
Author: Krishnan Parthasarathi <kparthas>
Date:   Tue Mar 10 10:31:14 2015 +0530

    glusterd: don't start gluster-nfs when NFS is disabled
    
    Backport of http://review.gluster.org/9835
    
    Change-Id: Iff9c8e8d2233048f3e5c9ee8b5af38ba10193cb9
    BUG: 1199936
    Signed-off-by: Krishnan Parthasarathi <kparthas>
    Reviewed-on: http://review.gluster.org/9843
    Reviewed-by: Atin Mukherjee <amukherj>
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Kaleb KEITHLEY <kkeithle>
    Reviewed-by: Raghavendra Bhat <raghavendra>

Comment 7 Kaushal 2016-02-04 15:20:35 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-v3.6.3, please open a new bug report.

glusterfs-v3.6.3 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://www.gluster.org/pipermail/gluster-users/2015-April/021669.html
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user

