Bug 1426032

Summary: Log message shows error code as success even when rpc fails to connect
Product: [Community] GlusterFS
Component: rpc
Version: mainline
Hardware: Unspecified
OS: Unspecified
Severity: unspecified
Priority: unspecified
Status: CLOSED CURRENTRELEASE
Reporter: Milind Changire <mchangir>
Assignee: Milind Changire <mchangir>
CC: amukherj, bugs, nbalacha, rhs-bugs, storage-qa-internal, vdas
Fixed In Version: glusterfs-3.11.0
Clone Of: 1387328
Bug Blocks: 1387328, 1451995, 1452122
Last Closed: 2017-05-30 18:44:46 UTC
Type: Bug

Comment 1 Milind Changire 2017-02-23 05:16:24 UTC
Description of problem:

Log message shows error code as success even when rpc fails to connect

[2016-10-20 14:27:14.474001] E [MSGID: 104024] [glfs-mgmt.c:735:mgmt_rpc_notify] 0-glfs-mgmt: failed to connect with remote-host: 10.70.46.212 (Success)

Version-Release number of selected component (if applicable):
samba-client-4.4.6-2.el7rhgs.x86_64
glusterfs-client-xlators-3.8.4-2.el7rhgs.x86_64
Windows 10 (SMB client)

How reproducible:
Always

Steps to Reproduce:
1. Create a 4-node gluster cluster with ctdb
2. Enable volfile setup in smb.conf with a unix+ socket (unix+/var/run/glusterd.socket tcp+<valid hostname>)
3. Reload the smb config file (smbcontrol smbd reload-config)
4. Kill glusterd on one of the server nodes
5. Mount on Windows using "net use", providing the IP of the server where glusterd was stopped
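Step 2 above refers to the vfs_glusterfs volfile server setting in smb.conf. A hypothetical share section illustrating it (share name, volume name, and hostname are placeholders; the unix+/tcp+ transport prefixes are taken from the step as written):

```ini
[gluster-share]
    vfs objects = glusterfs
    glusterfs:volume = testvol
    ; try the local glusterd unix socket first, then fall back to TCP
    glusterfs:volfile_server = unix+/var/run/glusterd.socket tcp+node1.example.com
    path = /
    read only = no
```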

Actual results:
The log reports the error as "(Success)" even though the RPC connection failed.

Expected results:
The log message should carry a meaningful error string, not "Success", when the connection fails.

Comment 2 Worker Ant 2017-02-23 05:17:41 UTC
REVIEW: https://review.gluster.org/16730 (rpc: avoid logging success on failure) posted (#1) for review on master by Milind Changire (mchangir)

Comment 3 Worker Ant 2017-02-24 08:02:08 UTC
REVIEW: https://review.gluster.org/16730 (rpc: avoid logging success on failure) posted (#2) for review on master by Milind Changire (mchangir)

Comment 4 Worker Ant 2017-03-05 10:32:28 UTC
REVIEW: https://review.gluster.org/16730 (rpc: avoid logging success on failure) posted (#3) for review on master by Milind Changire (mchangir)

Comment 5 Worker Ant 2017-03-05 11:12:24 UTC
REVIEW: https://review.gluster.org/16730 (rpc: avoid logging success on failure) posted (#4) for review on master by Milind Changire (mchangir)

Comment 6 Worker Ant 2017-03-05 16:10:35 UTC
REVIEW: https://review.gluster.org/16730 (rpc: avoid logging success on failure) posted (#5) for review on master by Milind Changire (mchangir)

Comment 7 Worker Ant 2017-03-07 12:05:42 UTC
COMMIT: https://review.gluster.org/16730 committed in master by Jeff Darcy (jdarcy) 
------
commit 89c6bedc1c2e978f67ca29f212a357984cd8a2dd
Author: Milind Changire <mchangir>
Date:   Sun Mar 5 21:39:20 2017 +0530

    rpc: avoid logging success on failure
    
    Avoid logging Success in the event of failure especially when errno has
    no meaningful value w.r.t. the failure. In this case the errno is set to
    zero when there's indeed a failure at the RPC level.
    
    Change-Id: If2cc81aa1e590023ed22892dacbef7cac213e591
    BUG: 1426032
    Signed-off-by: Milind Changire <mchangir>
    Reviewed-on: https://review.gluster.org/16730
    Smoke: Gluster Build System <jenkins.org>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.org>
    Reviewed-by: N Balachandran <nbalacha>
    Reviewed-by: Jeff Darcy <jdarcy>

Comment 8 Shyamsundar 2017-05-30 18:44:46 UTC
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.11.0, please open a new bug report.

glusterfs-3.11.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/announce/2017-May/000073.html
[2] https://www.gluster.org/pipermail/gluster-users/