Bug 1420324
Summary: | [GSS] Bricks do not reconnect after a disconnect if SSL is enabled | ||
---|---|---|---|
Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | Riyas Abdulrasak <rnalakka> |
Component: | core | Assignee: | Mohit Agrawal <moagrawa> |
Status: | CLOSED ERRATA | QA Contact: | Vivek Das <vdas> |
Severity: | medium | Docs Contact: | |
Priority: | unspecified | ||
Version: | rhgs-3.1 | CC: | amukherj, bkunal, moagrawa, nbalacha, pierre-yves.goubet, rcyriac, rhs-bugs, storage-qa-internal |
Target Milestone: | --- | ||
Target Release: | RHGS 3.2.0 | ||
Hardware: | All | ||
OS: | All | ||
Whiteboard: | ssl | ||
Fixed In Version: | glusterfs-3.8.4-2 | Doc Type: | If docs needed, set a value |
Doc Text: | Story Points: | --- | |
Clone Of: | Environment: | ||
Last Closed: | 2017-03-23 06:04:54 UTC | Type: | Bug |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | Category: | --- | |
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: | |||
Bug Depends On: | |||
Bug Blocks: | 1351528 |
Description
Riyas Abdulrasak
2017-02-08 13:04:05 UTC
Hi,

The issue is not reproducible on the latest package version (glusterfs-3.8.4-14.el6rhs.x86_64.rpm). I have to find out which patch resolved the above issue.

Regards
Mohit Agrawal

Hi Mohit,

I have tested with the glusterfs-3.8.4-14.el6rhs.x86_64.rpm package and brick re-connection happens with the latest package. As discussed, the brick re-connect messages were not appearing in the older version.

[2017-02-09 06:34:33.644139] I [socket.c:348:ssl_setup_connection] 0-testssl-client-0: peer CN = COMMONNAME
[2017-02-09 06:34:33.644195] I [socket.c:351:ssl_setup_connection] 0-testssl-client-0: SSL verification succeeded (client: 10.65.6.27:24007)
[2017-02-09 06:34:33.644818] I [MSGID: 114057] [client-handshake.c:1446:select_server_supported_programs] 0-testssl-client-0: Using Program GlusterFS 3.3, Num (1298437), Version (330)
[2017-02-09 06:34:33.645849] I [MSGID: 114046] [client-handshake.c:1222:client_setvolume_cbk] 0-testssl-client-0: Connected to testssl-client-0, attached to remote volume '/brick01/b01'.
[2017-02-09 06:34:33.645872] I [MSGID: 114047] [client-handshake.c:1233:client_setvolume_cbk] 0-testssl-client-0: Server and Client lk-version numbers are not same, reopening the fds
[2017-02-09 06:34:33.646333] I [MSGID: 114035] [client-handshake.c:201:client_set_lk_version_cbk] 0-testssl-client-0: Server lk version = 1

Regards
Riyas

Hi Mohit,

This patch seems to address the issue: https://review.gluster.org/#/c/13554/

"For an encrypted connection, sockect_connect() used to launch socket_poller() in it's own thread (ON by default), even if the connect failed. This would cause two unrefs to be done on the transport, once in socket_poller() and once in socket_connect(), causing the transport to be freed and cleaned up. This would cause further reconnect attempts from failing as the transport wouldn't be available."

The changes to the rpc/rpc-transport/socket/src/socket.c file seem very small and safe. Could you build a hotfix?

Regards,
Pierre-Yves

Hi Pierre,
Thanks for your analysis. I have not yet checked the code change to see why it is not working in 3.1.3;
I was busy with another bugzilla. In the 3.2 release we made many changes to the socket_poller code, so I will share my analysis after checking the code.
As for the patch you shared, it is already merged in the release (3.7.9-12) that you are using.
Below is the output of git log for the (3.1.3) branch:
>>>>>>>>>>>>>>>>
commit f125bb78b5a2abb41dec011d2f4fd525cb57ec93
Author: Kaushal M <kaushal>
Date: Tue Mar 1 13:04:03 2016 +0530
socket: Launch socket_poller only if connect succeeded
Backport of 92abe07 from master
>>>>>>>>>>>>>>>>>
Regards
Mohit Agrawal
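For context, the failure mode quoted above boils down to a reference-counting mistake. The following is a minimal, self-contained C sketch of that lifecycle, not the actual glusterfs code: transport_t, transport_unref(), poller_thread() and the two connect_* functions are hypothetical stand-ins for the real symbols in rpc/rpc-transport/socket/src/socket.c, and the real socket_poller() runs on its own thread rather than being called inline.

```c
/* Illustrative model of the bug fixed by https://review.gluster.org/#/c/13554/
 * (backport of 92abe07). All names here are simplified stand-ins. */
#include <stdio.h>
#include <stdlib.h>

typedef struct transport {
    int refcount;
} transport_t;

static transport_t *
transport_new (void)
{
    transport_t *t = calloc (1, sizeof (*t));
    t->refcount = 1;  /* the owner's reference, needed for later reconnects */
    return t;
}

static void
transport_unref (transport_t *t)
{
    if (--t->refcount == 0) {
        printf ("transport freed\n");
        free (t);
    }
}

/* Stand-in for socket_poller(): it always drops its own reference when it
 * exits, which is correct only if a reference was actually taken for it. */
static void
poller_thread (transport_t *t)
{
    transport_unref (t);
}

/* Buggy shape: the poller is launched even when connect() failed, so the
 * poller's unref plus the error-path unref also consume the owner's ref. */
static void
connect_buggy (transport_t *t, int connect_ret)
{
    t->refcount++;           /* reference handed to the poller */
    poller_thread (t);       /* launched unconditionally */
    if (connect_ret != 0)
        transport_unref (t); /* error path: second unref, transport dies */
}

/* Fixed shape: launch the poller only if connect succeeded, so a failed
 * connect leaves the owner's reference intact for a retry. */
static void
connect_fixed (transport_t *t, int connect_ret)
{
    if (connect_ret == 0) {
        t->refcount++;
        poller_thread (t);
    }
}

int
main (void)
{
    transport_t *a = transport_new ();
    connect_buggy (a, -1);   /* failed SSL connect: 'a' is freed, so a real
                              * reconnect attempt would have nothing to use */

    transport_t *b = transport_new ();
    connect_fixed (b, -1);   /* 'b' survives and a reconnect could retry */
    printf ("refcount after failed connect with the fix: %d\n", b->refcount);
    transport_unref (b);
    return 0;
}
```

In the buggy shape a single failed SSL connect drops the refcount to zero, which matches the symptom reported here: once a brick disconnects, reconnection silently never happens. The fix merely makes the poller's reference conditional on a successful connect.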
The downstream patch https://code.engineering.redhat.com/gerrit/85897 is already in rhgs-3.2.0. Moving the status to MODIFIED for now; I will move it to ON_QA once all the acks are in place.

Followed the steps to reproduce and I am unable to reproduce the issue reported in the bug.

Version
-------
glusterfs-3.8.4-15

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2017-0486.html