Bug 1271999

Summary: After upgrading to RHGS 3.1.2 build, the other peer was shown as disconnected
Product: Red Hat Gluster Storage [Red Hat Storage]
Reporter: SATHEESARAN <sasundar>
Component: glusterd
Assignee: Gaurav Kumar Garg <ggarg>
Status: CLOSED ERRATA
QA Contact: Byreddy <bsrirama>
Severity: high
Priority: unspecified
Version: rhgs-3.1
CC: asrivast, bsrirama, ggarg, nlevinki, rgowdapp, sankarshan, smohan, vbellur
Target Milestone: ---
Keywords: ZStream
Target Release: RHGS 3.1.2
Hardware: x86_64
OS: Linux
Whiteboard: glusterd
Fixed In Version: glusterfs-3.7.5-5
Doc Type: Bug Fix
Environment: RHEL-6.7, RHEL-7.1
Last Closed: 2016-03-01 05:41:00 UTC
Type: Bug
Bug Blocks: 1260783
Attachments: glusterd log file from the node1

Description SATHEESARAN 2015-10-15 09:14:42 UTC
Description of problem:
-----------------------
After upgrading one of the nodes in a 2-node cluster, the other peer was shown as disconnected.

Version-Release number of selected component (if applicable):
-------------------------------------------------------------
glusterfs-3.7.5-0.2.el6rhs

How reproducible:
-----------------
Always

Steps to Reproduce:
-------------------
1. Install RHGS 3.1.1
2. Create a 2 node cluster
3. Upgrade to RHGS 3.1.2 ( glusterfs-3.7.5-0.2.el6rhs )

Actual results:
---------------
'gluster peer status' shows the other peer as disconnected.
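For illustration only (the hostname and UUID below are placeholders, not values from the attached logs), the symptom on the upgraded node looks roughly like:

```
# gluster peer status
Number of Peers: 1

Hostname: node2.example.com
Uuid: 00000000-0000-0000-0000-000000000000
State: Peer in Cluster (Disconnected)
```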

Expected results:
-----------------
All peers should be in the connected state post-upgrade.


Additional info:

Comment 2 SATHEESARAN 2015-10-15 09:17:54 UTC
Created attachment 1083199 [details]
glusterd log file from the node1

Comment 5 Byreddy 2015-10-16 04:57:48 UTC
Repeated the above steps on RHEL 7 with RHGS version glusterfs-3.7.5-0.3.el7; here too the peer shows disconnected after updating one of the two nodes in the cluster.

It shows connected only after updating the other node in the cluster.

Comment 7 Gaurav Kumar Garg 2015-10-16 09:47:05 UTC
The downstream patch for this bug is available at https://code.engineering.redhat.com/gerrit/59549

Comment 8 Gaurav Kumar Garg 2015-10-16 09:48:42 UTC
When a user upgrades from RHGS-3.1.1 to RHGS-3.1.2, the peer on the updated node goes to the DISCONNECTED state.

This is because insecure ports were enabled by default in RHGS-3.1.2, while they were disabled by default in RHGS-3.1.1. So when the updated node (RHGS-3.1.2) sends an RPC request from an insecure port to an RHGS-3.1.1 node, the RHGS-3.1.1 node does not accept the request; as a consequence, the peer on the RHGS-3.1.2 node moves to the DISCONNECTED state.

Commit 243a5b429f225acb8e7132264fe0a0835ff013d5 enabled insecure ports by default in RHGS-3.1.2.

The fix is to revert commit 243a5b429f225acb8e7132264fe0a0835ff013d5.

The reason for reverting this change is that we don't want to require a workaround for the above problem in a minor release.
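For reference, the settings involved here are gluster's insecure-port options. A manual workaround on the older node (which this fix avoids requiring) would roughly be the following; the option names are real gluster options, but treat the exact snippet as a sketch rather than a supported procedure:

```
# On the RHGS-3.1.1 node, allow glusterd to accept RPC requests
# arriving from non-privileged (insecure) ports, by adding to the
# management volume definition in /etc/glusterfs/glusterd.vol:
#
#     option rpc-auth-allow-insecure on
#
# then restart glusterd. Per-volume, bricks can likewise be told to
# accept clients connecting from insecure ports:
gluster volume set <VOLNAME> server.allow-insecure on
```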

Comment 10 Byreddy 2015-10-30 09:17:57 UTC
This bug was verified with build glusterfs-3.7.5-5.

It is working fine; the issue reported in the Description section is no longer seen.

Moving this bug to the next state (Verified).

Comment 12 errata-xmlrpc 2016-03-01 05:41:00 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-0193.html