Bug 1271999 - After upgrading to RHGS 3.1.2 build, the other peer was shown as disconnected
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: glusterd
x86_64 Linux
unspecified Severity high
: ---
: RHGS 3.1.2
Assigned To: Gaurav Kumar Garg
: ZStream
Depends On:
Blocks: 1260783
Reported: 2015-10-15 05:14 EDT by SATHEESARAN
Modified: 2016-06-05 19:38 EDT (History)
8 users

See Also:
Fixed In Version: glusterfs-3.7.5-5
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
RHEL-6.7 RHEL-7.1
Last Closed: 2016-03-01 00:41:00 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---

Attachments (Terms of Use)
glusterd log file from the node1 (446.39 KB, text/plain)
2015-10-15 05:17 EDT, SATHEESARAN

External Trackers
Tracker ID Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2016:0193 normal SHIPPED_LIVE Red Hat Gluster Storage 3.1 update 2 2016-03-01 05:20:36 EST

Description SATHEESARAN 2015-10-15 05:14:42 EDT
Description of problem:
After upgrading one of the nodes in a 2-node cluster, the peer was shown as disconnected.

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1. Install RHGS 3.1.1
2. Create a 2 node cluster
3. Upgrade to RHGS 3.1.2 ( glusterfs-3.7.5-0.2.el6rhs )

Actual results:
'gluster peer status' shows the other peer as disconnected.

Expected results:
All peers should be in the connected state post-upgrade.
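
For illustration only (hostname and UUID below are hypothetical, not taken from the attached logs), the symptom on the upgraded node would look something like this:

```
# gluster peer status
Number of Peers: 1

Hostname: node2.example.com
Uuid: b2f5e1c0-1111-2222-3333-444455556666
State: Peer in Cluster (Disconnected)
```

The expectation after a rolling upgrade of one node is that the state reads "Peer in Cluster (Connected)".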

Additional info:
Comment 2 SATHEESARAN 2015-10-15 05:17 EDT
Created attachment 1083199 [details]
glusterd log file from the node1
Comment 5 Byreddy 2015-10-16 00:57:48 EDT
Repeated the above steps on RHEL 7 with RHGS version glusterfs-3.7.5-0.3.el7; here too, the peer shows as disconnected after updating one node of the two-node cluster.

It shows connected only after the other node in the cluster is updated as well.
Comment 7 Gaurav Kumar Garg 2015-10-16 05:47:05 EDT
Downstream patch for this bug is available: https://code.engineering.redhat.com/gerrit/59549
Comment 8 Gaurav Kumar Garg 2015-10-16 05:48:42 EDT
When a user upgrades from RHGS-3.1.1 to RHGS-3.1.2, the peer on the updated node goes to the DISCONNECTED state.

This is because insecure ports are enabled by default in RHGS-3.1.2, while they were disabled by default in RHGS-3.1.1. So if the updated node (RHGS-3.1.2) sends an RPC request from an insecure port to an RHGS-3.1.1 node, the RHGS-3.1.1 node will not accept the request; as a consequence, the peer on the RHGS-3.1.2 node moves to the DISCONNECTED state.

Commit 243a5b429f225acb8e7132264fe0a0835ff013d5 enabled insecure ports by default in RHGS-3.1.2.

The fix is to revert commit 243a5b429f225acb8e7132264fe0a0835ff013d5.

The reason for reverting this change is that we don't want users to need a workaround for this problem in a minor release.
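
For context, without the revert the only way to keep a mixed 3.1.1/3.1.2 cluster connected during a rolling upgrade would be a manual compatibility step on the not-yet-upgraded node: allowing insecure RPC ports in glusterd's configuration. A sketch of that workaround follows (the file path and option are the standard glusterd ones; verify against your installation before applying):

```
# /etc/glusterfs/glusterd.vol on the RHGS-3.1.1 node
volume management
    type mgmt/glusterd
    option working-directory /var/lib/glusterd
    # accept management RPC requests originating from unprivileged
    # (> 1024) source ports, which RHGS-3.1.2 uses by default
    option rpc-auth-allow-insecure on
end-volume

# restart glusterd for the option to take effect
# service glusterd restart
```

This is exactly the kind of manual step the revert avoids requiring for a minor-release upgrade.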
Comment 10 Byreddy 2015-10-30 05:17:57 EDT
This bug was verified with build glusterfs-3.7.5-5.

It's working fine; the issue reported in the Description section is no longer seen.

Moving this bug to the next state (Verified).
Comment 12 errata-xmlrpc 2016-03-01 00:41:00 EST
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

