Description of problem:
-----------------------
After upgrading one of the nodes in a 2-node cluster, the peer was shown as disconnected.

Version-Release number of selected component (if applicable):
-------------------------------------------------------------
glusterfs-3.7.5-0.2.el6rhs

How reproducible:
-----------------
Always

Steps to Reproduce:
-------------------
1. Install RHGS 3.1.1
2. Create a 2-node cluster
3. Upgrade to RHGS 3.1.2 (glusterfs-3.7.5-0.2.el6rhs)

Actual results:
---------------
'gluster peer status' shows the other peer as disconnected.

Expected results:
-----------------
All peers should be in the connected state post-upgrade.

Additional info:
Created attachment 1083199 [details] glusterd log file from the node1
Repeated the above steps on RHEL 7 with RHGS version glusterfs-3.7.5-0.3.el7. Here too, the peer shows as disconnected after updating one node of the two-node cluster, and it shows as connected only after updating the other node in the cluster.
Downstream patch for this bug is available at https://code.engineering.redhat.com/gerrit/59549
When a user upgrades from RHGS-3.1.1 to RHGS-3.1.2, the peer on the updated node goes to the DISCONNECTED state. This is because insecure ports are enabled by default in RHGS-3.1.2, while they were disabled by default in RHGS-3.1.1. So if the updated node (RHGS-3.1.2) sends an RPC request from an insecure port to the RHGS-3.1.1 node, the RHGS-3.1.1 node will not accept the request; as a consequence, the peer on the RHGS-3.1.2 node moves to the DISCONNECTED state.

Commit 243a5b429f225acb8e7132264fe0a0835ff013d5 enabled insecure ports by default in RHGS-3.1.2. The fix is to revert commit 243a5b429f225acb8e7132264fe0a0835ff013d5. The reason for reverting this change is that we do not want to ship a workaround for the above problem in a minor release.
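The server-side behavior described above can be sketched as follows. This is a minimal illustration, not glusterd's actual code: the names `accept_connection` and `allow_insecure` are hypothetical, and "insecure" here means a non-privileged source port (1024 or above), which is the usual RPC convention.

```python
# Minimal sketch of the privileged-port check described above.
# An RHGS-3.1.1 node rejects RPC connections arriving from an
# "insecure" (non-privileged, >= 1024) source port unless insecure
# ports have been explicitly allowed. All names are hypothetical.

PRIVILEGED_PORT_MAX = 1024  # source ports below this are "secure"

def accept_connection(source_port: int, allow_insecure: bool) -> bool:
    """Return True if the server would accept an RPC connection
    originating from source_port."""
    if source_port < PRIVILEGED_PORT_MAX:
        return True           # privileged source port: always accepted
    return allow_insecure     # insecure port: accepted only if allowed

# An RHGS-3.1.1 node (insecure disabled) rejects a request that an
# upgraded RHGS-3.1.2 node sends from an insecure port:
print(accept_connection(source_port=49152, allow_insecure=False))  # False
# Once both sides allow insecure ports, the same request is accepted:
print(accept_connection(source_port=49152, allow_insecure=True))   # True
```

This illustrates why the problem is one-sided: the mismatch only bites while the two nodes disagree on the default, which is why the revert (restoring the old default) resolves it for the minor release.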
This bug was verified with build glusterfs-3.7.5-5. It is working fine; the issue reported in the Description section is no longer seen. Moving this bug to the Verified state.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://rhn.redhat.com/errata/RHBA-2016-0193.html