Description of problem:
There are two nodes, say node1 and node2, each of which has an Ethernet (eth0) and an RDMA (ib0) card in the system. When the user does peer probes in the following fashion, the nodes stay connected for a few seconds and then go to the Disconnected state.

From Node1:
-----------
gluster peer probe eth0(node2)
gluster peer probe ib0(node2)

From Node2:
-----------
gluster peer probe ib0(node1)

Output of gluster peer status from Node2:
-----------------------------------------
gluster peer status
Number of Peers: 1

Hostname: 10.70.36.48
Uuid: d739d055-15e2-4617-94d1-33c04bcf156b
State: Peer in Cluster (Disconnected)
Other names:
192.168.44.124

Output of gluster peer status from Node1:
-----------------------------------------
gluster peer status
Number of Peers: 1

Hostname: 10.70.36.57
Uuid: 2c255203-84b7-4506-bdad-4343d6c7a0eb
State: Peer in Cluster (Disconnected)
Other names:
192.168.44.125

Version-Release number of selected component (if applicable):
glusterfs-3.7dev-0.1009.git8b987be.el6.x86_64

How reproducible:
Always.

Steps to Reproduce:
1. Have two nodes with two NICs each, eth0 and ib0.
2. From node1, peer probe eth0 of node2, then peer probe ib0 of node2.
3. From node2, peer probe ib0 of node1.

Actual results:
gluster peer status goes to the Disconnected state.

Expected results:
gluster peer status should always be in the Connected state.

Additional info:
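The behavior in the report can be sketched with a small model. The following is a hypothetical, simplified Python sketch (glusterd itself is written in C; all names here are invented for illustration) of how a peer entry accumulates addresses: the primary hostname plus the "Other names" list belong to one entry keyed by UUID, so probing a peer over a second NIC resolves to the same peer object, and an RPC event on either transport updates the state of that single entry.

```python
# Hypothetical, simplified model of glusterd's peer table -- NOT the real
# glusterd code. One entry per peer UUID, carrying every address the peer
# has been probed on.

class Peer:
    def __init__(self, uuid, address):
        self.uuid = uuid
        self.addresses = [address]  # primary hostname + "Other names"
        self.state = "Peer in Cluster (Connected)"

def find_peer_by_address(peers, address):
    """Return the peer entry that lists this address, if any."""
    for peer in peers:
        if address in peer.addresses:
            return peer
    return None

def probe(peers, uuid, address):
    """Probing an already-known peer on a new NIC records another address
    on the existing entry instead of creating a second peer."""
    for peer in peers:
        if peer.uuid == uuid:
            if address not in peer.addresses:
                peer.addresses.append(address)
            return peer
    peer = Peer(uuid, address)
    peers.append(peer)
    return peer

peers = []
# node2 probed first over eth0, then over ib0 (addresses from the report):
probe(peers, "d739d055-15e2-4617-94d1-33c04bcf156b", "10.70.36.48")
probe(peers, "d739d055-15e2-4617-94d1-33c04bcf156b", "192.168.44.124")

# Both addresses resolve to the same single entry, so a disconnect event
# seen on either transport flips the state of the whole peer.
assert len(peers) == 1
assert (find_peer_by_address(peers, "10.70.36.48")
        is find_peer_by_address(peers, "192.168.44.124"))
```

This is only a model of the bookkeeping; the actual bug, as the fix below describes, is in how an RPC notification finds the right peerinfo object.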
When peer probing two systems that have two different NICs, the peer status always goes to the Disconnected state. Say I have nodes node1 and node2, each with two Ethernet cards, eth0 (10.70.36.x) and eth1 (10.70.33.x). When I perform the peer probes in the following way, the peer status always goes to the Disconnected state.

From Node1:
-----------
gluster peer probe eth0(node2)
gluster peer probe eth1(node2)

From Node2:
-----------
gluster peer probe eth1(node1)
Without this fix, the RHSC QE team is blocked on testing the gluster network feature.
REVIEW: http://review.gluster.org/10495 (glusterd: Stricter matching for peerinfo in glusterd_peer_rpc_notify) posted (#1) for review on master by Kaushal M (kaushal)
COMMIT: http://review.gluster.org/10495 committed in master by Kaushal M (kaushal)
------
commit 02583099a219ce327aac62af22b486c7b9fcb531
Author: Kaushal M <kaushal>
Date:   Wed May 6 13:10:15 2015 +0530

    glusterd: Use generation number to find peerinfo in RPC notifications

    The generation number for each peerinfo object is unique. It can be
    used to find the exact peerinfo object, which is required for peer
    RPC notifications. Using hostname and uuid matching to find peerinfos
    can cause incorrect peerinfos to be returned in certain cases, like
    multi-network peer probe. This could cause updates to happen to
    incorrect peerinfos.

    Change-Id: Ia0aada8214fd6d43381e5afd282e08d53a277251
    BUG: 1215018
    Signed-off-by: Kaushal M <kaushal>
    Reviewed-on: http://review.gluster.org/10495
    Tested-by: Gluster Build System <jenkins.com>
    Tested-by: NetBSD Build System
    Reviewed-by: Atin Mukherjee <amukherj>
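The idea behind the fix can be illustrated with a short model. This is a hypothetical Python sketch, not the actual glusterd C code: each peerinfo object gets a unique, monotonically increasing generation number at creation time, and RPC notifications look the object up by that number instead of by hostname/uuid matching, which can pick the wrong object when two peerinfos share an address (as can happen during a multi-network peer probe).

```python
import itertools

# Illustrative model only; glusterd is written in C and these names are
# invented for the sketch.
_generation = itertools.count(1)

class PeerInfo:
    def __init__(self, hostnames):
        self.generation = next(_generation)  # unique per object
        self.hostnames = list(hostnames)
        self.connected = True

def find_by_hostname(peerinfos, hostname):
    """Old-style lookup: returns the FIRST match, which may be the wrong
    object when two peerinfos list the same address."""
    for p in peerinfos:
        if hostname in p.hostnames:
            return p
    return None

def find_by_generation(peerinfos, generation):
    """New-style lookup: generation numbers are unique, so this is exact."""
    for p in peerinfos:
        if p.generation == generation:
            return p
    return None

# Two peerinfo objects that both list the same address, as can happen
# transiently during a multi-network peer probe:
a = PeerInfo(["10.70.36.48"])
b = PeerInfo(["10.70.36.48", "192.168.44.124"])
peerinfos = [a, b]

# A disconnect notification arrives for the RPC connection tied to `b`.
wrong = find_by_hostname(peerinfos, "10.70.36.48")   # matches `a` first
right = find_by_generation(peerinfos, b.generation)  # matches `b` exactly

assert wrong is a and right is b
```

With hostname matching, the disconnect would be applied to the wrong peerinfo, which is consistent with the spurious "Peer in Cluster (Disconnected)" state seen in the report; the generation-number lookup always resolves to the object that owns the RPC connection.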
RDMA feature testing from the console point of view is failing, and it fails even without the console. So it is marked as a blocker.
REVIEW: http://review.gluster.org/10623 (glusterd: Use generation number to find peerinfo in RPC notifications) posted (#1) for review on release-3.7 by Kaushal M (kaushal)
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.7.0, please open a new bug report.

glusterfs-3.7.0 has been announced on the Gluster mailing lists [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/10939
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.8.0, please open a new bug report.

glusterfs-3.8.0 has been announced on the Gluster mailing lists [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://blog.gluster.org/2016/06/glusterfs-3-8-released/
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user