Bug 1004699 - glusterd: If an RHSS node is already part of another cluster and the user tries to add it with 'gluster peer probe <hostname/ip>', the probe fails but gives no reason for the failure
Summary: glusterd: If RHSS is already part of another cluster and User tries to add it...
Keywords:
Status: CLOSED EOL
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: glusterd
Version: 2.1
Hardware: x86_64
OS: Linux
Priority: high
Severity: medium
Target Milestone: ---
Target Release: ---
Assignee: Nagaprasad Sathyanarayana
QA Contact: SATHEESARAN
URL:
Whiteboard:
Depends On:
Blocks: 1010153 1237022 1252448
 
Reported: 2013-09-05 09:28 UTC by Rachana Patel
Modified: 2016-03-21 10:43 UTC (History)
CC List: 7 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
: 1010153 1237022 (view as bug list)
Environment:
Last Closed: 2015-12-03 17:11:42 UTC
Embargoed:



Description Rachana Patel 2013-09-05 09:28:13 UTC
Description of problem:
glusterd: If an RHSS node is already part of another cluster and the user tries to add it with 'gluster peer probe <hostname/ip>', the probe fails with the error 'peer probe: failed:' but gives no reason for the failure.


Version-Release number of selected component (if applicable):
3.4.0.30rhs-2.el6rhs.x86_64

How reproducible:
always

Steps to Reproduce:
1. Start with a cluster of two RHSS nodes:
[root@DHT2 ~]# gluster peer status
Number of Peers: 1

Hostname: 10.70.37.195
Uuid: 0d0f02d7-1dd1-4252-bff9-3c28b113fba0
State: Peer in Cluster (Connected)


2. From a third RHSS node, try to probe one of the nodes in that cluster:

[root@DHT3 ~]# gluster peer probe 10.70.37.66
peer probe: failed: 
[root@DHT3 ~]# echo $?
1
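The reproduction above can be scripted. The sketch below is a hypothetical wrapper (the `probe_peer` and `failure_reason` helpers are not part of glusterd; they assume `gluster` is on PATH) that makes the bug visible from a script: the command exits non-zero, but the reason portion of the message is empty.

```python
import subprocess

def probe_peer(host, gluster_cmd="gluster"):
    """Run `gluster peer probe <host>` and return (exit_code, message).

    The bug: on failure the message is just "peer probe: failed:" with
    nothing appended, so callers cannot tell *why* the probe failed.
    """
    result = subprocess.run(
        [gluster_cmd, "peer", "probe", host],
        capture_output=True, text=True,
    )
    message = (result.stdout + result.stderr).strip()
    return result.returncode, message

def failure_reason(message):
    """Extract the reason text after the 'peer probe: failed:' prefix."""
    prefix = "peer probe: failed:"
    if message.startswith(prefix):
        return message[len(prefix):].strip()
    return message
```

With this bug, `failure_reason("peer probe: failed:")` returns an empty string, whereas the pre-regression output (see Expected results below) would yield a non-empty reason.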



Actual results:
The probe fails without giving any reason for the failure.

Expected results:
The probe should report the reason for the failure, for example that the host is already part of another cluster.

In Anshi, the reason for the failure was reported, as shown below:

[root@localhost ~]# gluster peer probe 10.70.42.186
10.70.42.186 is already part of another cluster

[root@localhost ~]# glusterfs -V
glusterfs 3.3.0.7rhs built on Mar 20 2013 13:29:01
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2011 Gluster Inc. <http://www.gluster.com>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU
General Public License.
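One way the CLI could restore the Anshi-style message is to map the probe status back to a human-readable reason before printing. The sketch below is purely illustrative: the status constants and `probe_error_message` helper are hypothetical, not glusterd's actual enums or functions.

```python
# Hypothetical probe status codes; glusterd's real enums differ.
PROBE_SUCCESS = 0
PROBE_ANOTHER_CLUSTER = 1
PROBE_QUORUM_NOT_MET = 2

_REASONS = {
    PROBE_ANOTHER_CLUSTER: "{host} is already part of another cluster",
    PROBE_QUORUM_NOT_MET: "cluster quorum is not met",
}

def probe_error_message(status, host):
    """Return a CLI message that always carries a reason,
    never a bare 'peer probe: failed:'."""
    if status == PROBE_SUCCESS:
        return "peer probe: success"
    reason = _REASONS.get(status, "unknown error (status {0})".format(status))
    return "peer probe: failed: " + reason.format(host=host)
```

The design point is the fallback: even an unrecognized status produces *some* reason text, so the empty-message regression reported here cannot occur.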


Additional info:
log snippet:-
/var/log/glusterfs/etc-glusterfs-glusterd.vol.log 

[2013-09-05 05:08:24.320028] I [glusterd-handler.c:821:__glusterd_handle_cli_probe] 0-glusterd: Received CLI probe req 10.70.37.66 24007
[2013-09-05 05:08:24.577461] I [glusterd-handler.c:2905:glusterd_probe_begin] 0-glusterd: Unable to find peerinfo for host: 10.70.37.66 (24007)
[2013-09-05 05:08:24.730458] I [rpc-clnt.c:974:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600
[2013-09-05 05:08:24.730618] I [socket.c:3487:socket_init] 0-management: SSL support is NOT enabled
[2013-09-05 05:08:24.730646] I [socket.c:3502:socket_init] 0-management: using system polling thread
[2013-09-05 05:08:24.735911] I [glusterd-handler.c:2886:glusterd_friend_add] 0-management: connect returned 0
[2013-09-05 05:08:24.936651] I [glusterd-rpc-ops.c:235:__glusterd_probe_cbk] 0-glusterd: Received probe resp from uuid: bdfdd4e6-9d7a-4759-8c20-bb5e76adc3d5, host: 10.70.37.66
[2013-09-05 05:08:24.936980] I [mem-pool.c:539:mem_pool_destroy] 0-management: size=2236 max=2 total=4
[2013-09-05 05:08:24.937095] I [mem-pool.c:539:mem_pool_destroy] 0-management: size=124 max=2 total=4
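The probe response can be picked out of the log with a small parser. This is a sketch assuming the `__glusterd_probe_cbk` line format shown in the snippet above; the `probe_responses` helper is not part of glusterd.

```python
import re

# Matches the __glusterd_probe_cbk line format from the log snippet above.
PROBE_RESP_RE = re.compile(
    r"\[(?P<ts>[^\]]+)\]\s+\w+\s+"
    r"\[glusterd-rpc-ops\.c:\d+:__glusterd_probe_cbk\]"
    r".*Received probe resp from uuid: (?P<uuid>[0-9a-f-]+), host: (?P<host>\S+)"
)

def probe_responses(log_lines):
    """Yield (timestamp, uuid, host) for each probe response in the log."""
    for line in log_lines:
        m = PROBE_RESP_RE.search(line)
        if m:
            yield m.group("ts"), m.group("uuid"), m.group("host")
```

Run against the snippet above, this yields one tuple for the 10.70.37.66 probe response; note the log records a successful RPC exchange even though the CLI reports a bare failure.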

Comment 2 Amar Tumballi 2013-09-05 12:13:49 UTC
Can we remove the 'blocker' flag even though it is a regression? We are not giving a devel-ack to fix it in time now.

Comment 3 Sachidananda Urs 2013-09-05 12:40:49 UTC
Amar, removed the blocker flag since this is cosmetic (an error message update).

Comment 7 Vivek Agarwal 2015-12-03 17:11:42 UTC
Thank you for submitting this issue for consideration in Red Hat Gluster Storage. The release you requested us to review is now End of Life. Please see https://access.redhat.com/support/policy/updates/rhs/

If you can reproduce this bug against a currently maintained version of Red Hat Gluster Storage, please feel free to file a new report against the current release.

