Bug 1260119 - [BACKUP]: If more than 1 node in cluster is not added to known_hosts, glusterfind create command hangs
Summary: [BACKUP]: If more than 1 node in cluster is not added to known_hosts, glusterfind create command hangs
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: glusterfind
Version: rhgs-3.1
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: urgent
Target Milestone: ---
Target Release: ---
Assignee: Bug Updates Notification Mailing List
QA Contact: Sweta Anandpara
URL:
Whiteboard:
Depends On:
Blocks: 1255689 1260918 1284735
 
Reported: 2015-09-04 13:01 UTC by Rahul Hinduja
Modified: 2018-04-16 03:03 UTC
CC List: 5 users

Fixed In Version:
Doc Type: Known Issue
Doc Text:
The glusterfind command must be executed from one node of the cluster. If any of the cluster nodes are missing from the known_hosts list of the node initiating the command, the glusterfind create command hangs. Workaround: Add all the peer hosts, including the local node, to known_hosts.
Clone Of:
Clones: 1260918
Environment:
Last Closed: 2018-04-16 03:03:34 UTC
Embargoed:



Description Rahul Hinduja 2015-09-04 13:01:14 UTC
Description of problem:
======================

If more than 1 node in the cluster has no entry in the known_hosts file of the node creating the glusterfind session, the create command hangs forever.

[root@georep1 scripts]# glusterfind create s1 master
The authenticity of host '10.70.46.97 (10.70.46.97)' can't be established.
ECDSA key fingerprint is 76:e4:6d:07:1e:82:26:1c:0a:95:b2:4c:a3:3f:f1:e2.
Are you sure you want to continue connecting (yes/no)? The authenticity of host '10.70.46.154 (10.70.46.154)' can't be established.
ECDSA key fingerprint is b4:a8:00:41:ec:f8:12:a9:89:88:cb:7a:20:a8:83:3c.
Are you sure you want to continue connecting (yes/no)? The authenticity of host '10.70.46.97 (10.70.46.97)' can't be established.
ECDSA key fingerprint is 76:e4:6d:07:1e:82:26:1c:0a:95:b2:4c:a3:3f:f1:e2.
Are you sure you want to continue connecting (yes/no)? The authenticity of host '10.70.46.154 (10.70.46.154)' can't be established.
ECDSA key fingerprint is b4:a8:00:41:ec:f8:12:a9:89:88:cb:7a:20:a8:83:3c.
Are you sure you want to continue connecting (yes/no)? The authenticity of host '10.70.46.97 (10.70.46.97)' can't be established.
ECDSA key fingerprint is 76:e4:6d:07:1e:82:26:1c:0a:95:b2:4c:a3:3f:f1:e2.
Are you sure you want to continue connecting (yes/no)? The authenticity of host '10.70.46.93 (10.70.46.93)' can't be established.
ECDSA key fingerprint is 0d:bc:e3:70:e0:86:65:5e:3e:d2:ea:9c:fb:a9:53:66.
Are you sure you want to continue connecting (yes/no)? The authenticity of host '10.70.46.93 (10.70.46.93)' can't be established.
ECDSA key fingerprint is 0d:bc:e3:70:e0:86:65:5e:3e:d2:ea:9c:fb:a9:53:66.
Are you sure you want to continue connecting (yes/no)? The authenticity of host '10.70.46.93 (10.70.46.93)' can't be established.
ECDSA key fingerprint is 0d:bc:e3:70:e0:86:65:5e:3e:d2:ea:9c:fb:a9:53:66.
Are you sure you want to continue connecting (yes/no)? The authenticity of host '10.70.46.154 (10.70.46.154)' can't be established.
ECDSA key fingerprint is b4:a8:00:41:ec:f8:12:a9:89:88:cb:7a:20:a8:83:3c.
Are you sure you want to continue connecting (yes/no)? yes


[root@georep1 scripts]# cat /root/.ssh/known_hosts


Version-Release number of selected component (if applicable):
=============================================================

glusterfs-3.7.1-14.el7rhgs.x86_64

How reproducible:
=================

Always

Steps to Reproduce:
===================
1. Flush the known_hosts file on the node, or remove the cluster host entries from it (see the example commands after this list)
2. Create a glusterfind session
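
For example, using hosts from the log above (IPs are taken from this report; adjust for your cluster):

[root@georep1 scripts]# > /root/.ssh/known_hosts      # flush all known_hosts entries
[root@georep1 scripts]# ssh-keygen -R 10.70.46.97     # or drop a single peer's entry
[root@georep1 scripts]# glusterfind create s1 master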


Actual results:
===============
glusterfind session creation hangs

Expected results:
================

The session should be created successfully.

Comment 5 Aravinda VK 2015-09-07 10:52:20 UTC
RCA:

While connecting to other nodes programmatically, Geo-replication passes an additional option to ssh (-oStrictHostKeyChecking=no). Glusterfind needs to use the same option.

The other issue is the yes/no prompt for the local host, which comes from the scp command. scp needs the same option as ssh. A further fix is to skip the scp command entirely when the target is the local node.
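
For illustration, the non-interactive invocations would look like the following (the peer IP is taken from the log above; the remote command and file paths are placeholders, not the exact arguments glusterfind uses):

# ssh with host key checking disabled; unknown hosts are added to
# known_hosts automatically instead of blocking on a yes/no prompt
ssh -oStrictHostKeyChecking=no root@10.70.46.97 <remote-command>

# scp needs the same option, and should be skipped when the
# destination is the local node
scp -oStrictHostKeyChecking=no <local-file> root@10.70.46.97:<remote-path>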

Workaround:
Add all the peer hosts, including the local node, to known_hosts.
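
One way to pre-populate the file is ssh-keyscan, a standard OpenSSH tool (the IPs below are the peers seen in this report plus the local hostname; adjust for your cluster):

[root@georep1 scripts]# for host in 10.70.46.93 10.70.46.97 10.70.46.154 $(hostname); do
>   ssh-keyscan $host >> /root/.ssh/known_hosts
> done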

Comment 6 Anjana Suparna Sriram 2015-09-29 12:06:38 UTC
Hi Aravinda,

Please review the edited doc text and sign-off to be included.

Regards,
Anjana

Comment 7 Aravinda VK 2015-09-30 05:57:40 UTC
(In reply to Anjana Suparna Sriram from comment #6)
> Hi Aravinda,
> 
> Please review the edited doc text and sign-off to be included.
> 
> Regards,
> Anjana

Doc text looks good to me.

Comment 8 Amar Tumballi 2018-04-16 03:03:34 UTC
Feel free to reopen this bug if the issue still persists and you require a fix. Closing this as WONTFIX since we are not working on this bug, and treating it as a 'TIMEOUT'.

