Description of problem:
'glusterfind create' creates a session and distributes the public key to all its peers. That key is meant to be used by the subsequent 'glusterfind pre' command to find the modified files at the brick level on every node. BUT the 'glusterfind pre' command prompts the user for the password of the peer, which it is otherwise expected to take care of internally.

Version-Release number of selected component (if applicable):
3.7 upstream build glusterfs-server-3.7dev-0.777.git2308c07.el6.x86_64

How reproducible:
Always

Steps to Reproduce:
1. Have a 2-node cluster and a 2x2 volume 'nash'
2. Execute the 'glusterfind create' command to create a session for the volume 'nash'
3. Execute the 'glusterfind pre' command to start the brick crawl and get the list of changed files

Actual results:
Step 3 prompts the user for the password of the peer, and it does this multiple times, even after the correct password is given.

Expected results:
Step 3 should not prompt the user for a password. Step 2 should have taken care of setting up passwordless ssh from the node it is run on to all its peers.

Additional info:
Was not able to reproduce this on my older setup after doing an ssh-copy-id manually. Created a brand new setup and redid the steps to recreate the issue. The setup is in the same state if you would want to have a look.

Node1: 10.70.43.48
Node2: 10.70.42.147
Authentication: root/redhat

I do see the public key copied to /root/.ssh/authorized_keys on the peer. Pasted below are the detailed logs.
[root@dhcp43-48 glusterd]# gluster v i
Volume Name: nash
Type: Distributed-Replicate
Volume ID: cd66179e-6fda-49cf-b40f-be930bc01f6f
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 10.70.43.48:/rhs/brick1/dd
Brick2: 10.70.42.147:/rhs/brick1/dd
Brick3: 10.70.43.48:/rhs/brick2/dd
Brick4: 10.70.42.147:/rhs/brick2/dd
Options Reconfigured:
changelog.changelog: on
storage.build-pgfid: on
[root@dhcp43-48 glusterd]#
[root@dhcp43-48 glusterd]# ls
geo-replication  glusterd.info  glusterfind  glustershd  groups  hooks  nfs  options  peers  quotad  snaps  vols
[root@dhcp43-48 glusterd]# ls glusterfind/
[root@dhcp43-48 glusterd]# glusterfind create sess nash
[root@dhcp43-48 glusterd]#
[root@dhcp43-48 glusterd]# ls
geo-replication  glusterfind  glusterfind.secret.pem.pub  groups  nfs  peers  snaps
glusterd.info  glusterfind.secret.pem  glustershd  hooks  options  quotad  vols
[root@dhcp43-48 glusterd]# cat glusterfind.secret.pem.pub
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAvniql8bDymri+Wxr2BI41zqQ1sULDSewGWIR3mNcYEpHTU6eMhgG3csRUZx66LZwjQ/irDHPljltog6CNiPbqoqP0fty23AUBcLUVluinl6q3SCVQvqv1DXZuEG3aNBsJUoRjL9CEGak/aE0G7tvAZBOjOCWLXZIxoYmOsaBszxJdX4ei2COhHlouEch2pQMx3DKLKv5t8gFhmAc4SLI5AytiVsODyCqg9oGinjBEKyPgvDnympmT5lxjpo6P9ww0UqFbECNmYe4a3/XHY5Dp/8wfJH9Vf74wM3pxJ62TUOxqTAkRQmuxNnf3HcZCLaPlKCar8NuW0L0H2AQ+5UR6w== root.eng.blr.redhat.com
[root@dhcp43-48 glusterd]#
[root@dhcp43-48 glusterd]#
[root@dhcp43-48 glusterd]# glusterfind pre sess nash /root/out.txt
10.70.43.48 - Change detection failed
10.70.43.48 - Change detection failed
The authenticity of host '10.70.42.147 (10.70.42.147)' can't be established.
RSA key fingerprint is 8b:74:e8:f6:9e:55:9b:76:d0:78:28:08:b7:24:97:26.
Are you sure you want to continue connecting (yes/no)?
The authenticity of host '10.70.42.147 (10.70.42.147)' can't be established.
RSA key fingerprint is 8b:74:e8:f6:9e:55:9b:76:d0:78:28:08:b7:24:97:26.
Are you sure you want to continue connecting (yes/no)?
yes
Please type 'yes' or 'no': yes
root@10.70.42.147's password:
Please type 'yes' or 'no': root@10.70.42.147's password:
Please type 'yes' or 'no': root@10.70.42.147's password:
Please type 'yes' or 'no':
10.70.42.147 - Change detection failed
10.70.42.147 - Change detection failed
[root@dhcp43-48 glusterd]# cat glusterfind.secret.pem.pub
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAvniql8bDymri+Wxr2BI41zqQ1sULDSewGWIR3mNcYEpHTU6eMhgG3csRUZx66LZwjQ/irDHPljltog6CNiPbqoqP0fty23AUBcLUVluinl6q3SCVQvqv1DXZuEG3aNBsJUoRjL9CEGak/aE0G7tvAZBOjOCWLXZIxoYmOsaBszxJdX4ei2COhHlouEch2pQMx3DKLKv5t8gFhmAc4SLI5AytiVsODyCqg9oGinjBEKyPgvDnympmT5lxjpo6P9ww0UqFbECNmYe4a3/XHY5Dp/8wfJH9Vf74wM3pxJ62TUOxqTAkRQmuxNnf3HcZCLaPlKCar8NuW0L0H2AQ+5UR6w== root.eng.blr.redhat.com
[root@dhcp43-48 glusterd]#
[root@dhcp43-48 glusterd]# glusterfind pre sess nash /root/out.txt --change-detector=brickfind
root@10.70.42.147's password:
root@10.70.42.147's password:
root@10.70.42.147's password:
root@10.70.42.147's password:
root@10.70.42.147's password:
root@10.70.42.147's password:
root@10.70.42.147's password:
root@10.70.42.147's password:
Generated output file /root/out.txt
[root@dhcp43-48 glusterd]# cat /root/out.txt
[root@dhcp43-48 glusterd]# ls -l /root/out.txt
-rw-r--r--. 1 root root 0 Mar 27 22:48 /root/out.txt
[root@dhcp43-48 glusterd]#
[root@dhcp43-48 glusterd]#
[root@dhcp43-48 glusterd]# ssh 10.70.42.147
root@10.70.42.147's password:
[root@dhcp43-48 glusterd]#
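For context on why the prompts above are a bug: once 'glusterfind create' has placed a session key, the 'glusterfind pre' step should be able to run its remote commands without any interaction. A non-interactive ssh invocation of that shape can be sketched as follows (a hedged sketch only; the helper name ssh_cmd, the key path, and the exact option set are assumptions, not the tool's actual code):

```python
def ssh_cmd(host, remote_cmd,
            keyfile="/var/lib/glusterd/glusterfind.secret.pem"):
    """Sketch of a non-interactive ssh invocation.

    BatchMode=yes makes ssh fail immediately instead of prompting for a
    password, and StrictHostKeyChecking=no (an assumption here) would
    avoid the interactive yes/no host-authenticity question seen in the
    logs above.
    """
    return ["ssh",
            "-i", keyfile,                 # session key from 'glusterfind create'
            "-oBatchMode=yes",             # never fall back to password prompts
            "-oStrictHostKeyChecking=no",  # skip the host fingerprint question
            "root@%s" % host] + list(remote_cmd)

cmd = ssh_cmd("10.70.42.147", ["ls", "/rhs/brick1/dd"])
```

With options like these, a missing or overwritten key would surface as an immediate "Permission denied" error rather than the repeated password prompts shown in the transcript.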
REVIEW: http://review.gluster.org/10150 (tools/glusterfind: Prevent ssh public key overwrite issue) posted (#1) for review on master by Aravinda VK (avishwan)
REVIEW: http://review.gluster.org/10150 (tools/glusterfind: Prevent ssh public key overwrite issue) posted (#2) for review on master by Aravinda VK (avishwan)
COMMIT: http://review.gluster.org/10150 committed in master by Venky Shankar (vshankar)
------
commit 5cb5d7029216ce71b19fd798a86ef4c384262ba9
Author: Aravinda VK <avishwan>
Date: Tue Apr 7 15:05:09 2015 +0530

    tools/glusterfind: Prevent ssh public key overwrite issue

    The same ssh key was used for all the sessions; when multiple
    sessions were created in the cluster, public keys got overwritten
    by the newest session. Moved ssh keys to the respective session dir.

    BUG: 1206547
    Change-Id: I3d8fac9b24bc7c71445c7b4deae83104693e7dab
    Signed-off-by: Aravinda VK <avishwan>
    Reviewed-on: http://review.gluster.org/10150
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Kotresh HR <khiremat>
    Reviewed-by: Venky Shankar <vshankar>
    Tested-by: Venky Shankar <vshankar>
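The idea behind the fix can be illustrated with a small sketch (the helper below is hypothetical; the real change lives in tools/glusterfind and the exact file naming may differ): instead of one shared key file under the glusterd working directory, each session/volume pair gets its own key path, so creating a second session can no longer clobber the key of the first.

```python
import os

# Assumption: default glusterd working directory on RHEL-based systems.
GLUSTERD_WORKDIR = "/var/lib/glusterd"

def session_keyfile(session, volume, workdir=GLUSTERD_WORKDIR):
    """Hypothetical helper: per-session ssh key location.

    Before the fix every session shared one global key file
    (workdir/glusterfind.secret.pem); deriving the path from the
    session and volume names keeps each session's key separate.
    """
    return os.path.join(workdir, "glusterfind", session, volume,
                        "%s_%s_secret.pem" % (session, volume))

shared_key = os.path.join(GLUSTERD_WORKDIR, "glusterfind.secret.pem")
key_a = session_keyfile("sess1", "nash")
key_b = session_keyfile("sess2", "nash")
```

Because key_a and key_b are distinct paths, distributing the key for 'sess2' leaves the 'sess1' key (and the matching authorized_keys entry on the peers) intact, which is exactly the overwrite scenario the commit message describes.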
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.7.0, please open a new bug report. glusterfs-3.7.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution. [1] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/10939 [2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user