Description of problem:
When a glusterfind session is created on one of the nodes, 'glusterfind list' on that node shows the session as healthy. On a peer node, however, it tries to open the session's status file under $GLUSTERD_WORKDIR and, not finding it, reports the session as 'Session Corrupted'. The error logged on the peer node where the listing fails:

[2015-06-17 15:15:24,124] ERROR [utils - 152:fail] - Error Opening Session file /var/lib/glusterd/glusterfind/sessn4/nash/status: [Errno 2] No such file or directory: '/var/lib/glusterd/glusterfind/sessn4/nash/status'

Version-Release number of selected component (if applicable):
glusterfs-3.7.1-3.el6rhs.x86_64

How reproducible:
Always

Steps to Reproduce:
1. Have a cluster of two or more nodes with a volume (of any type), say 'vol1'
2. Create a glusterfind session from node1 and verify the output of 'glusterfind list'
3. Run 'glusterfind list' on the peer node(s) and verify the output

Actual results:
On the peer node, step 3 displays the session as 'Session Corrupted'.

Expected results:
The glusterfind session should be displayed as healthy when viewed from any peer node.

Additional info:
[root@dhcp43-93 ~]#
[root@dhcp43-93 ~]# glusterfind list
SESSION                   VOLUME                    SESSION TIME
---------------------------------------------------------------------------
sessn2                    nash                      2015-06-16 20:17:24
sessn3                    nash                      2015-06-16 17:47:02
sessp1                    pluto                     2015-06-16 21:15:06
sesso1                    ozone                     2015-06-15 23:48:42
sessn1                    nash                      2015-06-16 18:02:11
sessp2                    pluto                     2015-06-16 21:12:53
[root@dhcp43-93 ~]# gluster v list
nash
[root@dhcp43-93 ~]# glusterfind create sessn4 nash
Failed to set volume option build-pgfid on: volume set: failed: One or more connected clients cannot support the feature being set. These clients need to be upgraded or disconnected before running this command again
[root@dhcp43-93 ~]# glusterfind create sessn4 nash
Session sessn4 created with volume nash
[root@dhcp43-93 ~]#
[root@dhcp43-93 ~]# glusterfind list
SESSION                   VOLUME                    SESSION TIME
---------------------------------------------------------------------------
sessn2                    nash                      2015-06-16 20:17:24
sessn3                    nash                      2015-06-16 17:47:02
sessp1                    pluto                     2015-06-16 21:15:06
sessn4                    nash                      2015-06-17 15:14:18
sesso1                    ozone                     2015-06-15 23:48:42
sessn1                    nash                      2015-06-16 18:02:11
sessp2                    pluto                     2015-06-16 21:12:53
[root@dhcp43-93 ~]# ls /var/lib/glusterd/glusterfind/
.keys/  sessn1/  sessn2/  sessn3/  sessn4/  sesso1/  sesso2/  sesso3/  sessp1/  sessp2/  sessv1/
[root@dhcp43-93 ~]# ls /var/lib/glusterd/glusterfind/sessn4/nash/
%2Frhs%2Fthinbrick1%2Fnash.status  %2Frhs%2Fthinbrick2%2Fnash.status  sessn4_nash_secret.pem  sessn4_nash_secret.pem.pub  status
[root@dhcp43-93 ~]# ls /var/log/glusterfs/glusterfind/
cli.log  nash/  sess21/  sessn1/  sessn2/  sessn3/  sessn4/  sesso1/  sesso2/  sesso3/  sessp1/  sessp2/  sessv1/
[root@dhcp43-93 ~]# vi /var/log/glusterfs/glusterfind/sessn4/nash/cli.log

############### peer node ########################
[root@dhcp43-155 ~]#
[root@dhcp43-155 ~]# # after creating a new session 'sessn4'
[root@dhcp43-155 ~]#
[root@dhcp43-155 ~]# glusterfind list
SESSION                   VOLUME                    SESSION TIME
---------------------------------------------------------------------------
sessp1                    pluto                     Session Corrupted
sessn3                    nash                      Session Corrupted
sessp2                    pluto                     Session Corrupted
sessn4                    nash                      Session Corrupted
sessn2                    nash                      Session Corrupted
sessn1                    nash                      Session Corrupted
sesso1                    ozone                     Session Corrupted
[root@dhcp43-155 ~]# ls /var/log/glusterfs/glusterfind/
cli.log  sessn1/  sessn2/  sesso1/  sessp1/  sessp2/
[root@dhcp43-155 ~]# glusterfind pre sessn4 nash /tmp/outn.txt
Error Opening Session file /var/lib/glusterd/glusterfind/sessn4/nash/status: [Errno 2] No such file or directory: '/var/lib/glusterd/glusterfind/sessn4/nash/status'
[root@dhcp43-155 ~]# ls /var/lib/glusterd/glusterfind/
.keys/  sessn1/  sessn2/  sessn3/  sessn4/  sesso1/  sesso2/  sesso3/  sessp1/  sessp2/  sessv1/
[root@dhcp43-155 ~]# ls /var/lib/glusterd/glusterfind/sessn4/nash/
%2Frhs%2Fthinbrick1%2Fnash.status  %2Frhs%2Fthinbrick2%2Fnash.status
[root@dhcp43-155 ~]# ls /var/log/glusterfs/glusterfind/sessn4/nash/
cli.log
[root@dhcp43-155 ~]# vi /var/log/glusterfs/glusterfind/sessn4/nash/cli.log
[root@dhcp43-155 ~]# rpm -qa | grep glusterfs
glusterfs-api-3.7.1-3.el6rhs.x86_64
glusterfs-libs-3.7.1-3.el6rhs.x86_64
glusterfs-3.7.1-3.el6rhs.x86_64
glusterfs-fuse-3.7.1-3.el6rhs.x86_64
glusterfs-server-3.7.1-3.el6rhs.x86_64
glusterfs-client-xlators-3.7.1-3.el6rhs.x86_64
glusterfs-cli-3.7.1-3.el6rhs.x86_64
[root@dhcp43-155 ~]#
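For context, the failure mode above can be sketched in a few lines. This is a minimal illustration only, not the actual glusterfind source; the directory layout and the assumption that the session-level 'status' file holds the session time are inferred from the listings above:

#!/usr/bin/env python
# Minimal sketch of the failing path, for illustration only -- names and
# layout are assumptions, not copied from the glusterfind source.
import os

# Assumption: default $GLUSTERD_WORKDIR
GLUSTERFIND_DIR = "/var/lib/glusterd/glusterfind"

def list_sessions():
    for session in sorted(os.listdir(GLUSTERFIND_DIR)):
        session_dir = os.path.join(GLUSTERFIND_DIR, session)
        if session.startswith(".") or not os.path.isdir(session_dir):
            continue  # skip .keys/ and stray files
        for volume in os.listdir(session_dir):
            volume_dir = os.path.join(session_dir, volume)
            if not os.path.isdir(volume_dir):
                continue
            status_file = os.path.join(volume_dir, "status")
            try:
                # The session-level "status" file is written only on the
                # node where 'glusterfind create' ran; a peer node has only
                # the per-brick *.status files, so this open raises ENOENT.
                with open(status_file) as f:
                    session_time = f.read().strip()
                print("%-25s %-25s %s" % (session, volume, session_time))
            except (IOError, OSError):
                print("%-25s %-25s %s" % (session, volume, "Session Corrupted"))

if __name__ == "__main__":
    list_sessions()

Since the per-brick *.status files and pem keys are distributed to peers but the session-level status file is not, every session created elsewhere lands in the except branch on a peer node.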
glusterfind commands do not work on peer nodes other than the node where the session was initiated. We need to enhance glusterfind to use the meta volume to save the status files, and to collect the pem keys from all nodes and distribute them to every node of the cluster. With that enhancement, glusterfind commands could be run from any peer node. For this bug I will send a patch that stops showing the session row on peer nodes other than the initiating node.
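For illustration, a minimal sketch of that interim behavior; the function and variable names are mine, not from the actual patch (see the upstream review link below). A session directory without a session-level 'status' file was initiated on another node, so it is skipped rather than shown as corrupted:

import os

def session_rows(glusterfind_dir):
    # Illustrative helper, not the actual patch: collect
    # (session, volume, time) rows for 'glusterfind list'.
    rows = []
    for session in sorted(os.listdir(glusterfind_dir)):
        session_dir = os.path.join(glusterfind_dir, session)
        if session.startswith(".") or not os.path.isdir(session_dir):
            continue
        for volume in os.listdir(session_dir):
            status_file = os.path.join(session_dir, volume, "status")
            if not os.path.exists(status_file):
                # Session initiated elsewhere: emit no row at all on this
                # peer, rather than a misleading "Session Corrupted" row.
                continue
            with open(status_file) as f:
                rows.append((session, volume, f.read().strip()))
    return rows

On a peer node this yields no rows at all, which matches the "No sessions found" output seen in the verification below.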
Upstream patch sent: http://review.gluster.org/11699
Doc text is edited. Please sign off to be included in Known Issues.
(In reply to monti lawrence from comment #6)
> Doc text is edited. Please sign off to be included in Known Issues.

Doc text looks good to me.
Upstream patch posted. http://review.gluster.org/#/c/11699/
Downstream patch https://code.engineering.redhat.com/gerrit/#/c/56497/
Verified with build: glusterfs-3.7.1-14.el7rhgs.x86_64

A session is listed only on the main node from which it was created; on the other peers 'glusterfind list' shows "No sessions found", which is as per comment 4.

[root@georep1 ~]# glusterfind create sessi2 master
Session sessi2 created with volume master
[root@georep1 ~]#
[root@georep1 ~]# glusterfind list
SESSION                   VOLUME                    SESSION TIME
---------------------------------------------------------------------------
sessi2                    master                    2015-09-04 11:04:44
[root@georep1 ~]#
[root@georep1 ~]# glusterfind create gf_session master
Session gf_session created with volume master
[root@georep1 ~]# glusterfind list
SESSION                   VOLUME                    SESSION TIME
---------------------------------------------------------------------------
gf_session                master                    2015-09-04 11:10:04
sessi2                    master                    2015-09-04 11:04:44
[root@georep1 ~]#

Peer nodes:
===========
[root@georep2 ~]# glusterfind list
No sessions found
[root@georep2 ~]#

[root@georep3 ~]# glusterfind list
No sessions found
[root@georep3 ~]#

[root@georep4 ~]# glusterfind list
No sessions found
[root@georep4 ~]#

Moving this bug to the verified state; a doc bug will be opened to make sure this behavior is captured against "glusterfind list" in the admin guide.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://rhn.redhat.com/errata/RHSA-2015-1845.html