Description of problem:
When we choose not to go ahead with a particular session, we use glusterfind delete to remove its entry from the list. However, that leaves the session directory and all the information under it (at /var/lib/glusterd/glusterfind/) intact. True, it does not hamper or get in the way of creating another session with the same name, but it does lead to unnecessary confusion. The same is the case with the logs at /var/log/glusterfs/glusterfind/, although we would want to keep those around for reference at some later point in time.

Version-Release number of selected component (if applicable):
glusterfs-3.7.0-3.el6rhs.x86_64

How reproducible:
Always

Steps to Reproduce (a shell sketch of these steps follows below):
1. Have a 2-node cluster, create a volume 'ozone' and a glusterfind session 'sesso1'
2. Run pre and post a couple of times
3. Execute glusterfind delete to delete the session 'sesso1'
4. Check the contents of /var/lib/glusterd/glusterfind/sesso1/ozone

Actual results:
Step 4 displays the status information related to 'sesso1'

Expected results:
glusterfind delete should delete the directory 'sesso1' and its entire contents
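For convenience, a minimal shell sketch of the reproduction, run from one of the cluster nodes. The host names (node1/node2), brick paths, and the pre output file below are placeholders, not the exact setup used here:

# create a volume and a glusterfind session
gluster volume create ozone node1:/rhs/thinbrick1/ozone node2:/rhs/thinbrick1/ozone
gluster volume start ozone
glusterfind create sesso1 ozone

# run a change-detection cycle a couple of times; the output file path is arbitrary
glusterfind pre sesso1 ozone /tmp/sesso1-changes.txt
glusterfind post sesso1 ozone

# delete the session and check what is left behind
glusterfind delete sesso1 ozone
ls -l /var/lib/glusterd/glusterfind/sesso1/ozone   # still populated on the affected build
ls -l /var/log/glusterfs/glusterfind/sesso1        # per-session logs also remain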
Tested and verified this on the build glusterfs-3.7.1-4.el6rhs.x86_64. Whenever glusterfind delete is executed for an already-created session, it deletes the session entry from the list. It also deletes the session directory created under $GLUSTERD_WORKDIR, and all its contents, as expected. Pasted below are the logs.

Did observe an unexpected issue where it prompts for a password every time glusterfind delete is run; a separate bug will be filed for that. Moving this bug to verified for Everglades 3.1.

The rest of the regression has been executed; its logs can be found at:
https://polarion.engineering.redhat.com/polarion/#/project/RHG3/testrun?id=glusterfs-3_7_1_3_RHEL6_7_FUSE

[root@dhcp43-191 ~]#
[root@dhcp43-191 ~]# gluster v list
gluster_shared_storage
ozone
[root@dhcp43-191 ~]#
[root@dhcp43-191 ~]#
[root@dhcp43-191 ~]# rpm -qa | grep glusterfs
glusterfs-libs-3.7.1-4.el6rhs.x86_64
glusterfs-api-3.7.1-4.el6rhs.x86_64
glusterfs-client-xlators-3.7.1-4.el6rhs.x86_64
glusterfs-fuse-3.7.1-4.el6rhs.x86_64
glusterfs-cli-3.7.1-4.el6rhs.x86_64
glusterfs-3.7.1-4.el6rhs.x86_64
glusterfs-server-3.7.1-4.el6rhs.x86_64
[root@dhcp43-191 ~]#
[root@dhcp43-191 ~]#
[root@dhcp43-191 ~]# glusterfind create fdjksl fjdksl
Unable to get volume details: Volume fjdksl does not exist
[root@dhcp43-191 ~]# glusterfind create fdjksl whatever
Unable to get volume details: Volume whatever does not exist
[root@dhcp43-191 ~]# glusterfind create fdjksl 4327894523
Unable to get volume details: Volume 4327894523 does not exist
[root@dhcp43-191 ~]# glusterfind create fdjksl %^*fdjkls
Unable to get volume details: Volume %^*fdjkls does not exist
[root@dhcp43-191 ~]# glusterfind create fdjksl ozone
Session fdjksl created with volume ozone
[root@dhcp43-191 ~]# glusterfind list
SESSION                   VOLUME                   SESSION TIME
---------------------------------------------------------------------------
sesso3                    ozone                    2015-06-18 16:27:30
sesso1                    ozone                    2015-06-19 23:20:19
sesso5                    ozone                    2015-06-20 00:18:03
fdjksl                    ozone                    2015-06-22 14:58:09
sesso2                    ozone                    2015-06-19 22:44:40
sesso4                    ozone                    2015-06-18 16:27:38
[root@dhcp43-191 ~]# glusterfind create fdjksl ozone123
Unable to get volume details: Volume ozone123 does not exist
[root@dhcp43-191 ~]# glusterfind create fdjksl ozone
Session fdjksl already created
[root@dhcp43-191 ~]# glusterfind create fds "ozone "
Unable to get volume details: Volume ozone does not exist
[root@dhcp43-191 ~]# glusterfind create fds "ozone"
Session fds created with volume ozone
[root@dhcp43-191 ~]# glusterfind create fds "ozone ozone"
Unable to get volume details: Volume ozone ozone does not exist
[root@dhcp43-191 ~]# glusterfind create 5543 ozone
Session 5543 created with volume ozone
[root@dhcp43-191 ~]# glusterfind create ^%&* ozone
[1] 22820
-bash: anaconda-ks.cfg: command not found
[root@dhcp43-191 ~]# usage: glusterfind create [-h] [--debug] [--force] [--reset-session-time]
                         session volume
glusterfind create: error: too few arguments
[1]+  Exit 2                  glusterfind create ^%
[root@dhcp43-191 ~]#
[root@dhcp43-191 ~]#
[root@dhcp43-191 ~]# glusterfind create ^%#@* ozone
10.70.42.202 - create failed: percent_expand: unknown key %#
10.70.42.147 - create failed: percent_expand: unknown key %#
10.70.42.202 - create failed: percent_expand: unknown key %#
10.70.42.147 - create failed: percent_expand: unknown key %#
10.70.42.30 - create failed: percent_expand: unknown key %#
10.70.42.30 - create failed: percent_expand: unknown key %#
Command create failed in 10.70.42.202:/rhs/thinbrick1/ozone
[root@dhcp43-191 ~]# glusterfind list
SESSION                   VOLUME                   SESSION TIME
---------------------------------------------------------------------------
fds                       ozone                    2015-06-22 14:58:44
sesso3                    ozone                    2015-06-18 16:27:30
sesso1                    ozone                    2015-06-19 23:20:19
^%#@*                     ozone                    Session Corrupted
sesso5                    ozone                    2015-06-20 00:18:03
fdjksl                    ozone                    2015-06-22 14:58:09
sesso2                    ozone                    2015-06-19 22:44:40
5543                      ozone                    2015-06-22 14:59:10
sesso4                    ozone                    2015-06-18 16:27:38
[root@dhcp43-191 ~]#
[root@dhcp43-191 ~]# glusterfind delete 5543 ozone
root.42.147's password:
root.42.30's password:
root.42.202's password:
root.42.147's password:
root.42.30's password:
root.42.147's password:
root.42.147's password:
Session 5543 with volume ozone deleted
[root@dhcp43-191 ~]#
[root@dhcp43-191 ~]#
[root@dhcp43-191 ~]# glusterfind list
SESSION                   VOLUME                   SESSION TIME
---------------------------------------------------------------------------
fds                       ozone                    2015-06-22 14:58:44
sesso3                    ozone                    2015-06-18 16:27:30
sesso1                    ozone                    2015-06-19 23:20:19
^%#@*                     ozone                    Session Corrupted
sesso5                    ozone                    2015-06-20 00:18:03
fdjksl                    ozone                    2015-06-22 14:58:09
sesso2                    ozone                    2015-06-19 22:44:40
sesso4                    ozone                    2015-06-18 16:27:38
[root@dhcp43-191 ~]# glusterfind delete 5543 ozone
Invalid session 5543
[root@dhcp43-191 ~]# glusterfind delete fgdsdfd ozone
Invalid session fgdsdfd
[root@dhcp43-191 ~]# glusterfind delete fdjksl ozone
root.42.147's password:
root.42.30's password:
root.42.147's password:
root.42.30's password:
root.42.30's password:
Session fdjksl with volume ozone deleted
[root@dhcp43-191 ~]#
[root@dhcp43-191 ~]#
[root@dhcp43-191 ~]# glusterfind delete fds ozone
root.42.147's password:
root.42.147's password:
root.42.147's password:
Session fds with volume ozone deleted
[root@dhcp43-191 ~]# glusterfind list
SESSION                   VOLUME                   SESSION TIME
---------------------------------------------------------------------------
sesso3                    ozone                    2015-06-18 16:27:30
sesso1                    ozone                    2015-06-19 23:20:19
^%#@*                     ozone                    Session Corrupted
sesso5                    ozone                    2015-06-20 00:18:03
sesso2                    ozone                    2015-06-19 22:44:40
sesso4                    ozone                    2015-06-18 16:27:38
[root@dhcp43-191 ~]# vi /var/lib/glusterd/glusterfind/
^%#@*/   .keys/   sesso1/  sesso2/  sesso3/  sesso4/  sesso5/
[root@dhcp43-191 ~]# ls -l /var/lib/glusterd/glusterfind/
total 24
drwxr-xr-x. 3 root root 4096 Jun 22 14:59 ^%#@*
drwxr-xr-x. 3 root root 4096 Jun 18 16:25 sesso1
drwxr-xr-x. 3 root root 4096 Jun 18 16:26 sesso2
drwxr-xr-x. 3 root root 4096 Jun 18 16:27 sesso3
drwxr-xr-x. 3 root root 4096 Jun 18 16:27 sesso4
drwxr-xr-x. 3 root root 4096 Jun 20 00:07 sesso5
[root@dhcp43-191 ~]# ls /var/lib/glusterd/glusterfind/\^%#\@\*/
ozone
[root@dhcp43-191 ~]# ls /var/lib/glusterd/glusterfind/\^%#\@\*/ozone
%2Frhs%2Fthinbrick1%2Fozone.status  %2Frhs%2Fthinbrick2%2Fozone.status  ^%#@*_ozone_secret.pem  ^%#@*_ozone_secret.pem.pub
[root@dhcp43-191 ~]# vi /var/log/glusterfs/glusterfind/
^%#@*/   5543/    cli.log  fdjksl/  fds/     sesso1/  sesso2/  sesso3/  sesso4/  sesso5/
[root@dhcp43-191 ~]# vi /var/log/glusterfs/glusterfind/\^%#\@\*/ozone/cli.log
[root@dhcp43-191 ~]# vi /var/log/glusterfs/glusterfind/fds/ozone/cli.log
[root@dhcp43-191 ~]# vi /var/log/glusterfs/glusterfind/fdjksl/ozone/cli.log
[root@dhcp43-191 ~]# vi /var/log/glusterfs/glusterfind/5543/ozone/cli.log
[root@dhcp43-191 ~]# vi /var/log/glusterfs/glusterfind/sesso1/ozone/cli.log
[root@dhcp43-191 ~]#
[root@dhcp43-191 ~]#
[root@dhcp43-191 ~]#
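The logs above check the leftover state only on the node where glusterfind was invoked. To double-check the cleanup across all nodes of the cluster after a delete, a loop along these lines can be used; node1 and node2 are placeholder host names:

for h in node1 node2; do
    echo "== $h =="
    ssh root@$h "ls /var/lib/glusterd/glusterfind/ /var/log/glusterfs/glusterfind/"
done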
Correcting the link to the log location of the functional testing that was executed in and around this bug, and for the rest of the feature:
https://polarion.engineering.redhat.com/polarion/testrun-attachment/RHG3/glusterfs-3_7_1_3_RHEL6_7_FUSE/RHG3-5400_Logs_6.7_3.7.1-3_output_file_validation.odt
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2015-1495.html