Bug 1224880 - [Backup]: Unable to delete session entry from glusterfind list
Summary: [Backup]: Unable to delete session entry from glusterfind list
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: glusterfind
Version: rhgs-3.1
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: RHGS 3.1.2
Assignee: Aravinda VK
QA Contact: Anil Shah
URL:
Whiteboard:
Depends On: 1224064 1225465 1256307
Blocks: 1202842 1216951 1223636 1260783
 
Reported: 2015-05-26 06:10 UTC by Sweta Anandpara
Modified: 2016-09-17 15:19 UTC (History)
CC List: 11 users

Fixed In Version: glusterfs-3.7.5-7
Doc Type: Bug Fix
Doc Text:
Previously, when a volume and the backend data present in the bricks were deleted, the session entry was not deleted from the glusterfind list. With this fix, when a gluster volume is deleted, the respective glusterfind session directories/files for that volume are deleted and a new session can be created with the same name.
Clone Of:
Environment:
Last Closed: 2016-03-01 05:24:32 UTC
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2016:0193 0 normal SHIPPED_LIVE Red Hat Gluster Storage 3.1 update 2 2016-03-01 10:20:36 UTC

Description Sweta Anandpara 2015-05-26 06:10:05 UTC
Description of problem:

Had a volume 'vol1' and a glusterfind session 'svol1'. Deleted the volume and the backend data present in the bricks, but the session entry did not get deleted (bug 1224064). Then, when the glusterfind delete command was given explicitly, it worked for the other such session entries (for other volumes), but failed to delete the entry for 'vol1'.

It threw what looked like an internal parsing error:

[root@dhcp43-140 thinbrick2]# glusterfind list
SESSION                   VOLUME                    SESSION TIME             
---------------------------------------------------------------------------
svol1                     vol1                      2015-05-22 17:02:01      
sess1                     ozone                     Session Corrupted        
[root@dhcp43-140 thinbrick2]# 
[root@dhcp43-140 thinbrick2]# glusterfind delete svol1 vol1
Failed to parse Volume Info: 'NoneType' object has no attribute 'findall'
[root@dhcp43-140 thinbrick2]#
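
For context, the error above is the kind of failure that occurs when XML volume information is parsed for a volume that no longer exists. Below is a minimal Python sketch of that failure mode, assuming the volume info is parsed with ElementTree; this is a simplification for illustration, not the actual glusterfind code.

# Sketch of the failure mode only; not the actual glusterfind code.
# When the XML returned for a deleted volume contains no <volume> element,
# find() yields None and the later .findall() call raises
# AttributeError: 'NoneType' object has no attribute 'findall'.
import xml.etree.ElementTree as etree

def parse_bricks(volume_info_xml):
    tree = etree.fromstring(volume_info_xml)
    volume = tree.find("volInfo/volumes/volume")  # None if the volume is gone
    return [b.text for b in volume.findall("bricks/brick")]  # fails here

# An XML document with an empty <volumes> section reproduces the message:
empty = "<cliOutput><volInfo><volumes/></volInfo></cliOutput>"
try:
    parse_bricks(empty)
except AttributeError as e:
    print("Failed to parse Volume Info: %s" % e)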

Version-Release number of selected component (if applicable):
glusterfs-3.7.0-2.el6rhs.x86_64

How reproducible: 1:1


Steps to Reproduce:
1. Create a volume and a glusterfind session
2. Verify the glusterfind entry is seen in the output of glusterfind list
3. Delete the volume
4. Delete all the backend brick data
5. Verify that glusterfind list still has the session entry related to that volume (bug 1224064)
6. Execute glusterfind delete command

Actual results:

Step 6 fails with the above-mentioned error, as it is not able to find any volume-related information.

Expected results:

Step 6 should successfully delete the glusterfind list entry.
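
One way the delete path could be made robust against this is sketched below in Python, under the assumption that local session metadata lives under /var/lib/glusterd/glusterfind/<session>/<volume> as seen in the transcripts in this bug. This is not the actual fix; the shipped fix instead cleans up the session when the volume itself is deleted.

# Hedged sketch: if volume details cannot be fetched (for example, the
# volume was already deleted), fall back to removing the local session
# metadata so `glusterfind list` no longer shows a stale entry.
# Not the actual fix, which cleans up via a volume-delete hook instead.
import os
import shutil

GLUSTERFIND_DIR = "/var/lib/glusterd/glusterfind"  # layout seen in this bug

def delete_session(session, volname, get_volume_info):
    try:
        volinfo = get_volume_info(volname)  # raises if the volume is gone
    except Exception:
        volinfo = None
    if volinfo is not None:
        # Normal path: volume still exists, so per-brick cleanup on the
        # remote nodes can run first.
        pass
    session_dir = os.path.join(GLUSTERFIND_DIR, session, volname)
    if os.path.isdir(session_dir):
        shutil.rmtree(session_dir, ignore_errors=True)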

Comment 2 Sweta Anandpara 2015-05-26 06:15:49 UTC
Sosreport copied at: http://rhsqe-repo.lab.eng.blr.redhat.com/sosreports/1224880/

Comment 3 Aravinda VK 2015-05-27 07:55:27 UTC
Duplicate of BZ 1224046. Not closing this bug since the symptoms are different but the root cause is the same.

Comment 6 Aravinda VK 2015-06-12 10:20:28 UTC
Downstream Patch https://code.engineering.redhat.com/gerrit/#/c/50541/

Comment 9 Sweta Anandpara 2015-07-04 05:28:55 UTC
Verified this again on build 3.7.1-6.

The dependent bug 1224064's issue was not seen in the first cut, but after a few volume starts and stops I ended up hitting the above-mentioned bug: a stale glusterfind list entry remained even after deleting the volume.

I will still be unable to confidently move the present bug to 'verified' until the issue mentioned in bug 1224064 is fixed completely.

[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# gluster  v list
gv1
slave
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# rpm -qa | grep glusterfs
glusterfs-client-xlators-3.7.1-6.el6rhs.x86_64
glusterfs-server-3.7.1-6.el6rhs.x86_64
glusterfs-3.7.1-6.el6rhs.x86_64
glusterfs-api-3.7.1-6.el6rhs.x86_64
glusterfs-cli-3.7.1-6.el6rhs.x86_64
glusterfs-geo-replication-3.7.1-6.el6rhs.x86_64
glusterfs-libs-3.7.1-6.el6rhs.x86_64
glusterfs-fuse-3.7.1-6.el6rhs.x86_64
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# cd /rhs/thinbrick2
[root@dhcp43-93 thinbrick2]# ls
nash  ozone  pluto  slave  vol1
[root@dhcp43-93 thinbrick2]# rm -rf vol1
[root@dhcp43-93 thinbrick2]# rm -rf pluto
[root@dhcp43-93 thinbrick2]# rm -rf nash
[root@dhcp43-93 thinbrick2]# ls
ozone  slave
[root@dhcp43-93 thinbrick2]# gluster v list
gv1
slave
[root@dhcp43-93 thinbrick2]# rm -rf ozone
[root@dhcp43-93 thinbrick2]# 
[root@dhcp43-93 thinbrick2]# 
[root@dhcp43-93 thinbrick2]# ls -a
.  ..  slave
[root@dhcp43-93 thinbrick2]# 
[root@dhcp43-93 thinbrick2]# 
[root@dhcp43-93 thinbrick2]# cd
[root@dhcp43-93 ~]# 
[root@dhcp43-93 thinbrick2]# ls
slave
[root@dhcp43-93 thinbrick2]# gluster  v create vol1 10.70.43.93:/rhs/thinbrick1/vol1 10.70.43.155:/rhs/thinbrick1/vol1 10.70.43.93:/rhs/thinbrick2/vol1 10.70.43.155:/rhs/thinbrick2/vol1
volume create: vol1: success: please start the volume to access data
[root@dhcp43-93 thinbrick2]# 
[root@dhcp43-93 thinbrick2]# 
[root@dhcp43-93 thinbrick2]# gluster v status
Status of volume: gv1
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.43.93:/rhs/b1                   49154     0          Y       13880
NFS Server on localhost                     2049      0          Y       13881
NFS Server on 10.70.43.155                  2049      0          Y       23445
 
Task Status of Volume gv1
------------------------------------------------------------------------------
There are no active volume tasks
 
Status of volume: slave
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.43.93:/rhs/thinbrick1/slave     49152     0          Y       13892
Brick 10.70.43.155:/rhs/thinbrick1/slave    49152     0          Y       23444
Brick 10.70.43.93:/rhs/thinbrick2/slave     49153     0          Y       13901
Brick 10.70.43.155:/rhs/thinbrick2/slave    49153     0          Y       23455
NFS Server on localhost                     2049      0          Y       13881
Self-heal Daemon on localhost               N/A       N/A        N       N/A  
NFS Server on 10.70.43.155                  2049      0          Y       23445
Self-heal Daemon on 10.70.43.155            N/A       N/A        N       N/A  
 
Task Status of Volume slave
------------------------------------------------------------------------------
There are no active volume tasks
 
Volume vol1 is not started
 
[root@dhcp43-93 thinbrick2]# 
[root@dhcp43-93 thinbrick2]# 
[root@dhcp43-93 thinbrick2]# glutser v info vol1
-bash: glutser: command not found
[root@dhcp43-93 thinbrick2]# gluster v info vol1
 
Volume Name: vol1
Type: Distribute
Volume ID: 8918e433-d903-4bb8-80c2-42a1b5a0244e
Status: Created
Number of Bricks: 4
Transport-type: tcp
Bricks:
Brick1: 10.70.43.93:/rhs/thinbrick1/vol1
Brick2: 10.70.43.155:/rhs/thinbrick1/vol1
Brick3: 10.70.43.93:/rhs/thinbrick2/vol1
Brick4: 10.70.43.155:/rhs/thinbrick2/vol1
Options Reconfigured:
performance.readdir-ahead: on
[root@dhcp43-93 thinbrick2]# 
[root@dhcp43-93 thinbrick2]# 
[root@dhcp43-93 thinbrick2]# glusterfind create sv1 vol1 
Volume vol1 is not online
[root@dhcp43-93 thinbrick2]# 
[root@dhcp43-93 thinbrick2]# 
[root@dhcp43-93 thinbrick2]# 
[root@dhcp43-93 thinbrick2]# glusterfind list
SESSION                   VOLUME                    SESSION TIME             
---------------------------------------------------------------------------
ss2                       slave                     2015-06-27 00:08:39      
ss1                       slave                     2015-06-27 00:25:26      
[root@dhcp43-93 thinbrick2]# 
[root@dhcp43-93 thinbrick2]# 
[root@dhcp43-93 thinbrick2]# glusterfind create fdsfds vol1
Volume vol1 is not online
[root@dhcp43-93 thinbrick2]# 
[root@dhcp43-93 thinbrick2]# 
[root@dhcp43-93 thinbrick2]# ls /var/lib/glusterd/glusterfind/
ss1  ss2
[root@dhcp43-93 thinbrick2]# ls
slave  vol1
[root@dhcp43-93 thinbrick2]# cd
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# ls /var/log/glusterfs/glusterfind/
cli.log  fdsfds  nash  plutos1  ps1  ps2  ps3  sess21  sessn1  sessn2  sessn3  sessn4  sesso1  sesso2  sesso3  sessp1  sessp2  sessv1  sgv1  ss1  ss2  sumne  sv1  vol1s1  vol1s2  vol1s3
[root@dhcp43-93 ~]# cat /var/log/glusterfs/glusterfind/sv1/vol1/cli.log 
[2015-07-04 15:48:32,839] ERROR [utils - 152:fail] - Volume vol1 is not online
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# gluster v start vol1
volume start: vol1: success
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# glusterfind create sv1 vol1
Session sv1 created with volume vol1
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# glusterfind list
SESSION                   VOLUME                    SESSION TIME             
---------------------------------------------------------------------------
sv1                       vol1                      2015-07-04 15:50:02      
ss2                       slave                     2015-06-27 00:08:39      
ss1                       slave                     2015-06-27 00:25:26      
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# ls /var/lib/glusterd/glusterfind/
ss1  ss2  sv1
[root@dhcp43-93 ~]# ls /var/lib/glusterd/glusterfind/sv1
vol1
[root@dhcp43-93 ~]# ls /var/lib/glusterd/glusterfind/sv1/vol1
%2Frhs%2Fthinbrick1%2Fvol1.status  %2Frhs%2Fthinbrick2%2Fvol1.status  status  sv1_vol1_secret.pem  sv1_vol1_secret.pem.pub
[root@dhcp43-93 ~]# gluster v stop vol1
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
volume stop: vol1: success
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# gluster v status vol1
Volume vol1 is not started
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# glusterfind create sv2 vol1
Volume vol1 is not online
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# glusterfind list
SESSION                   VOLUME                    SESSION TIME             
---------------------------------------------------------------------------
sv1                       vol1                      2015-07-04 15:50:02      
ss2                       slave                     2015-06-27 00:08:39      
ss1                       slave                     2015-06-27 00:25:26      
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# glusterfind pre sv1 vol1 /tmp/out.txt
Volume vol1 is not online
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# ls /var/lib/glusterd/glusterfind/
.keys/ ss1/   ss2/   sv1/   
[root@dhcp43-93 ~]# ls /var/lib/glusterd/glusterfind/sv1/vol1/
%2Frhs%2Fthinbrick1%2Fvol1.status  %2Frhs%2Fthinbrick2%2Fvol1.status  status                             sv1_vol1_secret.pem                sv1_vol1_secret.pem.pub
[root@dhcp43-93 ~]# ls /var/lib/glusterd/glusterfind/sv1/vol1/
%2Frhs%2Fthinbrick1%2Fvol1.status  %2Frhs%2Fthinbrick2%2Fvol1.status  status  sv1_vol1_secret.pem  sv1_vol1_secret.pem.pub
[root@dhcp43-93 ~]# glusterfind pre sv2 vol1 /tmp/out.t
Invalid session sv2
[root@dhcp43-93 ~]# glusterfind post sv1 vol1
Pre script is not run
[root@dhcp43-93 ~]# glusterfind delete sv1 vol1^C
[root@dhcp43-93 ~]# glusterfind delete sv2 vol1
Invalid session sv2
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# gluster v start vol1
volume start: vol1: success
[root@dhcp43-93 ~]# glusterfind pre sv2 vol1 /tmp/out.txt
Invalid session sv2
[root@dhcp43-93 ~]# glusterfind pre sv1 vol1 /tmp/out.txt
Generated output file /tmp/out.txt
[root@dhcp43-93 ~]# glusterfind list
SESSION                   VOLUME                    SESSION TIME             
---------------------------------------------------------------------------
sv1                       vol1                      2015-07-04 15:50:02      
ss2                       slave                     2015-06-27 00:08:39      
ss1                       slave                     2015-06-27 00:25:26      
[root@dhcp43-93 ~]# ls /var/lib/glusterd/glusterfind/
.keys/ ss1/   ss2/   sv1/   
[root@dhcp43-93 ~]#
[root@dhcp43-93 ~]# ls /var/lib/glusterd/glusterfind/sv1/vol1/
%2Frhs%2Fthinbrick1%2Fvol1.status      %2Frhs%2Fthinbrick2%2Fvol1.status      status                                 sv1_vol1_secret.pem
%2Frhs%2Fthinbrick1%2Fvol1.status.pre  %2Frhs%2Fthinbrick2%2Fvol1.status.pre  status.pre                             sv1_vol1_secret.pem.pub
[root@dhcp43-93 ~]# ls /var/lib/glusterd/glusterfind/sv1/vol1/
%2Frhs%2Fthinbrick1%2Fvol1.status      %2Frhs%2Fthinbrick2%2Fvol1.status      status      sv1_vol1_secret.pem
%2Frhs%2Fthinbrick1%2Fvol1.status.pre  %2Frhs%2Fthinbrick2%2Fvol1.status.pre  status.pre  sv1_vol1_secret.pem.pub
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# glusterfind pre sv2 vol1 /tmp/out.txt
Invalid session sv2
[root@dhcp43-93 ~]# glusterfind delete sv1 vol1
root.43.155's password: root.43.155's password: 


root.43.155's password: 
root.43.155's password: 
10.70.43.155 - delete failed: Permission denied, please try again.
Permission denied, please try again.
Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).

Command delete failed in 10.70.43.155:/rhs/thinbrick1/vol1
[root@dhcp43-93 ~]# glusterfind lsit
usage: glusterfind [-h] {pre,create,list,post,delete} ...
glusterfind: error: argument mode: invalid choice: 'lsit' (choose from 'pre', 'create', 'list', 'post', 'delete')
[root@dhcp43-93 ~]# glusterfind list
SESSION                   VOLUME                    SESSION TIME             
---------------------------------------------------------------------------
ss2                       slave                     2015-06-27 00:08:39      
ss1                       slave                     2015-06-27 00:25:26      
[root@dhcp43-93 ~]# glusterfind lsit
usage: glusterfind [-h] {pre,create,list,post,delete} ...
glusterfind: error: argument mode: invalid choice: 'lsit' (choose from 'pre', 'create', 'list', 'post', 'delete')
[root@dhcp43-93 ~]# glusterfind list
SESSION                   VOLUME                    SESSION TIME             
---------------------------------------------------------------------------
ss2                       slave                     2015-06-27 00:08:39      
ss1                       slave                     2015-06-27 00:25:26      
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# ls /var/lib/glusterd/glusterfind/
ss1  ss2
[root@dhcp43-93 ~]# ls /var/log/glusterfs/glusterfind/
cli.log  nash/    ps1/     ps3/     sessn1/  sessn3/  sesso1/  sesso3/  sessp2/  sgv1/    ss2/     sv1/     vol1s1/  vol1s3/  
fdsfds/  plutos1/ ps2/     sess21/  sessn2/  sessn4/  sesso2/  sessp1/  sessv1/  ss1/     sumne/   sv2/     vol1s2/  
[root@dhcp43-93 ~]# ls /var/log/glusterfs/glusterfind/sv1/vol1/c
changelog.1c27a488a584181d698698190ce633eae6ab4a90.log  changelog.log                                           
changelog.b85984854053ba4529aeaba8bd2c93408cb68773.log  cli.log                                                 
[root@dhcp43-93 ~]# ls /var/log/glusterfs/glusterfind/sv1/vol1/^C
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# glusterfind create sv1 vol1
glutSession sv1 created with volume vol1
[root@dhcp43-93 ~]# gluster v info vol1
 
Volume Name: vol1
Type: Distribute
Volume ID: 8918e433-d903-4bb8-80c2-42a1b5a0244e
Status: Started
Number of Bricks: 4
Transport-type: tcp
Bricks:
Brick1: 10.70.43.93:/rhs/thinbrick1/vol1
Brick2: 10.70.43.155:/rhs/thinbrick1/vol1
Brick3: 10.70.43.93:/rhs/thinbrick2/vol1
Brick4: 10.70.43.155:/rhs/thinbrick2/vol1
Options Reconfigured:
changelog.capture-del-path: on
changelog.changelog: on
storage.build-pgfid: on
performance.readdir-ahead: on
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# gluster v stop vol1
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
volume stop: vol1: success
[root@dhcp43-93 ~]# glusterfind pre sv1 vol1 /tmp/out.txt
Volume vol1 is not online
[root@dhcp43-93 ~]# glusterfind post sv1 vol1
Pre script is not run
[root@dhcp43-93 ~]# gluster v start vol1
volume start: vol1: success
[root@dhcp43-93 ~]# glusterfind pre sv1 vol1 /tmp/out.txt
Generated output file /tmp/out.txt
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# gluster v stop vol1
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
volume stop: vol1: success
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# gluster v  info vol1
 
Volume Name: vol1
Type: Distribute
Volume ID: 8918e433-d903-4bb8-80c2-42a1b5a0244e
Status: Stopped
Number of Bricks: 4
Transport-type: tcp
Bricks:
Brick1: 10.70.43.93:/rhs/thinbrick1/vol1
Brick2: 10.70.43.155:/rhs/thinbrick1/vol1
Brick3: 10.70.43.93:/rhs/thinbrick2/vol1
Brick4: 10.70.43.155:/rhs/thinbrick2/vol1
Options Reconfigured:
changelog.capture-del-path: on
changelog.changelog: on
storage.build-pgfid: on
performance.readdir-ahead: on
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# glusterfind post sv1 vol1
Session sv1 with volume vol1 updated
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# glusterfind list
SESSION                   VOLUME                    SESSION TIME             
---------------------------------------------------------------------------
sv1                       vol1                      2015-07-04 15:58:10      
ss2                       slave                     2015-06-27 00:08:39      
ss1                       slave                     2015-06-27 00:25:26      
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# ls /var/lib/glusterd/glusterfind/sv1/vol1/
%2Frhs%2Fthinbrick1%2Fvol1.status  %2Frhs%2Fthinbrick2%2Fvol1.status  status                             sv1_vol1_secret.pem                sv1_vol1_secret.pem.pub
[root@dhcp43-93 ~]# ls /var/lib/glusterd/glusterfind/sv1/vol1/^C
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# rpm -qa | grep gluster
glusterfs-client-xlators-3.7.1-6.el6rhs.x86_64
glusterfs-server-3.7.1-6.el6rhs.x86_64
glusterfs-3.7.1-6.el6rhs.x86_64
glusterfs-api-3.7.1-6.el6rhs.x86_64
glusterfs-cli-3.7.1-6.el6rhs.x86_64
glusterfs-geo-replication-3.7.1-6.el6rhs.x86_64
glusterfs-libs-3.7.1-6.el6rhs.x86_64
glusterfs-fuse-3.7.1-6.el6rhs.x86_64
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# glusterfind list
SESSION                   VOLUME                    SESSION TIME             
---------------------------------------------------------------------------
sv1                       vol1                      2015-07-04 15:58:10      
ss2                       slave                     2015-06-27 00:08:39      
ss1                       slave                     2015-06-27 00:25:26      
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# volume stop vol1
-bash: volume: command not found
[root@dhcp43-93 ~]# gluster v stop vol1
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
volume stop: vol1: failed: Volume vol1 is not in the started state
[root@dhcp43-93 ~]#
[root@dhcp43-93 ~]# gluster v info vol1
 
Volume Name: vol1
Type: Distribute
Volume ID: 8918e433-d903-4bb8-80c2-42a1b5a0244e
Status: Stopped
Number of Bricks: 4
Transport-type: tcp
Bricks:
Brick1: 10.70.43.93:/rhs/thinbrick1/vol1
Brick2: 10.70.43.155:/rhs/thinbrick1/vol1
Brick3: 10.70.43.93:/rhs/thinbrick2/vol1
Brick4: 10.70.43.155:/rhs/thinbrick2/vol1
Options Reconfigured:
changelog.capture-del-path: on
changelog.changelog: on
storage.build-pgfid: on
performance.readdir-ahead: on
[root@dhcp43-93 ~]# gluster v delete vol1
Deleting volume will erase all information about the volume. Do you want to continue? (y/n) y
volume delete: vol1: success
[root@dhcp43-93 ~]# glusterfind list
SESSION                   VOLUME                    SESSION TIME             
---------------------------------------------------------------------------
sv1                       vol1                      2015-07-04 15:58:10      
ss2                       slave                     2015-06-27 00:08:39      
ss1                       slave                     2015-06-27 00:25:26      
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# glusterfind list
SESSION                   VOLUME                    SESSION TIME             
---------------------------------------------------------------------------
sv1                       vol1                      2015-07-04 15:58:10      
ss2                       slave                     2015-06-27 00:08:39      
ss1                       slave                     2015-06-27 00:25:26      
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# ls /var/lib/glusterd/glusterfind/
ss1  ss2  sv1
[root@dhcp43-93 ~]# ls /var/lib/glusterd/glusterfind/sv1/vol1/
%2Frhs%2Fthinbrick1%2Fvol1.status  %2Frhs%2Fthinbrick2%2Fvol1.status  status  sv1_vol1_secret.pem  sv1_vol1_secret.pem.pub
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# glusterfind delete sv2 vol1
Invalid session sv2
[root@dhcp43-93 ~]# glusterfind delete sv1 vol1
Unable to get volume details
[root@dhcp43-93 ~]#

Comment 10 Sweta Anandpara 2015-07-04 08:06:43 UTC
Moving it back to assigned. I understand that the present behavior will not show up at all if bug 1224064 is fixed.

Would ideally put this bug in a 'blocked' state rather than 'failed-qa', but that field does not exist.
Please feel free to move this bug to ON_QA when the above-mentioned bug is fixed.

Comment 15 monti lawrence 2015-07-22 20:25:22 UTC
Doc text is edited. Please sign off to be included in Known Issues.

Comment 16 Aravinda VK 2015-07-27 06:08:51 UTC
(In reply to monti lawrence from comment #15)
> Doc text is edited. Please sign off to be included in Known Issues.

Doc text looks good to me.

Comment 19 Aravinda VK 2015-08-18 09:50:23 UTC
Patch posted upstream
http://review.gluster.org/#/c/11298/
http://review.gluster.org/#/c/11699/

Comment 24 Aravinda VK 2015-11-19 04:36:10 UTC
Downstream Patch
https://code.engineering.redhat.com/gerrit/#/c/61779

Comment 25 Anil Shah 2015-12-07 10:06:00 UTC
Once the volume is deleted, the hook script will clean up all the entries for the session and there will not be any entry in the glusterfind session list, so we cannot delete the session with the glusterfind delete command.

Also, as per comment 4, marking this bug verified since bug 1224064 is verified.
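
For illustration only, here is a minimal Python sketch of the hook-style cleanup described above, assuming session metadata lives under /var/lib/glusterd/glusterfind/<session>/<volume> as seen in the transcripts earlier in this bug; the actual delete-post hook shipped with the fix may differ.

# Sketch only, not the shipped hook script. Removes glusterfind session
# data for a deleted volume so that stale entries no longer appear in
# `glusterfind list`.
import os
import shutil
import sys

GLUSTERFIND_DIR = "/var/lib/glusterd/glusterfind"

def cleanup_sessions_for_volume(volname):
    if not os.path.isdir(GLUSTERFIND_DIR):
        return
    for session in os.listdir(GLUSTERFIND_DIR):
        vol_dir = os.path.join(GLUSTERFIND_DIR, session, volname)
        if os.path.isdir(vol_dir):
            # Drop the per-volume session data (status files, pem keys, ...)
            shutil.rmtree(vol_dir, ignore_errors=True)
        session_dir = os.path.join(GLUSTERFIND_DIR, session)
        if os.path.isdir(session_dir) and not os.listdir(session_dir):
            # Session has no volumes left; remove the empty session dir.
            os.rmdir(session_dir)

if __name__ == "__main__":
    # Assumption: gluster delete/post hooks pass --volname=<name> on the
    # command line; parsed loosely here for the sake of the sketch.
    for arg in sys.argv[1:]:
        if arg.startswith("--volname="):
            cleanup_sessions_for_volume(arg.split("=", 1)[1])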

Comment 27 errata-xmlrpc 2016-03-01 05:24:32 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-0193.html

