Bug 1224064 - [Backup]: Glusterfind session entry persists even after volume is deleted
Summary: [Backup]: Glusterfind session entry persists even after volume is deleted
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: glusterfind
Version: rhgs-3.1
Hardware: Unspecified
OS: Unspecified
Priority: low
Severity: medium
Target Milestone: ---
Target Release: RHGS 3.1.2
Assignee: Aravinda VK
QA Contact: Anil Shah
URL:
Whiteboard:
Depends On:
Blocks: 1202842 1216951 1223636 1224880 1260783
 
Reported: 2015-05-22 07:03 UTC by Sweta Anandpara
Modified: 2016-09-17 15:19 UTC (History)

Fixed In Version: glusterfs-3.7.5-7
Doc Type: Bug Fix
Doc Text:
Previously, the glusterfind session entry persisted even after the volume was deleted. With this fix, when a gluster volume is deleted, the respective glusterfind session directories and files for that volume are also deleted.
Clone Of:
: 1225465 (view as bug list)
Environment:
Last Closed: 2016-03-01 05:23:56 UTC
Embargoed:




Links
System: Red Hat Product Errata
ID: RHBA-2016:0193
Private: 0
Priority: normal
Status: SHIPPED_LIVE
Summary: Red Hat Gluster Storage 3.1 update 2
Last Updated: 2016-03-01 10:20:36 UTC

Description Sweta Anandpara 2015-05-22 07:03:48 UTC
Description of problem:
When a volume is created along with a corresponding glusterfind session, glusterfind list displays an entry with the session name and volume name. After the volume is deleted, that session entry should be removed from the glusterfind list output. glusterfind list should always show only the active glusterfind sessions present in that cluster.
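
For illustration, below is a minimal sketch (a hypothetical helper, not part of glusterfind) of how such stale entries could be detected, assuming the on-disk session layout /var/lib/glusterd/glusterfind/<session>/<volume>/ that is visible in the transcripts further down:

#!/usr/bin/env python
# Illustrative sketch: report glusterfind sessions whose volume no longer exists.
# Assumes the layout /var/lib/glusterd/glusterfind/<session>/<volume>/.
import os
import subprocess

GLUSTERFIND_DIR = "/var/lib/glusterd/glusterfind"

def existing_volumes():
    # "gluster volume list" prints one volume name per line
    out = subprocess.check_output(["gluster", "volume", "list"])
    return set(out.decode().split())

def stale_sessions():
    volumes = existing_volumes()
    stale = []
    for session in sorted(os.listdir(GLUSTERFIND_DIR)):
        session_dir = os.path.join(GLUSTERFIND_DIR, session)
        if session.startswith(".") or not os.path.isdir(session_dir):
            continue  # skip .keys and any stray files
        for volume in os.listdir(session_dir):
            if volume not in volumes:
                stale.append((session, volume))
    return stale

if __name__ == "__main__":
    for session, volume in stale_sessions():
        print("stale: session %s, volume %s" % (session, volume))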


Version-Release number of selected component (if applicable):
glusterfs-3.7.0-2.el6rhs.x86_64

How reproducible:
Always


Additional info:


[root@dhcp43-140 ~]# 
[root@dhcp43-140 ~]# gluster v info
 
Volume Name: nash
Type: Distributed-Replicate
Volume ID: ef2333ce-e513-43df-8306-fec77cc479b4
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 10.70.43.140:/rhs/thinbrick1/nash/dd
Brick2: 10.70.42.75:/rhs/thinbrick1/dd
Brick3: 10.70.43.140:/rhs/thinbrick2/nash/dd
Brick4: 10.70.42.75:/rhs/thinbrick2/dd
Options Reconfigured:
performance.readdir-ahead: on
 
Volume Name: vol1
Type: Distributed-Replicate
Volume ID: 44f06391-1635-4897-98c2-848e5ae92640
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 10.70.43.140:/rhs/brick1/dd
Brick2: 10.70.42.75:/rhs/brick1/dd
Brick3: 10.70.43.140:/rhs/brick2/dd
Brick4: 10.70.42.75:/rhs/brick2/dd
Options Reconfigured:
changelog.changelog: on
storage.build-pgfid: on
performance.readdir-ahead: on
[root@dhcp43-140 ~]# 
[root@dhcp43-140 ~]# 
[root@dhcp43-140 ~]# glusterfind list
SESSION                   VOLUME                    SESSION TIME             
---------------------------------------------------------------------------
svol1                     vol1                      2015-05-22 17:02:01      
snash                     nash                      2015-05-22 17:02:58      
sess1                     ozone                     Session Corrupted        
[root@dhcp43-140 ~]# 
[root@dhcp43-140 ~]# 
[root@dhcp43-140 ~]# 
[root@dhcp43-140 ~]# gluster v stop nash
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
volume stop: nash: success
[root@dhcp43-140 ~]# gluster v delete nash
Deleting volume will erase all information about the volume. Do you want to continue? (y/n) y
volume delete: nash: success
[root@dhcp43-140 ~]# 
[root@dhcp43-140 ~]# gluster v info
 
Volume Name: vol1
Type: Distributed-Replicate
Volume ID: 44f06391-1635-4897-98c2-848e5ae92640
Status: Stopped
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 10.70.43.140:/rhs/brick1/dd
Brick2: 10.70.42.75:/rhs/brick1/dd
Brick3: 10.70.43.140:/rhs/brick2/dd
Brick4: 10.70.42.75:/rhs/brick2/dd
Options Reconfigured:
changelog.changelog: on
storage.build-pgfid: on
performance.readdir-ahead: on
[root@dhcp43-140 ~]# 
[root@dhcp43-140 ~]# glusterfind list
SESSION                   VOLUME                    SESSION TIME             
---------------------------------------------------------------------------
svol1                     vol1                      2015-05-22 17:02:01      
snash                     nash                      2015-05-22 17:02:58      
sess1                     ozone                     Session Corrupted        
[root@dhcp43-140 ~]# 
[root@dhcp43-140 ~]# 
[root@dhcp43-140 ~]# rpm -qa | grep gluster
glusterfs-libs-3.7.0-2.el6rhs.x86_64
glusterfs-api-3.7.0-2.el6rhs.x86_64
glusterfs-3.7.0-2.el6rhs.x86_64
glusterfs-fuse-3.7.0-2.el6rhs.x86_64
glusterfs-cli-3.7.0-2.el6rhs.x86_64
glusterfs-geo-replication-3.7.0-2.el6rhs.x86_64
glusterfs-client-xlators-3.7.0-2.el6rhs.x86_64
glusterfs-server-3.7.0-2.el6rhs.x86_64
[root@dhcp43-140 ~]#

Comment 3 Aravinda VK 2015-05-27 13:00:46 UTC
Upstream patch sent to fix this issue. The fix is implemented using gluster hooks (hooks/1/delete/post).

http://review.gluster.org/#/c/10944/
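
As context for the fix, here is a minimal sketch of what a delete/post hook doing this cleanup could look like. It assumes the hook is invoked with --volname=<volume> (as the glusterd log excerpts in comment 14 show) and the session layout /var/lib/glusterd/glusterfind/<session>/<volume>/; it is illustrative only and not the actual S57glusterfind-delete-post.py from the upstream patch:

#!/usr/bin/env python
# Illustrative delete/post hook sketch: remove glusterfind session data for
# the deleted volume. Not the actual S57glusterfind-delete-post.py.
import os
import shutil
import sys
from argparse import ArgumentParser

GLUSTERFIND_DIR = "/var/lib/glusterd/glusterfind"

def main():
    parser = ArgumentParser()
    # glusterd passes the deleted volume's name as --volname=<volume>
    parser.add_argument("--volname", required=True)
    args = parser.parse_args()

    if not os.path.isdir(GLUSTERFIND_DIR):
        return 0

    for session in os.listdir(GLUSTERFIND_DIR):
        session_dir = os.path.join(GLUSTERFIND_DIR, session)
        volume_dir = os.path.join(session_dir, args.volname)
        if os.path.isdir(volume_dir):
            shutil.rmtree(volume_dir)        # drop this volume's session data
            if not os.listdir(session_dir):  # remove the session dir if now empty
                os.rmdir(session_dir)
    return 0

if __name__ == "__main__":
    sys.exit(main())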

Comment 6 Sweta Anandpara 2015-06-17 04:30:29 UTC
Still hitting this issue on the latest build glusterfs-3.7.1-3.el6rhs.x86_64

volume delete is not removing the glusterfind session entry from glusterfind list, nor is it removing the session information in $GLUSTERD_WORKDIR

[root@dhcp43-155 ~]# glusterfind list
SESSION                   VOLUME                    SESSION TIME             
---------------------------------------------------------------------------
sessp1                    pluto                     Session Corrupted        
sessn3                    nash                      Session Corrupted        
sessp2                    pluto                     Session Corrupted        
sessn2                    nash                      Session Corrupted        
sessn1                    nash                      Session Corrupted        
sesso1                    ozone                     Session Corrupted        
[root@dhcp43-155 ~]# cd /var/lib/glusterd/glusterfind/
[root@dhcp43-155 glusterfind]# ls
sessn1  sessn2  sessn3  sesso1  sesso2  sesso3  sessp1  sessp2  sessv1
[root@dhcp43-155 glusterfind]# 
[root@dhcp43-155 glusterfind]# 
[root@dhcp43-155 glusterfind]# cd
[root@dhcp43-155 ~]# 
[root@dhcp43-155 ~]# 
[root@dhcp43-155 ~]# gluster v stop pluto
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
volume stop: pluto: success
[root@dhcp43-155 ~]# gluster  v delete pluto
Deleting volume will erase all information about the volume. Do you want to continue? (y/n) y
volume delete: pluto: success
[root@dhcp43-155 ~]# 
[root@dhcp43-155 ~]# 
[root@dhcp43-155 ~]# gluster v list
nash
[root@dhcp43-155 ~]# gluster v info
 
Volume Name: nash
Type: Distribute
Volume ID: 85a962ea-4cfc-4c15-8712-a7b42566b477
Status: Started
Number of Bricks: 4
Transport-type: tcp
Bricks:
Brick1: 10.70.43.93:/rhs/thinbrick1/nash
Brick2: 10.70.43.93:/rhs/thinbrick2/nash
Brick3: 10.70.43.155:/rhs/thinbrick1/nash
Brick4: 10.70.43.155:/rhs/thinbrick2/nash
Options Reconfigured:
changelog.capture-del-path: on
changelog.changelog: on
storage.build-pgfid: on
performance.readdir-ahead: on
[root@dhcp43-155 ~]# glutserfind list
-bash: glutserfind: command not found
[root@dhcp43-155 ~]# glusterfind list
SESSION                   VOLUME                    SESSION TIME             
---------------------------------------------------------------------------
sessp1                    pluto                     Session Corrupted        
sessn3                    nash                      Session Corrupted        
sessp2                    pluto                     Session Corrupted        
sessn2                    nash                      Session Corrupted        
sessn1                    nash                      Session Corrupted        
sesso1                    ozone                     Session Corrupted        
[root@dhcp43-155 ~]# ls /var/lib/glusterd/glusterfind/
.keys/  sessn1/ sessn2/ sessn3/ sesso1/ sesso2/ sesso3/ sessp1/ sessp2/ sessv1/ 
[root@dhcp43-155 ~]# ls /var/lib/glusterd/glusterfind/sessp1
pluto
[root@dhcp43-155 ~]# ls /var/lib/glusterd/glusterfind/sessp1/pluto
%2Frhs%2Fthinbrick1%2Fpluto.status  %2Frhs%2Fthinbrick1%2Fpluto.status.pre  %2Frhs%2Fthinbrick2%2Fpluto.status  %2Frhs%2Fthinbrick2%2Fpluto.status.pre
[root@dhcp43-155 ~]#

Comment 7 Sweta Anandpara 2015-06-22 04:45:04 UTC
Verified this again on the next build glusterfs-3.7.1-4.el6rhs.x86_64

Volume deletion does not delete the session entry from glusterfind list. Moving this bug back to ASSIGNED for 3.1 Everglades. Pasted below are the steps executed on a newly created volume.

[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# gluster v list
gluster_shared_storage
ozone
[root@dhcp43-191 ~]# gluster v create demo
Usage: volume create <NEW-VOLNAME> [stripe <COUNT>] [replica <COUNT> [arbiter <COUNT>]] [disperse [<COUNT>]] [disperse-data <COUNT>] [redundancy <COUNT>] [transport <tcp|rdma|tcp,rdma>] <NEW-BRICK>?<vg_name>... [force]
[root@dhcp43-191 ~]# gluster v create demo 10.70.43.191:/rhs/thinbrick1/demo 10.70.42.202:/rhs/thinbrick1/demo
volume create: demo: success: please start the volume to access data
[root@dhcp43-191 ~]# gluster v info demo
 
Volume Name: demo
Type: Distribute
Volume ID: 8dd51098-6346-486a-ab7a-0b128fd41420
Status: Created
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: 10.70.43.191:/rhs/thinbrick1/demo
Brick2: 10.70.42.202:/rhs/thinbrick1/demo
Options Reconfigured:
performance.readdir-ahead: on
[root@dhcp43-191 ~]# gluster v start demo
volume start: demo: success
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# gluster v info demo
 
Volume Name: demo
Type: Distribute
Volume ID: 8dd51098-6346-486a-ab7a-0b128fd41420
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: 10.70.43.191:/rhs/thinbrick1/demo
Brick2: 10.70.42.202:/rhs/thinbrick1/demo
Options Reconfigured:
performance.readdir-ahead: on
[root@dhcp43-191 ~]# gluster v status demo
Status of volume: demo
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.43.191:/rhs/thinbrick1/demo     49155     0          Y       23418
Brick 10.70.42.202:/rhs/thinbrick1/demo     49155     0          Y       10843
NFS Server on localhost                     2049      0          Y       23438
NFS Server on 10.70.42.30                   2049      0          Y       481  
NFS Server on 10.70.42.202                  2049      0          Y       10864
NFS Server on 10.70.42.147                  2049      0          Y       7364 
 
Task Status of Volume demo
------------------------------------------------------------------------------
There are no active volume tasks
 
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# glusterfind list
SESSION                   VOLUME                    SESSION TIME             
---------------------------------------------------------------------------
sesso3                    ozone                     2015-06-18 16:27:30      
sesso1                    ozone                     2015-06-19 23:20:19      
sesso5                    ozone                     2015-06-20 00:18:03      
sesso2                    ozone                     2015-06-19 22:44:40      
sesso4                    ozone                     2015-06-18 16:27:38      
[root@dhcp43-191 ~]# glusterfind list --help
usage: glusterfind list [-h] [--session SESSION] [--volume VOLUME] [--debug]

optional arguments:
  -h, --help         show this help message and exit
  --session SESSION  Session Name
  --volume VOLUME    Volume Name
  --debug            Debug
[root@dhcp43-191 ~]# glusterfind list --volume ozone
SESSION                   VOLUME                    SESSION TIME             
---------------------------------------------------------------------------
sesso3                    ozone                     2015-06-18 16:27:30      
sesso1                    ozone                     2015-06-19 23:20:19      
sesso5                    ozone                     2015-06-20 00:18:03      
sesso2                    ozone                     2015-06-19 22:44:40      
sesso4                    ozone                     2015-06-18 16:27:38      
[root@dhcp43-191 ~]# glusterfind list
SESSION                   VOLUME                    SESSION TIME             
---------------------------------------------------------------------------
sesso3                    ozone                     2015-06-18 16:27:30      
sesso1                    ozone                     2015-06-19 23:20:19      
sesso5                    ozone                     2015-06-20 00:18:03      
sesso2                    ozone                     2015-06-19 22:44:40      
sesso4                    ozone                     2015-06-18 16:27:38      
[root@dhcp43-191 ~]# glusterfind create demosess demo
Session demosess created with volume demo
[root@dhcp43-191 ~]# glutserfind list
-bash: glutserfind: command not found
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# glusterfind list
SESSION                   VOLUME                    SESSION TIME             
---------------------------------------------------------------------------
sesso3                    ozone                     2015-06-18 16:27:30      
sesso1                    ozone                     2015-06-19 23:20:19      
sesso5                    ozone                     2015-06-20 00:18:03      
sesso2                    ozone                     2015-06-19 22:44:40      
demosess                  demo                      2015-06-22 15:44:13      
sesso4                    ozone                     2015-06-18 16:27:38      
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# gluster v stop demo
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
volume stop: demo: success
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# gluster v delete demo
Deleting volume will erase all information about the volume. Do you want to continue? (y/n) y
volume delete: demo: success
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# glusterfind list
SESSION                   VOLUME                    SESSION TIME             
---------------------------------------------------------------------------
sesso3                    ozone                     2015-06-18 16:27:30      
sesso1                    ozone                     2015-06-19 23:20:19      
sesso5                    ozone                     2015-06-20 00:18:03      
sesso2                    ozone                     2015-06-19 22:44:40      
demosess                  demo                      2015-06-22 15:44:13      
sesso4                    ozone                     2015-06-18 16:27:38      
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# ls /var/lib/glusterd/glusterfind/demosess/demo/
%2Frhs%2Fthinbrick1%2Fdemo.status  demosess_demo_secret.pem  demosess_demo_secret.pem.pub  status
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# rpm -qa | grep glusterfs
glusterfs-libs-3.7.1-4.el6rhs.x86_64
glusterfs-api-3.7.1-4.el6rhs.x86_64
glusterfs-client-xlators-3.7.1-4.el6rhs.x86_64
glusterfs-fuse-3.7.1-4.el6rhs.x86_64
glusterfs-cli-3.7.1-4.el6rhs.x86_64
glusterfs-3.7.1-4.el6rhs.x86_64
glusterfs-server-3.7.1-4.el6rhs.x86_64
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]#

Comment 8 Sweta Anandpara 2015-06-22 12:23:18 UTC
Fixing this is necessary for the glusterfind functionality to be complete; otherwise the session information displayed to the user is imprecise/untrue.
Proposing it as a blocker.

Comment 9 Aravinda VK 2015-06-22 23:01:43 UTC
Please upload the sosreports. I am looking for glusterd log in /var/log/glusterfs/etc-*.log

Comment 10 Sweta Anandpara 2015-06-23 04:34:39 UTC
Sosreport uploaded at: http://rhsqe-repo.lab.eng.blr.redhat.com/sosreports/1224064/

Comment 14 Sweta Anandpara 2015-06-26 09:34:25 UTC
Would like to update on a puzzling behavior that is seen:

I am able to reproduce this consistently on glusterfs-3.7.1-4.el6rhs.x86_64, but NOT on the very next build glusterfs-3.7.1-5.el6rhs.x86_64.

Unable to reason it out, as there have been no glusterfind patches merged between 3.7.1-4 and 3.7.1-5.

Nevertheless, pasted below are the errors that are seen in the logs:

These errors are seen in both builds, irrespective of whether the glusterfind entry persists after volume delete.

[2015-06-26 13:03:30.310186] I [mem-pool.c:604:mem_pool_destroy] 0-management: size=124 max=1 total=1
[2015-06-26 13:03:40.208452] E [run.c:190:runner_log] (--> /usr/lib64/libglusterfs.so.0(_gf_log_callingfn+0x1e0)[0x7fa6103fdfc0] (--> /usr/lib64/libglusterfs.so.0(runner_log+0x105)[0x7fa61044ed35] (--> /usr/lib64/glusterfs/3.7.1/xlator/mgmt/glusterd.so(glusterd_hooks_run_hooks+0x450)[0x7fa604ea6500] (--> /usr/lib64/glusterfs/3.7.1/xlator/mgmt/glusterd.so(+0xe2852)[0x7fa604ea6852] (--> /lib64/libpthread.so.0(+0x3fbe007a51)[0x7fa60f4e9a51] ))))) 0-management: Failed to execute script: /var/lib/glusterd/hooks/1/delete/post/S57glusterfind-delete-post.py --volname=ozone
[2015-06-26 13:03:40.211673] E [run.c:190:runner_log] (--> /usr/lib64/libglusterfs.so.0(_gf_log_callingfn+0x1e0)[0x7fa6103fdfc0] (--> /usr/lib64/libglusterfs.so.0(runner_log+0x105)[0x7fa61044ed35] (--> /usr/lib64/glusterfs/3.7.1/xlator/mgmt/glusterd.so(glusterd_hooks_run_hooks+0x450)[0x7fa604ea6500] (--> /usr/lib64/glusterfs/3.7.1/xlator/mgmt/glusterd.so(+0xe2852)[0x7fa604ea6852] (--> /lib64/libpthread.so.0(+0x3fbe007a51)[0x7fa60f4e9a51] ))))) 0-management: Failed to execute script: /var/lib/glusterd/hooks/1/delete/post/S57glusterfind-delete-post.pyc --volname=ozone
[2015-06-26 13:03:40.215346] E [run.c:190:runner_log] (--> /usr/lib64/libglusterfs.so.0(_gf_log_callingfn+0x1e0)[0x7fa6103fdfc0] (--> /usr/lib64/libglusterfs.so.0(runner_log+0x105)[0x7fa61044ed35] (--> /usr/lib64/glusterfs/3.7.1/xlator/mgmt/glusterd.so(glusterd_hooks_run_hooks+0x450)[0x7fa604ea6500] (--> /usr/lib64/glusterfs/3.7.1/xlator/mgmt/glusterd.so(+0xe2852)[0x7fa604ea6852] (--> /lib64/libpthread.so.0(+0x3fbe007a51)[0x7fa60f4e9a51] ))))) 0-management: Failed to execute script: /var/lib/glusterd/hooks/1/delete/post/S57glusterfind-delete-post.pyo --volname=ozone
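
For reference, a quick diagnostic sketch (assuming that a missing or non-executable hook script could be one cause of the "Failed to execute script" errors above) to check whether the hook file glusterd is trying to run exists and is executable:

#!/usr/bin/env python
# Diagnostic sketch: check presence and executable bit of the delete/post hook
# referenced in the glusterd errors above.
import os
import stat

HOOK = "/var/lib/glusterd/hooks/1/delete/post/S57glusterfind-delete-post.py"

if not os.path.exists(HOOK):
    print("hook script missing: %s" % HOOK)
else:
    mode = os.stat(HOOK).st_mode
    is_exec = bool(mode & (stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH))
    print("hook present; executable: %s" % is_exec)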


###############       3.7.1-4       #######################

Sosreports copied at:  http://rhsqe-repo.lab.eng.blr.redhat.com/sosreports/1224064/

[root@dhcp42-236 ~]# gluster  v list
testvol
[root@dhcp42-236 ~]# 
[root@dhcp42-236 ~]# 
[root@dhcp42-236 ~]# 
[root@dhcp42-236 ~]# gluster v create ozone replica 2 10.70.42.236:/rhs/thinbrick1/ozone 10.70.43.163:/rhs/thinbrick1/ozone 10.70.42.236:/rhs/thinbrick2/ozone 10.70.43.163:/rhs/thinbrick2/ozone
volume create: ozone: success: please start the volume to access data
[root@dhcp42-236 ~]# 
[root@dhcp42-236 ~]# 
[root@dhcp42-236 ~]# gluster v info ozone
 
Volume Name: ozone
Type: Distributed-Replicate
Volume ID: 4c715c70-e7f7-4235-9cc2-303858558d65
Status: Created
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 10.70.42.236:/rhs/thinbrick1/ozone
Brick2: 10.70.43.163:/rhs/thinbrick1/ozone
Brick3: 10.70.42.236:/rhs/thinbrick2/ozone
Brick4: 10.70.43.163:/rhs/thinbrick2/ozone
Options Reconfigured:
performance.readdir-ahead: on
[root@dhcp42-236 ~]# 
[root@dhcp42-236 ~]# 
[root@dhcp42-236 ~]# gluster v start ozone
volume start: ozone: success
[root@dhcp42-236 ~]# 
[root@dhcp42-236 ~]# 
[root@dhcp42-236 ~]# glutserfind list
-bash: glutserfind: command not found
[root@dhcp42-236 ~]# glusterfind list
No sessions found
[root@dhcp42-236 ~]# 
[root@dhcp42-236 ~]# 
[root@dhcp42-236 ~]# 
[root@dhcp42-236 ~]# glusterfind create
usage: glusterfind create [-h] [--debug] [--force] [--reset-session-time]
                          session volume
glusterfind create: error: too few arguments
[root@dhcp42-236 ~]# glusterfind create sesso1 ozone
Session sesso1 created with volume ozone
[root@dhcp42-236 ~]# 
[root@dhcp42-236 ~]# 
[root@dhcp42-236 ~]# 
[root@dhcp42-236 ~]# glusterfind list
SESSION                   VOLUME                    SESSION TIME             
---------------------------------------------------------------------------
sesso1                    ozone                     2015-06-26 18:31:43      
[root@dhcp42-236 ~]# ls /var/lib/glusterd/glusterfind/
.keys/  sesso1/ 
[root@dhcp42-236 ~]# ls /var/lib/glusterd/glusterfind/sesso1
ozone
[root@dhcp42-236 ~]# ls /var/lib/glusterd/glusterfind/sesso1/ozone
%2Frhs%2Fthinbrick1%2Fozone.status  %2Frhs%2Fthinbrick2%2Fozone.status  sesso1_ozone_secret.pem  sesso1_ozone_secret.pem.pub  status
[root@dhcp42-236 ~]# glusterfind pre sesso1 ozone
usage: glusterfind pre [-h] [--debug] [--full] [--disable-partial]
                       [--output-prefix OUTPUT_PREFIX] [--regenerate-outfile]
                       [-N]
                       session volume outfile
glusterfind pre: error: too few arguments
[root@dhcp42-236 ~]# glusterfind pre sesso1 ozone /tmp/outo1.txt
The authenticity of host '10.70.42.236 (10.70.42.236)' can't be established.
RSA key fingerprint is 4e:3f:ca:6f:0c:f8:fb:3d:79:9a:e0:de:3b:13:7f:69.
Are you sure you want to continue connecting (yes/no)? The authenticity of host '10.70.42.236 (10.70.42.236)' can't be established.
RSA key fingerprint is 4e:3f:ca:6f:0c:f8:fb:3d:79:9a:e0:de:3b:13:7f:69.
Are you sure you want to continue connecting (yes/no)? yes

10.70.42.236 - Copy command failed: Host key verification failed.

Generated output file /tmp/outo1.txt
[root@dhcp42-236 ~]# 
[root@dhcp42-236 ~]# 
[root@dhcp42-236 ~]# cat /tmp/outo1.txt 
MODIFY .trashcan%2F 
NEW test1 
NEW dir1 
[root@dhcp42-236 ~]# 
[root@dhcp42-236 ~]# gluster v info ozone
 
Volume Name: ozone
Type: Distributed-Replicate
Volume ID: 4c715c70-e7f7-4235-9cc2-303858558d65
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 10.70.42.236:/rhs/thinbrick1/ozone
Brick2: 10.70.43.163:/rhs/thinbrick1/ozone
Brick3: 10.70.42.236:/rhs/thinbrick2/ozone
Brick4: 10.70.43.163:/rhs/thinbrick2/ozone
Options Reconfigured:
changelog.capture-del-path: on
changelog.changelog: on
storage.build-pgfid: on
performance.readdir-ahead: on
[root@dhcp42-236 ~]# 
[root@dhcp42-236 ~]# 
[root@dhcp42-236 ~]# 
[root@dhcp42-236 ~]# gluster v stop ozone
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
volume stop: ozone: success
[root@dhcp42-236 ~]# gluster v delete ozone
Deleting volume will erase all information about the volume. Do you want to continue? (y/n) y
volume delete: ozone: success
[root@dhcp42-236 ~]# 
[root@dhcp42-236 ~]# 
[root@dhcp42-236 ~]# 
[root@dhcp42-236 ~]# gluster v list
testvol
[root@dhcp42-236 ~]# 
[root@dhcp42-236 ~]# 
[root@dhcp42-236 ~]# 
[root@dhcp42-236 ~]# glusterfind list
SESSION                   VOLUME                    SESSION TIME             
---------------------------------------------------------------------------
sesso1                    ozone                     2015-06-26 18:31:43      
[root@dhcp42-236 ~]# glusterfind list
SESSION                   VOLUME                    SESSION TIME             
---------------------------------------------------------------------------
sesso1                    ozone                     2015-06-26 18:31:43      
[root@dhcp42-236 ~]# vi /var/log/glusterfs/etc-glusterfs-glusterd.vol.log
[root@dhcp42-236 ~]# ls /var/lib/glusterd/glusterfind/sesso1/ozone/
%2Frhs%2Fthinbrick1%2Fozone.status      %2Frhs%2Fthinbrick2%2Fozone.status      sesso1_ozone_secret.pem      status
%2Frhs%2Fthinbrick1%2Fozone.status.pre  %2Frhs%2Fthinbrick2%2Fozone.status.pre  sesso1_ozone_secret.pem.pub  status.pre
[root@dhcp42-236 ~]#

##################         3.7.1-5    ############################

[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# gluster v create ozone replica 2 10.70.43.191:/rhs/thinbrick1/ozone 10.70.42.202:/rhs/thinbrick1/ozone 10.70.42.30:/rhs/thinbrick1/ozone 10.70.42.147:/rhs/thinbrick1/ozone
volume create: ozone: success: please start the volume to access data
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# gluster v list
cross3
gluster_shared_storage
ozone
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# gluster v info ozone
 
Volume Name: ozone
Type: Distributed-Replicate
Volume ID: a31b424a-1897-4995-9758-6b0cfa97cc43
Status: Created
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 10.70.43.191:/rhs/thinbrick1/ozone
Brick2: 10.70.42.202:/rhs/thinbrick1/ozone
Brick3: 10.70.42.30:/rhs/thinbrick1/ozone
Brick4: 10.70.42.147:/rhs/thinbrick1/ozone
Options Reconfigured:
performance.readdir-ahead: on
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# gluster v start ozone
volume start: ozone: success
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# glusterfind list
SESSION                   VOLUME                    SESSION TIME             
---------------------------------------------------------------------------
cross3s1                  cross3                    2015-06-26 17:44:52      
cross3s3                  cross3                    2015-06-23 17:55:28      
cross3s2                  cross3                    2015-06-25 18:30:24      
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# glusterfind create sesso1 ozone
Session sesso1 created with volume ozone
[root@dhcp43-191 ~]# glusterfind create sesso2 ozone
Session sesso2 created with volume ozone
[root@dhcp43-191 ~]# glusterfind list
SESSION                   VOLUME                    SESSION TIME             
---------------------------------------------------------------------------
sesso1                    ozone                     2015-06-26 18:37:52      
cross3s1                  cross3                    2015-06-26 17:44:52      
cross3s3                  cross3                    2015-06-23 17:55:28      
cross3s2                  cross3                    2015-06-25 18:30:24      
sesso2                    ozone                     2015-06-26 18:37:59      
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# ls /var/lib/glusterd/glusterfind/
cross3s1/ cross3s2/ cross3s3/ .keys/    sesso1/   sesso2/   
[root@dhcp43-191 ~]# ls /var/lib/glusterd/glusterfind/sesso1/ozone/
%2Frhs%2Fthinbrick1%2Fozone.status  sesso1_ozone_secret.pem  sesso1_ozone_secret.pem.pub  status
[root@dhcp43-191 ~]# ls /var/lib/glusterd/glusterfind/
cross3s1/ cross3s2/ cross3s3/ .keys/    sesso1/   sesso2/   
[root@dhcp43-191 ~]# ls /var/lib/glusterd/glusterfind/sesso1/ozone/
%2Frhs%2Fthinbrick1%2Fozone.status  sesso1_ozone_secret.pem             sesso1_ozone_secret.pem.pub         status                              
[root@dhcp43-191 ~]# ls /var/lib/glusterd/glusterfind/sesso1/ozone/
%2Frhs%2Fthinbrick1%2Fozone.status  sesso1_ozone_secret.pem  sesso1_ozone_secret.pem.pub  status
[root@dhcp43-191 ~]# 
root@dhcp43-191 ~]# cd /var/lib/glusterd/glusterfind/sesso1/ozone/
[root@dhcp43-191 ozone]# ls -a
.  ..  %2Frhs%2Fthinbrick1%2Fozone.status  sesso1_ozone_secret.pem  sesso1_ozone_secret.pem.pub  status
[root@dhcp43-191 ozone]# cd ..
[root@dhcp43-191 sesso1]# ls -a
.  ..  ozone
[root@dhcp43-191 sesso1]# cd ..
[root@dhcp43-191 glusterfind]# ls -a
.  ..  cross3s1  cross3s2  cross3s3  .keys  sesso1  sesso2
[root@dhcp43-191 glusterfind]# cd sesso2
[root@dhcp43-191 sesso2]# ls -a
.  ..  ozone
[root@dhcp43-191 sesso2]# cd ozone
[root@dhcp43-191 ozone]# ls -a
.  ..  %2Frhs%2Fthinbrick1%2Fozone.status  sesso2_ozone_secret.pem  sesso2_ozone_secret.pem.pub  status
[root@dhcp43-191 ozone]# 
[root@dhcp43-191 ozone]# 
[root@dhcp43-191 ozone]# glusterfind list
SESSION                   VOLUME                    SESSION TIME             
---------------------------------------------------------------------------
sesso1                    ozone                     2015-06-26 18:37:52      
cross3s1                  cross3                    2015-06-26 17:44:52      
cross3s3                  cross3                    2015-06-23 17:55:28      
cross3s2                  cross3                    2015-06-25 18:30:24      
sesso2                    ozone                     2015-06-26 18:37:59      
[root@dhcp43-191 ozone]# 
[root@dhcp43-191 ozone]# 
[root@dhcp43-191 ozone]# glusterfind pre sesso1 ozone
usage: glusterfind pre [-h] [--debug] [--full] [--disable-partial]
                       [--output-prefix OUTPUT_PREFIX] [--regenerate-outfile]
                       [-N]
                       session volume outfile
glusterfind pre: error: too few arguments
[root@dhcp43-191 ozone]# glusterfind pre sesso1 ozone /tmp/outo1.txt 
Generated output file /tmp/outo1.txt
[root@dhcp43-191 ozone]# 
[root@dhcp43-191 ozone]# 
[root@dhcp43-191 ozone]# 
[root@dhcp43-191 ozone]# glusterfind list
SESSION                   VOLUME                    SESSION TIME             
---------------------------------------------------------------------------
sesso1                    ozone                     2015-06-26 18:37:52      
cross3s1                  cross3                    2015-06-26 17:44:52      
cross3s3                  cross3                    2015-06-23 17:55:28      
cross3s2                  cross3                    2015-06-25 18:30:24      
sesso2                    ozone                     2015-06-26 18:37:59      
[root@dhcp43-191 ozone]# cat /tmp/outo
outo1.txt  outo2.txt  outo3.txt  outo4.txt  outo5.txt  
[root@dhcp43-191 ozone]# cat /tmp/outo1.txt 
NEW test1 
NEW dir1 
NEW dir1%2F%2Fa 
[root@dhcp43-191 ozone]# 
[root@dhcp43-191 ozone]# cd
[root@dhcp43-191 ~]# gluster v stop ozone
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
volume stop: ozone: success
[root@dhcp43-191 ~]# gluster v delete ozone
Deleting volume will erase all information about the volume. Do you want to continue? (y/n) y
volume delete: ozone: success
[root@dhcp43-191 ~]# glusterfind list
SESSION                   VOLUME                    SESSION TIME             
---------------------------------------------------------------------------
cross3s1                  cross3                    2015-06-26 17:44:52      
cross3s3                  cross3                    2015-06-23 17:55:28      
cross3s2                  cross3                    2015-06-25 18:30:24      
[root@dhcp43-191 ~]# 
[root@dhcp43-191 ~]# ls /var/lib/glusterd/glusterfind/
[root@dhcp43-191 ~]#

Comment 16 Sweta Anandpara 2015-07-04 05:23:29 UTC
Hit this again on build 3.7.1-6 while verifying another bug 1224880. 

Did a few volume starts and stops before actually deleting the volume. The glusterfind session that was already created did not get removed after deleting the volume. 

Sosreports are copied at: http://rhsqe-repo.lab.eng.blr.redhat.com/sosreports/1224064/3_7_1_6/

Pasted below are the logs:

[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# gluster  v list
gv1
slave
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# rpm -qa | grep glusterfs
glusterfs-client-xlators-3.7.1-6.el6rhs.x86_64
glusterfs-server-3.7.1-6.el6rhs.x86_64
glusterfs-3.7.1-6.el6rhs.x86_64
glusterfs-api-3.7.1-6.el6rhs.x86_64
glusterfs-cli-3.7.1-6.el6rhs.x86_64
glusterfs-geo-replication-3.7.1-6.el6rhs.x86_64
glusterfs-libs-3.7.1-6.el6rhs.x86_64
glusterfs-fuse-3.7.1-6.el6rhs.x86_64
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# cd /rhs/thinbrick2
[root@dhcp43-93 thinbrick2]# ls
nash  ozone  pluto  slave  vol1
[root@dhcp43-93 thinbrick2]# rm -rf vol1
[root@dhcp43-93 thinbrick2]# rm -rf pluto
[root@dhcp43-93 thinbrick2]# rm -rf nash
[root@dhcp43-93 thinbrick2]# ls
ozone  slave
[root@dhcp43-93 thinbrick2]# gluster v list
gv1
slave
[root@dhcp43-93 thinbrick2]# rm -rf ozone
[root@dhcp43-93 thinbrick2]# 
[root@dhcp43-93 thinbrick2]# 
[root@dhcp43-93 thinbrick2]# ls -a
.  ..  slave
[root@dhcp43-93 thinbrick2]# 
[root@dhcp43-93 thinbrick2]# 
[root@dhcp43-93 thinbrick2]# cd
[root@dhcp43-93 ~]# 
[root@dhcp43-93 thinbrick2]# ls
slave
[root@dhcp43-93 thinbrick2]# gluster  v create vol1 10.70.43.93:/rhs/thinbrick1/vol1 10.70.43.155:/rhs/thinbrick1/vol1 10.70.43.93:/rhs/thinbrick2/vol1 10.70.43.155:/rhs/thinbrick2/vol1
volume create: vol1: success: please start the volume to access data
[root@dhcp43-93 thinbrick2]# 
[root@dhcp43-93 thinbrick2]# 
[root@dhcp43-93 thinbrick2]# 
[root@dhcp43-93 thinbrick2]# gluster v status
Status of volume: gv1
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.43.93:/rhs/b1                   49154     0          Y       13880
NFS Server on localhost                     2049      0          Y       13881
NFS Server on 10.70.43.155                  2049      0          Y       23445
 
Task Status of Volume gv1
------------------------------------------------------------------------------
There are no active volume tasks
 
Status of volume: slave
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.43.93:/rhs/thinbrick1/slave     49152     0          Y       13892
Brick 10.70.43.155:/rhs/thinbrick1/slave    49152     0          Y       23444
Brick 10.70.43.93:/rhs/thinbrick2/slave     49153     0          Y       13901
Brick 10.70.43.155:/rhs/thinbrick2/slave    49153     0          Y       23455
NFS Server on localhost                     2049      0          Y       13881
Self-heal Daemon on localhost               N/A       N/A        N       N/A  
NFS Server on 10.70.43.155                  2049      0          Y       23445
Self-heal Daemon on 10.70.43.155            N/A       N/A        N       N/A  
 
Task Status of Volume slave
------------------------------------------------------------------------------
There are no active volume tasks
 
Volume vol1 is not started
 
[root@dhcp43-93 thinbrick2]# 
[root@dhcp43-93 thinbrick2]# 
[root@dhcp43-93 thinbrick2]# glutser v info vol1
-bash: glutser: command not found
[root@dhcp43-93 thinbrick2]# gluster v info vol1
 
Volume Name: vol1
Type: Distribute
Volume ID: 8918e433-d903-4bb8-80c2-42a1b5a0244e
Status: Created
Number of Bricks: 4
Transport-type: tcp
Bricks:
Brick1: 10.70.43.93:/rhs/thinbrick1/vol1
Brick2: 10.70.43.155:/rhs/thinbrick1/vol1
Brick3: 10.70.43.93:/rhs/thinbrick2/vol1
Brick4: 10.70.43.155:/rhs/thinbrick2/vol1
Options Reconfigured:
performance.readdir-ahead: on
[root@dhcp43-93 thinbrick2]# 
[root@dhcp43-93 thinbrick2]# 
[root@dhcp43-93 thinbrick2]# glusterfind create sv1 vol1 
Volume vol1 is not online
[root@dhcp43-93 thinbrick2]# 
[root@dhcp43-93 thinbrick2]# 
[root@dhcp43-93 thinbrick2]# glusterfind list
SESSION                   VOLUME                    SESSION TIME             
---------------------------------------------------------------------------
ss2                       slave                     2015-06-27 00:08:39      
ss1                       slave                     2015-06-27 00:25:26      
[root@dhcp43-93 thinbrick2]# 
[root@dhcp43-93 thinbrick2]# 
[root@dhcp43-93 thinbrick2]# glusterfind create fdsfds vol1
Volume vol1 is not online
[root@dhcp43-93 thinbrick2]# 
[root@dhcp43-93 thinbrick2]# 
[root@dhcp43-93 thinbrick2]# ls /var/lib/glusterd/glusterfind/
ss1  ss2
[root@dhcp43-93 thinbrick2]# ls
slave  vol1
[root@dhcp43-93 thinbrick2]# cd
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# ls /var/log/glusterfs/glusterfind/
cli.log  fdsfds  nash  plutos1  ps1  ps2  ps3  sess21  sessn1  sessn2  sessn3  sessn4  sesso1  sesso2  sesso3  sessp1  sessp2  sessv1  sgv1  ss1  ss2  sumne  sv1  vol1s1  vol1s2  vol1s3
[root@dhcp43-93 ~]# cat /var/log/glusterfs/glusterfind/sv1/vol1/cli.log 
[2015-07-04 15:48:32,839] ERROR [utils - 152:fail] - Volume vol1 is not online
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# gluster v start vol1
volume start: vol1: success
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# glusterfind create sv1 vol1
Session sv1 created with volume vol1
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# glusterfind list
SESSION                   VOLUME                    SESSION TIME             
---------------------------------------------------------------------------
sv1                       vol1                      2015-07-04 15:50:02      
ss2                       slave                     2015-06-27 00:08:39      
ss1                       slave                     2015-06-27 00:25:26      
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# ls /var/lib/glusterd/glusterfind/
ss1  ss2  sv1
[root@dhcp43-93 ~]# ls /var/lib/glusterd/glusterfind/sv1
vol1
[root@dhcp43-93 ~]# ls /var/lib/glusterd/glusterfind/sv1/vol1
%2Frhs%2Fthinbrick1%2Fvol1.status  %2Frhs%2Fthinbrick2%2Fvol1.status  status  sv1_vol1_secret.pem  sv1_vol1_secret.pem.pub
[root@dhcp43-93 ~]# gluster v stop vol1
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
volume stop: vol1: success
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# gluster v status vol1
Volume vol1 is not started
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# glusterfind create sv2 vol1
Volume vol1 is not online
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# glusterfind list
SESSION                   VOLUME                    SESSION TIME             
---------------------------------------------------------------------------
sv1                       vol1                      2015-07-04 15:50:02      
ss2                       slave                     2015-06-27 00:08:39      
ss1                       slave                     2015-06-27 00:25:26      
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# glusterfind pre sv1 vol1 /tmp/out.txt
Volume vol1 is not online
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# ls /var/lib/glusterd/glusterfind/
.keys/ ss1/   ss2/   sv1/   
[root@dhcp43-93 ~]# ls /var/lib/glusterd/glusterfind/sv1/vol1/
%2Frhs%2Fthinbrick1%2Fvol1.status  %2Frhs%2Fthinbrick2%2Fvol1.status  status                             sv1_vol1_secret.pem                sv1_vol1_secret.pem.pub
[root@dhcp43-93 ~]# ls /var/lib/glusterd/glusterfind/sv1/vol1/
%2Frhs%2Fthinbrick1%2Fvol1.status  %2Frhs%2Fthinbrick2%2Fvol1.status  status  sv1_vol1_secret.pem  sv1_vol1_secret.pem.pub
[root@dhcp43-93 ~]# glusterfind pre sv2 vol1 /tmp/out.t
Invalid session sv2
[root@dhcp43-93 ~]# glusterfind post sv1 vol1
Pre script is not run
[root@dhcp43-93 ~]# glusterfind delete sv1 vol1^C
[root@dhcp43-93 ~]# glusterfind delete sv2 vol1
Invalid session sv2
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# gluster v start vol1
volume start: vol1: success
[root@dhcp43-93 ~]# glusterfind pre sv2 vol1 /tmp/out.txt
Invalid session sv2
[root@dhcp43-93 ~]# glusterfind pre sv1 vol1 /tmp/out.txt
Generated output file /tmp/out.txt
[root@dhcp43-93 ~]# glusterfind list
SESSION                   VOLUME                    SESSION TIME             
---------------------------------------------------------------------------
sv1                       vol1                      2015-07-04 15:50:02      
ss2                       slave                     2015-06-27 00:08:39      
ss1                       slave                     2015-06-27 00:25:26      
[root@dhcp43-93 ~]# ls /var/lib/glusterd/glusterfind/
.keys/ ss1/   ss2/   sv1/   
[root@dhcp43-93 ~]# ls /var/lib/glusterd/glusterfind/sv1/vol1/
%2Frhs%2Fthinbrick1%2Fvol1.status      %2Frhs%2Fthinbrick2%2Fvol1.status      status                                 sv1_vol1_secret.pem
%2Frhs%2Fthinbrick1%2Fvol1.status.pre  %2Frhs%2Fthinbrick2%2Fvol1.status.pre  status.pre                             sv1_vol1_secret.pem.pub
[root@dhcp43-93 ~]# ls /var/lib/glusterd/glusterfind/sv1/vol1/
%2Frhs%2Fthinbrick1%2Fvol1.status      %2Frhs%2Fthinbrick2%2Fvol1.status      status      sv1_vol1_secret.pem
%2Frhs%2Fthinbrick1%2Fvol1.status.pre  %2Frhs%2Fthinbrick2%2Fvol1.status.pre  status.pre  sv1_vol1_secret.pem.pub
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# glusterfind pre sv2 vol1 /tmp/out.txt
Invalid session sv2
[root@dhcp43-93 ~]# glusterfind delete sv1 vol1
root.43.155's password: root.43.155's password: 


root.43.155's password: 
root.43.155's password: 
10.70.43.155 - delete failed: Permission denied, please try again.
Permission denied, please try again.
Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).

Command delete failed in 10.70.43.155:/rhs/thinbrick1/vol1
[root@dhcp43-93 ~]# glusterfind lsit
usage: glusterfind [-h] {pre,create,list,post,delete} ...
glusterfind: error: argument mode: invalid choice: 'lsit' (choose from 'pre', 'create', 'list', 'post', 'delete')
[root@dhcp43-93 ~]# glusterfind list
SESSION                   VOLUME                    SESSION TIME             
---------------------------------------------------------------------------
ss2                       slave                     2015-06-27 00:08:39      
ss1                       slave                     2015-06-27 00:25:26      
[root@dhcp43-93 ~]# glusterfind lsit
usage: glusterfind [-h] {pre,create,list,post,delete} ...
glusterfind: error: argument mode: invalid choice: 'lsit' (choose from 'pre', 'create', 'list', 'post', 'delete')
[root@dhcp43-93 ~]# glusterfind list
SESSION                   VOLUME                    SESSION TIME             
---------------------------------------------------------------------------
ss2                       slave                     2015-06-27 00:08:39      
ss1                       slave                     2015-06-27 00:25:26      
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# ls /var/lib/glusterd/glusterfind/
ss1  ss2
[root@dhcp43-93 ~]# ls /var/log/glusterfs/glusterfind/
cli.log  nash/    ps1/     ps3/     sessn1/  sessn3/  sesso1/  sesso3/  sessp2/  sgv1/    ss2/     sv1/     vol1s1/  vol1s3/  
fdsfds/  plutos1/ ps2/     sess21/  sessn2/  sessn4/  sesso2/  sessp1/  sessv1/  ss1/     sumne/   sv2/     vol1s2/  
[root@dhcp43-93 ~]# ls /var/log/glusterfs/glusterfind/sv1/vol1/c
changelog.1c27a488a584181d698698190ce633eae6ab4a90.log  changelog.log                                           
changelog.b85984854053ba4529aeaba8bd2c93408cb68773.log  cli.log                                                 
[root@dhcp43-93 ~]# ls /var/log/glusterfs/glusterfind/sv1/vol1/^C
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# glusterfind create sv1 vol1
glutSession sv1 created with volume vol1
[root@dhcp43-93 ~]# gluster v info vol1
 
Volume Name: vol1
Type: Distribute
Volume ID: 8918e433-d903-4bb8-80c2-42a1b5a0244e
Status: Started
Number of Bricks: 4
Transport-type: tcp
Bricks:
Brick1: 10.70.43.93:/rhs/thinbrick1/vol1
Brick2: 10.70.43.155:/rhs/thinbrick1/vol1
Brick3: 10.70.43.93:/rhs/thinbrick2/vol1
Brick4: 10.70.43.155:/rhs/thinbrick2/vol1
Options Reconfigured:
changelog.capture-del-path: on
changelog.changelog: on
storage.build-pgfid: on
performance.readdir-ahead: on
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# gluster v stop vol1
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
volume stop: vol1: success
[root@dhcp43-93 ~]# glusterfind pre sv1 vol1 /tmp/out.txt
Volume vol1 is not online
[root@dhcp43-93 ~]# glusterfind post sv1 vol1
Pre script is not run
[root@dhcp43-93 ~]# gluster v start vol1
volume start: vol1: success
[root@dhcp43-93 ~]# glusterfind pre sv1 vol1 /tmp/out.txt
Generated output file /tmp/out.txt
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# gluster v stop vol1
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
volume stop: vol1: success
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# gluster v  info vol1
 
Volume Name: vol1
Type: Distribute
Volume ID: 8918e433-d903-4bb8-80c2-42a1b5a0244e
Status: Stopped
Number of Bricks: 4
Transport-type: tcp
Bricks:
Brick1: 10.70.43.93:/rhs/thinbrick1/vol1
Brick2: 10.70.43.155:/rhs/thinbrick1/vol1
Brick3: 10.70.43.93:/rhs/thinbrick2/vol1
Brick4: 10.70.43.155:/rhs/thinbrick2/vol1
Options Reconfigured:
changelog.capture-del-path: on
changelog.changelog: on
storage.build-pgfid: on
performance.readdir-ahead: on
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# glusterfind post sv1 vol1
Session sv1 with volume vol1 updated
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# glusterfind list
SESSION                   VOLUME                    SESSION TIME             
---------------------------------------------------------------------------
sv1                       vol1                      2015-07-04 15:58:10      
ss2                       slave                     2015-06-27 00:08:39      
ss1                       slave                     2015-06-27 00:25:26      
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# ls /var/lib/glusterd/glusterfind/sv1/vol1/
%2Frhs%2Fthinbrick1%2Fvol1.status  %2Frhs%2Fthinbrick2%2Fvol1.status  status                             sv1_vol1_secret.pem                sv1_vol1_secret.pem.pub
[root@dhcp43-93 ~]# ls /var/lib/glusterd/glusterfind/sv1/vol1/^C
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# rpm -qa | grep gluster
glusterfs-client-xlators-3.7.1-6.el6rhs.x86_64
glusterfs-server-3.7.1-6.el6rhs.x86_64
glusterfs-3.7.1-6.el6rhs.x86_64
glusterfs-api-3.7.1-6.el6rhs.x86_64
glusterfs-cli-3.7.1-6.el6rhs.x86_64
glusterfs-geo-replication-3.7.1-6.el6rhs.x86_64
glusterfs-libs-3.7.1-6.el6rhs.x86_64
glusterfs-fuse-3.7.1-6.el6rhs.x86_64
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# glusterfind list
SESSION                   VOLUME                    SESSION TIME             
---------------------------------------------------------------------------
sv1                       vol1                      2015-07-04 15:58:10      
ss2                       slave                     2015-06-27 00:08:39      
ss1                       slave                     2015-06-27 00:25:26      
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# volume stop vol1
-bash: volume: command not found
[root@dhcp43-93 ~]# gluster v stop vol1
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
volume stop: vol1: failed: Volume vol1 is not in the started state
[root@dhcp43-93 ~]# gluster v info vol1
 
Volume Name: vol1
Type: Distribute
Volume ID: 8918e433-d903-4bb8-80c2-42a1b5a0244e
Status: Stopped
Number of Bricks: 4
Transport-type: tcp
Bricks:
Brick1: 10.70.43.93:/rhs/thinbrick1/vol1
Brick2: 10.70.43.155:/rhs/thinbrick1/vol1
Brick3: 10.70.43.93:/rhs/thinbrick2/vol1
Brick4: 10.70.43.155:/rhs/thinbrick2/vol1
Options Reconfigured:
changelog.capture-del-path: on
changelog.changelog: on
storage.build-pgfid: on
performance.readdir-ahead: on
[root@dhcp43-93 ~]# gluster v delete vol1
Deleting volume will erase all information about the volume. Do you want to continue? (y/n) y
volume delete: vol1: success
[root@dhcp43-93 ~]# glusterfind list
SESSION                   VOLUME                    SESSION TIME             
---------------------------------------------------------------------------
sv1                       vol1                      2015-07-04 15:58:10      
ss2                       slave                     2015-06-27 00:08:39      
ss1                       slave                     2015-06-27 00:25:26      
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# glusterfind list
SESSION                   VOLUME                    SESSION TIME             
---------------------------------------------------------------------------
sv1                       vol1                      2015-07-04 15:58:10      
ss2                       slave                     2015-06-27 00:08:39      
ss1                       slave                     2015-06-27 00:25:26      
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# ls /var/lib/glusterd/glusterfind/
ss1  ss2  sv1
[root@dhcp43-93 ~]# ls /var/lib/glusterd/glusterfind/sv1/vol1/
%2Frhs%2Fthinbrick1%2Fvol1.status  %2Frhs%2Fthinbrick2%2Fvol1.status  status  sv1_vol1_secret.pem  sv1_vol1_secret.pem.pub
[root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# 
root@dhcp43-93 ~]# 
[root@dhcp43-93 ~]# glusterfind delete sv2 vol1
Invalid session sv2
[root@dhcp43-93 ~]# glusterfind delete sv1 vol1
Unable to get volume details
[root@dhcp43-93 ~]#

Comment 19 monti lawrence 2015-07-22 20:18:59 UTC
Doc text is edited. Please sign off to be included in Known Issues.

Comment 23 Aravinda VK 2015-08-18 09:48:40 UTC
Patch posted upstream
http://review.gluster.org/#/c/11298/

Comment 28 Aravinda VK 2015-11-18 10:57:28 UTC
This patch is required in downstream
http://review.gluster.org/#/c/11298

Comment 29 Saravanakumar 2015-11-20 07:06:34 UTC
(In reply to Aravinda VK from comment #28)
> This patch is required in downstream
> http://review.gluster.org/#/c/11298

Please ignore the previous comment; this specific patch is already in downstream. Please refer to the commit below:
------------------------
commit 55b44094436bc8630b6c3ff2d232e6551d40630c
Author: Niels de Vos <ndevos>
Date:   Thu Jun 18 00:21:59 2015 +0200

    rpm: include required directory for glusterfind
------------------------

Comment 30 Anil Shah 2015-12-04 07:20:20 UTC
Able to reproduce this bug on build glusterfs-3.7.5-8.el7rhgs.x86_64.
Moving bug to assigned state.

[root@rhs001 ~]# gluster v info
No volumes present
[root@rhs001 ~]# glusterfind list
SESSION                   VOLUME                    SESSION TIME             
---------------------------------------------------------------------------
sess_vol0                 vol0                      2015-12-03 23:50:30

Comment 31 Aravinda VK 2015-12-04 10:47:03 UTC
(In reply to Anil Shah from comment #30)
> Able to reproduce this bug on build glusterfs-3.7.5-8.el7rhgs.x86_64.
> Moving bug to assigned state.
> 
> [root@rhs001 ~]# gluster v info
> No volumes present
> [root@rhs001 ~]# glusterfind list
> SESSION                   VOLUME                    SESSION TIME             
> ---------------------------------------------------------------------------
> sess_vol0                 vol0                      2015-12-03 23:50:30

Please post the complete steps (glusterfind session creation, volume delete, etc.).
Also attach the glusterd log from /var/log/glusterfs/etc-*.log from all volume nodes.

Comment 33 Anil Shah 2015-12-07 10:02:50 UTC
[root@rhs001 yum.repos.d]# glusterfind list
SESSION                   VOLUME                    SESSION TIME             
---------------------------------------------------------------------------
sess_new1                 newvol                    2015-12-07 19:34:06      
[root@rhs001 yum.repos.d]# gluster v stop newvol
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
volume stop: newvol: success
[root@rhs001 yum.repos.d]# gluster v delete newvol
Deleting volume will erase all information about the volume. Do you want to continue? (y/n) y
volume delete: newvol: success
[root@rhs001 yum.repos.d]# gluster v info
No volumes present
[root@rhs001 yum.repos.d]# glusterfind list
No sessions found

Bug verified on build glusterfs-3.7.5-9.el7rhgs.x86_64

Comment 35 errata-xmlrpc 2016-03-01 05:23:56 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-0193.html

