Bug 1225551 - [Backup]: Glusterfind session entry persists even after volume is deleted
Summary: [Backup]: Glusterfind session entry persists even after volume is deleted
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: glusterfind
Version: 3.7.0
Hardware: Unspecified
OS: Unspecified
Priority: low
Severity: medium
Target Milestone: ---
Assignee: Aravinda VK
QA Contact: bugs@gluster.org
URL:
Whiteboard:
Depends On: 1225465 1256307
Blocks: glusterfs-3.7.2
 
Reported: 2015-05-27 16:25 UTC by Aravinda VK
Modified: 2015-08-24 10:47 UTC
CC: 4 users

Fixed In Version: glusterfs-3.7.2
Clone Of: 1225465
Environment:
Last Closed: 2015-06-20 09:48:37 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Aravinda VK 2015-05-27 16:25:23 UTC
+++ This bug was initially created as a clone of Bug #1225465 +++

+++ This bug was initially created as a clone of Bug #1224064 +++

Description of problem:
When a volume is created along with a corresponding glusterfind session, `glusterfind list` displays an entry with the session name and volume name. After the volume is deleted, that session entry should be removed from the output of `glusterfind list`. `glusterfind list` should only ever show the active glusterfind sessions present in the cluster.


Version-Release number of selected component (if applicable):


How reproducible:
Always


Additional info:


[root@dhcp43-140 ~]# 
[root@dhcp43-140 ~]# gluster v info
 
Volume Name: nash
Type: Distributed-Replicate
Volume ID: ef2333ce-e513-43df-8306-fec77cc479b4
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 10.70.43.140:/rhs/thinbrick1/nash/dd
Brick2: 10.70.42.75:/rhs/thinbrick1/dd
Brick3: 10.70.43.140:/rhs/thinbrick2/nash/dd
Brick4: 10.70.42.75:/rhs/thinbrick2/dd
Options Reconfigured:
performance.readdir-ahead: on
 
Volume Name: vol1
Type: Distributed-Replicate
Volume ID: 44f06391-1635-4897-98c2-848e5ae92640
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 10.70.43.140:/rhs/brick1/dd
Brick2: 10.70.42.75:/rhs/brick1/dd
Brick3: 10.70.43.140:/rhs/brick2/dd
Brick4: 10.70.42.75:/rhs/brick2/dd
Options Reconfigured:
changelog.changelog: on
storage.build-pgfid: on
performance.readdir-ahead: on
[root@dhcp43-140 ~]# 
[root@dhcp43-140 ~]# 
[root@dhcp43-140 ~]# glusterfind list
SESSION                   VOLUME                    SESSION TIME             
---------------------------------------------------------------------------
svol1                     vol1                      2015-05-22 17:02:01      
snash                     nash                      2015-05-22 17:02:58      
sess1                     ozone                     Session Corrupted        
[root@dhcp43-140 ~]# 
[root@dhcp43-140 ~]# 
[root@dhcp43-140 ~]# 
[root@dhcp43-140 ~]# gluster v stop nash
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
volume stop: nash: success
[root@dhcp43-140 ~]# gluster v delete nash
Deleting volume will erase all information about the volume. Do you want to continue? (y/n) y
volume delete: nash: success
[root@dhcp43-140 ~]# 
[root@dhcp43-140 ~]# gluster v info
 
Volume Name: vol1
Type: Distributed-Replicate
Volume ID: 44f06391-1635-4897-98c2-848e5ae92640
Status: Stopped
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 10.70.43.140:/rhs/brick1/dd
Brick2: 10.70.42.75:/rhs/brick1/dd
Brick3: 10.70.43.140:/rhs/brick2/dd
Brick4: 10.70.42.75:/rhs/brick2/dd
Options Reconfigured:
changelog.changelog: on
storage.build-pgfid: on
performance.readdir-ahead: on
[root@dhcp43-140 ~]# 
[root@dhcp43-140 ~]# glusterfind list
SESSION                   VOLUME                    SESSION TIME             
---------------------------------------------------------------------------
svol1                     vol1                      2015-05-22 17:02:01      
snash                     nash                      2015-05-22 17:02:58      
sess1                     ozone                     Session Corrupted        
[root@dhcp43-140 ~]# 
[root@dhcp43-140 ~]# 

[root@dhcp43-140 ~]#


--- Additional comment from Aravinda VK on 2015-05-27 04:02:07 EDT ---

glusterfind is an independent tool which does not know when a volume is deleted. The `glusterfind delete` command must be run before the volume is deleted.
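The workaround described above can be sketched as a short script. The session and volume names (snash, nash) are taken from the transcript, and the `run` wrapper only echoes each command, so this is a dry run rather than commands executed against a live cluster:

```shell
# Workaround: delete the glusterfind session BEFORE deleting the volume.
# Dry run: "run" only prints each command; change `echo "$@"` to `"$@"` to
# execute on a real cluster. Names are taken from the transcript above.
run() { echo "$@"; }

session="snash"
volume="nash"

run glusterfind delete "$session" "$volume"   # removes the session metadata
run gluster volume stop "$volume"
run gluster volume delete "$volume"
run glusterfind list                          # session should no longer appear
```

Run in this order, `glusterfind list` afterwards would no longer show the stale session for the deleted volume.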

--- Additional comment from Anand Avati on 2015-05-27 08:59:10 EDT ---

REVIEW: http://review.gluster.org/10944 (tools/glusterfind: Cleanup glusterfind dir after a volume delete) posted (#1) for review on master by Aravinda VK (avishwan)
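The patch summary suggests the fix prunes each session's per-volume state from the glusterfind working directory when the volume is deleted. A minimal sketch of that pruning follows; the working-directory path and the `<session>/<volume>` layout are assumptions based on the patch summary, not the actual implementation, and the sketch runs against a temporary mock tree so it is safe to execute anywhere:

```shell
# Sketch (assumption, not the actual patch): glusterfind keeps state under
# <workdir>/<session>/<volume>; on volume delete, prune matching subdirs.
workdir=$(mktemp -d)        # mock stand-in for the glusterfind working dir
mkdir -p "$workdir/snash/nash" "$workdir/svol1/vol1"

volname="nash"              # the volume being deleted
for session in "$workdir"/*/; do
    if [ -d "$session$volname" ]; then
        rm -rf "$session$volname"             # drop state for the deleted volume
        rmdir "$session" 2>/dev/null || true  # remove session dir if now empty
    fi
done

ls "$workdir"               # only svol1 remains in the mock tree
```

With state pruned this way, a stale session can no longer appear in `glusterfind list` after its volume is gone.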

Comment 1 Niels de Vos 2015-06-02 08:20:21 UTC
The required changes to fix this bug have not made it into glusterfs-3.7.1. This bug is now getting tracked for glusterfs-3.7.2.

Comment 2 Anand Avati 2015-06-11 15:49:49 UTC
REVIEW: http://review.gluster.org/11186 (tools/glusterfind: Cleanup glusterfind dir after a volume delete) posted (#2) for review on release-3.7 by Aravinda VK (avishwan)

Comment 3 Niels de Vos 2015-06-20 09:48:37 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.7.2, please reopen this bug report.

glusterfs-3.7.2 has been announced on the Gluster Packaging mailing list [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://www.gluster.org/pipermail/packaging/2015-June/000006.html
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user

