+++ This bug was initially created as a clone of Bug #1225465 +++
+++ This bug was initially created as a clone of Bug #1224064 +++

Description of problem:
When a volume is created and a corresponding glusterfind session exists for it, glusterfind list shows an entry with the session name and volume name. After the volume is deleted, that session entry should be removed from the output of glusterfind list. glusterfind list should only ever show the glusterfind sessions that are still active in the cluster.

Version-Release number of selected component (if applicable):

How reproducible:
Always

Additional info:

[root@dhcp43-140 ~]# gluster v info

Volume Name: nash
Type: Distributed-Replicate
Volume ID: ef2333ce-e513-43df-8306-fec77cc479b4
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 10.70.43.140:/rhs/thinbrick1/nash/dd
Brick2: 10.70.42.75:/rhs/thinbrick1/dd
Brick3: 10.70.43.140:/rhs/thinbrick2/nash/dd
Brick4: 10.70.42.75:/rhs/thinbrick2/dd
Options Reconfigured:
performance.readdir-ahead: on

Volume Name: vol1
Type: Distributed-Replicate
Volume ID: 44f06391-1635-4897-98c2-848e5ae92640
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 10.70.43.140:/rhs/brick1/dd
Brick2: 10.70.42.75:/rhs/brick1/dd
Brick3: 10.70.43.140:/rhs/brick2/dd
Brick4: 10.70.42.75:/rhs/brick2/dd
Options Reconfigured:
changelog.changelog: on
storage.build-pgfid: on
performance.readdir-ahead: on

[root@dhcp43-140 ~]# glusterfind list
SESSION            VOLUME         SESSION TIME
---------------------------------------------------------------------------
svol1              vol1           2015-05-22 17:02:01
snash              nash           2015-05-22 17:02:58
sess1              ozone          Session Corrupted

[root@dhcp43-140 ~]# gluster v stop nash
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
volume stop: nash: success
[root@dhcp43-140 ~]# gluster v delete nash
Deleting volume will erase all information about the volume. Do you want to continue? (y/n) y
volume delete: nash: success

[root@dhcp43-140 ~]# gluster v info

Volume Name: vol1
Type: Distributed-Replicate
Volume ID: 44f06391-1635-4897-98c2-848e5ae92640
Status: Stopped
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 10.70.43.140:/rhs/brick1/dd
Brick2: 10.70.42.75:/rhs/brick1/dd
Brick3: 10.70.43.140:/rhs/brick2/dd
Brick4: 10.70.42.75:/rhs/brick2/dd
Options Reconfigured:
changelog.changelog: on
storage.build-pgfid: on
performance.readdir-ahead: on

[root@dhcp43-140 ~]# glusterfind list
SESSION            VOLUME         SESSION TIME
---------------------------------------------------------------------------
svol1              vol1           2015-05-22 17:02:01
snash              nash           2015-05-22 17:02:58
sess1              ozone          Session Corrupted

--- Additional comment from Aravinda VK on 2015-05-27 04:02:07 EDT ---

glusterfind is an independent tool which does not know when a volume is deleted. The `glusterfind delete` command needs to be run before the volume is deleted.
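In other words, the expected workflow is to remove the glusterfind session before deleting the volume. A minimal sketch using the session and volume names from this report (commands only, output omitted; illustrative rather than an actual run):

    glusterfind delete snash nash    # remove the session that tracks the volume
    gluster volume stop nash         # stop the volume
    gluster volume delete nash       # delete now leaves no stale session behind
    glusterfind list                 # the snash/nash entry should no longer be listed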
--- Additional comment from Anand Avati on 2015-05-27 08:59:10 EDT ---

REVIEW: http://review.gluster.org/10944 (tools/glusterfind: Cleanup glusterfind dir after a volume delete) posted (#1) for review on master by Aravinda VK (avishwan)

--- Additional comment from Anand Avati on 2015-05-29 03:34:23 EDT ---

REVIEW: http://review.gluster.org/10944 (tools/glusterfind: Cleanup glusterfind dir after a volume delete) posted (#2) for review on master by Aravinda VK (avishwan)

--- Additional comment from Anand Avati on 2015-06-02 03:22:48 EDT ---

REVIEW: http://review.gluster.org/10944 (tools/glusterfind: Cleanup glusterfind dir after a volume delete) posted (#3) for review on master by Aravinda VK (avishwan)

--- Additional comment from Anand Avati on 2015-06-04 03:14:31 EDT ---

REVIEW: http://review.gluster.org/10944 (tools/glusterfind: Cleanup glusterfind dir after a volume delete) posted (#4) for review on master by Aravinda VK (avishwan)

--- Additional comment from Anand Avati on 2015-06-04 04:06:13 EDT ---

REVIEW: http://review.gluster.org/10944 (tools/glusterfind: Cleanup glusterfind dir after a volume delete) posted (#5) for review on master by Aravinda VK (avishwan)

--- Additional comment from Anand Avati on 2015-06-04 06:16:52 EDT ---

REVIEW: http://review.gluster.org/10944 (tools/glusterfind: Cleanup glusterfind dir after a volume delete) posted (#6) for review on master by Aravinda VK (avishwan)

--- Additional comment from Anand Avati on 2015-06-09 04:36:25 EDT ---

REVIEW: http://review.gluster.org/10944 (tools/glusterfind: Cleanup glusterfind dir after a volume delete) posted (#8) for review on master by Aravinda VK (avishwan)

--- Additional comment from Anand Avati on 2015-06-10 12:43:38 EDT ---

REVIEW: http://review.gluster.org/10944 (tools/glusterfind: Cleanup glusterfind dir after a volume delete) posted (#9) for review on master by Aravinda VK (avishwan)

--- Additional comment from Anand Avati on 2015-06-12 06:01:21 EDT ---

COMMIT: http://review.gluster.org/10944 committed in master by Vijay Bellur (vbellur)
------
commit d28b226131d420070fa5cee921a4ad0be9d6446a
Author: Aravinda VK <avishwan>
Date:   Wed May 27 18:05:35 2015 +0530

    tools/glusterfind: Cleanup glusterfind dir after a volume delete

    If the `glusterfind delete` command was not run before the volume was
    deleted, stale session directories remain in the
    /var/lib/glusterd/glusterfind directory, and these sessions still show
    up in `glusterfind list`. When a volume is deleted, a post hook is now
    run which cleans up the stale session directories.

    BUG: 1225465
    Change-Id: I54c46c30313e92c1bb4cb07918ed2029b375462c
    Signed-off-by: Aravinda VK <avishwan>
    Reviewed-on: http://review.gluster.org/10944
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Kotresh HR <khiremat>
    Reviewed-by: Vijay Bellur <vbellur>
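For context, the fix works by installing a volume-delete post hook that removes the leftover glusterfind state. A rough shell equivalent of that cleanup, assuming the per-session layout /var/lib/glusterd/glusterfind/<SESSION>/<VOLNAME> (the real hook is the Python script S57glusterfind-delete-post.py; the variable names below are illustrative):

    VOLNAME=nash                             # glusterd passes the deleted volume's name to the hook
    for session_dir in /var/lib/glusterd/glusterfind/*/; do
        rm -rf "${session_dir}${VOLNAME}"    # drop this session's state for the deleted volume
        rmdir "${session_dir}" 2>/dev/null   # also remove the session directory if it is now empty
    done

After such a cleanup, `glusterfind list` no longer reports sessions for the deleted volume.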
--- Additional comment from Anand Avati on 2015-06-17 18:25:24 EDT ---

REVIEW: http://review.gluster.org/11298 (rpm: include required directory for glusterfind) posted (#1) for review on master by Niels de Vos (ndevos)

--- Additional comment from Anand Avati on 2015-06-18 04:51:26 EDT ---

REVIEW: http://review.gluster.org/11298 (rpm: include required directory for glusterfind) posted (#2) for review on master by Niels de Vos (ndevos)

--- Additional comment from Anand Avati on 2015-08-18 05:31:32 EDT ---

REVIEW: http://review.gluster.org/11298 (rpm: include required directory for glusterfind) posted (#3) for review on master by Aravinda VK (avishwan)

--- Additional comment from Anand Avati on 2015-08-19 01:21:43 EDT ---

REVIEW: http://review.gluster.org/11298 (rpm: include required directory for glusterfind) posted (#4) for review on master by Aravinda VK (avishwan)

--- Additional comment from Anand Avati on 2015-08-19 06:33:17 EDT ---

COMMIT: http://review.gluster.org/11298 committed in master by Kaleb KEITHLEY (kkeithle)
------
commit 454bd09b8befc27552591855a8d02a0ad19877d9
Author: Niels de Vos <ndevos>
Date:   Thu Jun 18 00:21:59 2015 +0200

    rpm: include required directory for glusterfind

    The directory was marked as %ghost, which causes the following
    installation failure:

        Error unpacking rpm package glusterfs-server-3.8dev-0.446.git45e13fe.el7.centos.x86_64
        error: unpacking of archive failed on file /var/lib/glusterd/hooks/1/delete/post/S57glusterfind-delete-post.py;5581f20e: cpio: open

    Also, *all* Python files should be part of the RPM package. This
    includes generated .pyc and .pyo files.

    BUG: 1225465
    Change-Id: Iee74905b101912c4a845257742c470c3fe42ce2a
    Signed-off-by: Niels de Vos <ndevos>
    Signed-off-by: Aravinda VK <avishwan>
    Reviewed-on: http://review.gluster.org/11298
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Kaleb KEITHLEY <kkeithle>

--- Additional comment from Anand Avati on 2015-08-24 06:45:02 EDT ---

REVIEW: http://review.gluster.org/12000 (rpm: include required directory for glusterfind) posted (#1) for review on release-3.7 by Aravinda VK (avishwan)
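A quick way to check the packaging fix on an installed system, assuming the glusterfs-server package name seen in the error above (standard rpm queries; output omitted):

    # the hook script should be owned by the package rather than left unpackaged
    rpm -qf /var/lib/glusterd/hooks/1/delete/post/S57glusterfind-delete-post.py
    # the packaged file list should include the glusterfind files, including compiled .pyc/.pyo
    rpm -ql glusterfs-server | grep -i glusterfind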
REVIEW: http://review.gluster.org/12000 (rpm: include required directory for glusterfind) posted (#2) for review on release-3.7 by Aravinda VK (avishwan)
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.7.4, please open a new bug report.

glusterfs-3.7.4 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/12496
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user