Bug 1129702 - [Dist-geo-rep]: During rm -rf on the master mount-point, shutting down an active node, waiting, and bringing it back up results in a few files not getting removed from the slave.
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: geo-replication
Version: mainline
Hardware: x86_64
OS: Linux
Importance: unspecified high
Target Milestone: ---
Assignee: Aravinda VK
Depends On: 1129218
 
Reported: 2014-08-13 13:42 UTC by Aravinda VK
Modified: 2015-05-14 17:35 UTC

Fixed In Version: glusterfs-3.7.0beta1
Doc Type: Bug Fix
Clone Of: 1129218
Last Closed: 2015-05-14 17:26:10 UTC
Regression: ---
Mount Type: ---
Documentation: ---



Description Aravinda VK 2014-08-13 13:42:41 UTC
+++ This bug was initially created as a clone of Bug #1129218 +++

Description of problem: During rm -rf on the master mount-point, shutting down an active node, waiting, and bringing it back up results in a few files not getting removed from the slave.

After the active node reboots, some of the gsyncd processes get stuck trying to delete a directory that is not empty. The files that were supposed to be deleted from that directory had no entry in the changelog of the active node that was rebooted. For some reason, deletes performed during self-heal are not being captured in the changelog.

Version-Release number of selected component (if applicable): glusterfs-3.6.0.27-1


How reproducible: Happens most of the time.


Steps to Reproduce:
1. Create a geo-rep relationship between the master and the slave.
2. Create some data on the master and let it sync to the slave.
3. Start removing the created files using rm -rf.
4. In parallel, shut down one of the active nodes.
5. After some time, bring the active node back up.
6. Check whether the data has been completely removed from the slave.

Actual results: During rm -rf on the master mount-point, shutting down an active node, waiting, and bringing it back up results in a few files not getting removed from the slave.


Expected results: Even if a node was rebooted, geo-replication should still be able to remove the files from the slave.


Additional info:

Comment 1 Anand Avati 2014-08-13 13:45:53 UTC
REVIEW: http://review.gluster.org/8477 (geo-rep: Handle RMDIR recursively) posted (#2) for review on master by Aravinda VK (avishwan@redhat.com)

Comment 2 Anand Avati 2014-08-13 14:04:26 UTC
REVIEW: http://review.gluster.org/8477 (geo-rep: Handle RMDIR recursively) posted (#3) for review on master by Aravinda VK (avishwan@redhat.com)

Comment 3 Anand Avati 2014-08-15 02:56:49 UTC
COMMIT: http://review.gluster.org/8477 committed in master by Venky Shankar (vshankar@redhat.com) 
------
commit 2510af16744f7825c91bed4507968181050bbf88
Author: Aravinda VK <avishwan@redhat.com>
Date:   Tue Aug 12 18:19:30 2014 +0530

    geo-rep: Handle RMDIR recursively
    
    If an RMDIR recorded in the brick changelog is due to
    self-heal traffic, it will not have UNLINK entries for the
    child files, and geo-rep hangs with an ENOTEMPTY error on
    the slave.
    
    Now geo-rep recursively deletes the directory if it gets ENOTEMPTY.
    
    BUG: 1129702
    Change-Id: Iacfe6a05d4b3a72b68c3be7fd19f10af0b38bcd1
    Signed-off-by: Aravinda VK <avishwan@redhat.com>
    Reviewed-on: http://review.gluster.org/8477
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Venky Shankar <vshankar@redhat.com>
    Tested-by: Venky Shankar <vshankar@redhat.com>
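
The fallback described in the commit message can be sketched in Python (the language gsyncd is written in). This is a minimal illustration, not the actual gsyncd code; `safe_rmdir` is a hypothetical helper name:

```python
import errno
import os
import shutil

def safe_rmdir(path):
    """Remove a directory on the slave.

    If the changelog missed UNLINK entries for child files (e.g. deletes
    performed by self-heal), a plain rmdir fails with ENOTEMPTY forever.
    In that case, fall back to deleting the directory tree recursively,
    mirroring the approach of the fix posted at review.gluster.org/8477.
    """
    try:
        os.rmdir(path)
    except OSError as e:
        if e.errno == errno.ENOTEMPTY:
            # Directory still has children the changelog never told us
            # about: remove the whole tree instead of hanging.
            shutil.rmtree(path)
        elif e.errno == errno.ENOENT:
            pass  # already gone, nothing to do
        else:
            raise
```

The key design point is that ENOTEMPTY is treated as a recoverable condition rather than a retryable one, so the worker makes progress instead of looping on the same RMDIR.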

Comment 4 Niels de Vos 2015-05-14 17:26:10 UTC
This bug is being closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.7.0, please open a new bug report.

glusterfs-3.7.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/10939
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user

