+++ This bug was initially created as a clone of Bug #1129218 +++
Description of problem: During an rm -rf on the master mount-point, shutting down an active node, waiting, and bringing it back up results in a few files not being removed from the slave.
After the active node reboots, some of the gsyncd processes get stuck trying to delete a directory that is not empty. The files that should have been deleted from that directory had no entries in the changelog of the rebooted active node: for some reason, deletes performed by self-heal are not being captured in the changelog.
Version-Release number of selected component (if applicable): glusterfs-188.8.131.52-1
How reproducible: Happens most of the time.
Steps to Reproduce:
1. Create a geo-rep relationship between master and slave.
2. Create some data on master and let it sync to the slave.
3. Start removing the created files using rm -rf.
4. In parallel, shut down one of the active nodes.
5. After some time, bring the active node back up.
6. Check whether the data has been completely removed from the slave.
Actual results: After the rm -rf on the master mount-point and the shutdown/restart of an active node, a few files are not removed from the slave.
Expected results: Even if nodes are rebooted, geo-replication should still remove all the files from the slave.
REVIEW: http://review.gluster.org/8477 (geo-rep: Handle RMDIR recursively) posted (#2) for review on master by Aravinda VK (email@example.com)
REVIEW: http://review.gluster.org/8477 (geo-rep: Handle RMDIR recursively) posted (#3) for review on master by Aravinda VK (firstname.lastname@example.org)
COMMIT: http://review.gluster.org/8477 committed in master by Venky Shankar (email@example.com)
Author: Aravinda VK <firstname.lastname@example.org>
Date: Tue Aug 12 18:19:30 2014 +0530
geo-rep: Handle RMDIR recursively
If an RMDIR recorded in the brick changelog is due to
self-heal traffic, it will not have UNLINK entries for the
child files, and geo-rep hangs with ENOTEMPTY on the slave.
Now geo-rep deletes the directory recursively if it gets ENOTEMPTY.
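Purely as illustration (the actual fix lives in gsyncd; see review 8477), the fallback described above can be sketched as follows. The function name handle_rmdir is hypothetical, not the name used in the gsyncd source:

```python
import errno
import os
import shutil

def handle_rmdir(path):
    """Remove a directory, falling back to recursive removal on ENOTEMPTY.

    When the changelog only recorded the RMDIR (self-heal traffic carries
    no UNLINK entries for the children), a plain rmdir on the slave fails
    with ENOTEMPTY, so the directory is purged recursively instead.
    """
    try:
        os.rmdir(path)
    except OSError as e:
        if e.errno == errno.ENOTEMPTY:
            # Children had no UNLINK entries in the changelog; remove
            # the whole subtree so the sync can make progress.
            shutil.rmtree(path)
        else:
            raise
```

The key design point is that any other rmdir error (ENOENT, EACCES, ...) is still re-raised, so only the specific stuck-on-ENOTEMPTY case takes the recursive path.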
Signed-off-by: Aravinda VK <email@example.com>
Tested-by: Gluster Build System <firstname.lastname@example.org>
Reviewed-by: Venky Shankar <email@example.com>
Tested-by: Venky Shankar <firstname.lastname@example.org>
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.7.0, please open a new bug report.
glusterfs-3.7.0 has been announced on the Gluster mailing lists; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list and the update infrastructure for your distribution.