Problem: If a brick crashes after an entry (file or dir) is created but before its gfid is assigned, the good bricks will have pending entry-heal xattrs, but the heal won't complete because afr_selfheal_recreate_entry() tries to create the entry again, which fails with EEXIST.
REVIEW: https://review.gluster.org/18326 (afr: heal gfid as a part of entry heal) posted (#1) for review on master by Ravishankar N (ravishankar)
REVIEW: https://review.gluster.org/18326 (afr: heal gfid as a part of entry heal) posted (#2) for review on master by Ravishankar N (ravishankar)
REVIEW: https://review.gluster.org/18326 (afr: heal gfid as a part of entry heal) posted (#3) for review on master by Ravishankar N (ravishankar)
REVIEW: https://review.gluster.org/18326 (afr: heal gfid as a part of entry heal) posted (#4) for review on master by Ravishankar N (ravishankar)
REVIEW: https://review.gluster.org/18326 (afr: heal gfid as a part of entry heal) posted (#5) for review on master by Ravishankar N (ravishankar)
REVIEW: https://review.gluster.org/18326 (afr: heal gfid as a part of entry heal) posted (#6) for review on master by Ravishankar N (ravishankar)
COMMIT: https://review.gluster.org/18326 committed in master by Pranith Kumar Karampuri (pkarampu) ------

commit 20fa80057eb430fd72b4fa31b9b65598b8ec1265
Author: Ravishankar N <ravishankar>
Date:   Wed Sep 20 12:16:06 2017 +0530

    afr: heal gfid as a part of entry heal

    Problem: If a brick crashes after an entry (file or dir) is created
    but before gfid is assigned, the good bricks will have pending entry
    heal xattrs but the heal won't complete because
    afr_selfheal_recreate_entry() tries to create the entry again and it
    fails with EEXIST.

    Fix: We could have fixed posix_mknod/mkdir etc. to assign the gfid if
    the file already exists, but the right thing to do seems to be to
    trigger a lookup on the bad brick and let it heal the gfid instead of
    winding an mknod/mkdir in the first place.

    Change-Id: I82f76665a7541f1893ef8d847b78af6466aff1ff
    BUG: 1493415
    Signed-off-by: Ravishankar N <ravishankar>
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.13.0, please open a new bug report.

glusterfs-3.13.0 has been announced on the Gluster mailinglists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/announce/2017-December/000087.html
[2] https://www.gluster.org/pipermail/gluster-users/