+++ This bug was initially created as a clone of Bug #1256243 +++

Description of problem:
Because of a stale-layout issue, a mknod operation may land on a decommissioned brick even after the parent layout has been fixed.

Version-Release number of selected component (if applicable):

How reproducible:
Quite frequent with an NFS mount.

Steps to Reproduce:
1. Create a distribute volume with more than one brick.
2. Remove one of the bricks.
3. After the remove-brick, issue a large number of mknod calls (see the repro sketch below).

Actual results:
Some files are left on the removed brick after the remove-brick commit.

--- Additional comment from Anand Avati on 2015-08-25 13:53:51 MVT ---

COMMIT: http://review.gluster.org/11998 committed in master by Raghavendra G (rgowdapp)
------
commit 90c7c30c3aa9417793ae972b2b9051bc5200e7e4
Author: Susant Palai <spalai>
Date:   Mon Aug 24 03:04:41 2015 -0400

    cluster/dht: avoid mknod on decommissioned brick

    Change-Id: I8c39ce38e257758e27e11ccaaff4798138203e0c
    BUG: 1256243
    Signed-off-by: Susant Palai <spalai>
    Reviewed-on: http://review.gluster.org/11998
    Tested-by: NetBSD Build System <jenkins.org>
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Raghavendra G <rgowdapp>
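For reference, a minimal sketch of the reproducer described in the steps above. The volume name, brick paths, and mount point are invented for illustration; the gluster CLI setup is shown in the comment, and the loop creates FIFOs so it does not need root privileges.

    /* Repro sketch for steps 1-3 above; "testvol", the brick paths,
     * and the mount point are hypothetical. Assumed setup:
     *
     *   gluster volume create testvol host:/bricks/b0 host:/bricks/b1
     *   gluster volume start testvol
     *   mount -t nfs -o vers=3 host:/testvol /mnt/testvol
     *   gluster volume remove-brick testvol host:/bricks/b1 start
     *
     * Then issue a large number of mknod calls on the NFS mount. */
    #include <stdio.h>
    #include <sys/stat.h>
    #include <sys/types.h>

    int main(void)
    {
        char path[128];

        for (int i = 0; i < 10000; i++) {
            snprintf(path, sizeof(path), "/mnt/testvol/fifo-%d", i);
            if (mknod(path, S_IFIFO | 0644, 0) == -1)
                perror(path);
        }
        return 0;
    }

After `gluster volume remove-brick testvol host:/bricks/b1 commit`, the removed brick's backend directory can be inspected for leftover files to confirm the bug.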
REVIEW: http://review.gluster.org/12024 (cluster/dht: avoid mknod on decommissioned brick) posted (#1) for review on release-3.7 by Susant Palai (spalai)
COMMIT: http://review.gluster.org/12024 committed in release-3.7 by Raghavendra G (rgowdapp)
------
commit 315edb7868fd3a726c9c99a2ce710f8421440a65
Author: Susant Palai <spalai>
Date:   Mon Aug 24 03:04:41 2015 -0400

    cluster/dht: avoid mknod on decommissioned brick

    BUG: 1256702
    Change-Id: I0795720cb77a9c77e608f34fbb69574fd2acb542
    Signed-off-by: Susant Palai <spalai>
    Reviewed-on: http://review.gluster.org/11998
    Tested-by: NetBSD Build System <jenkins.org>
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Raghavendra G <rgowdapp>
    Signed-off-by: Susant Palai <spalai>
    Reviewed-on: http://review.gluster.org/12024
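The patch itself lives in cluster/dht. As a rough, self-contained model of the decision it introduces (all identifiers below are invented, not the actual GlusterFS sources): when the subvolume a new file's name hashes to is being decommissioned, the mknod must not be wound there but redirected to a subvolume that will survive the remove-brick.

    #include <stdio.h>

    /* Toy model of the dht mknod targeting decision; "struct subvol"
     * and both helpers are hypothetical stand-ins, not gluster code. */
    struct subvol {
        const char *name;
        int decommissioned;   /* set once remove-brick starts */
    };

    /* Hash the file name onto a subvolume (trivial stand-in for the
     * real dht hash). */
    static struct subvol *hashed_subvol(struct subvol *subvols, int n,
                                        const char *name)
    {
        unsigned h = 0;
        for (const char *p = name; *p; p++)
            h = h * 31 + (unsigned char)*p;
        return &subvols[h % n];
    }

    /* Pick the target for mknod: if the hashed subvolume is being
     * decommissioned, fall back to one that is not, instead of
     * blindly winding the op to a brick that is going away. */
    static struct subvol *mknod_target(struct subvol *subvols, int n,
                                       const char *name)
    {
        struct subvol *s = hashed_subvol(subvols, n, name);
        if (!s->decommissioned)
            return s;
        for (int i = 0; i < n; i++)
            if (!subvols[i].decommissioned)
                return &subvols[i];
        return NULL;   /* no usable subvolume */
    }

    int main(void)
    {
        struct subvol subvols[] = {
            { "brick-0", 0 },
            { "brick-1", 1 },   /* being removed */
        };
        struct subvol *t = mknod_target(subvols, 2, "fifo-42");
        printf("mknod lands on %s\n", t ? t->name : "(none)");
        return 0;
    }

The model only captures the avoid-decommissioned-brick invariant; the real change also has to cope with rebalance updating the layout in parallel, which is where the stale-layout window came from in the first place.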
This bug is being closed because a release that should address the reported issue has been made available. If the problem is still not fixed with glusterfs-3.7.4, please open a new bug report.

glusterfs-3.7.4 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/12496
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user