DHT expects GF_PREOP_CHECK_FAILED to be present in xdata_rsp when mkdir fails because of a stale layout. But AFR was unwinding a null xdata_rsp on failures, which led to mkdir failures immediately after a remove-brick. Unwind xdata_rsp even in the failure case, to make sure the brick's response reaches DHT.
REVIEW: http://review.gluster.org/14553 (cluster/afr: Unwind xdata_rsp even in case of failures) posted (#1) for review on master by Pranith Kumar Karampuri (pkarampu)
REVIEW: http://review.gluster.org/14553 (cluster/afr: Unwind xdata_rsp even in case of failures) posted (#2) for review on master by Pranith Kumar Karampuri (pkarampu)
COMMIT: http://review.gluster.org/14553 committed in master by Pranith Kumar Karampuri (pkarampu)

commit 3d75e32d6ada03c979077681ff414d948800f07e
Author: Pranith Kumar K <pkarampu>
Date:   Fri May 27 15:47:07 2016 +0530

    cluster/afr: Unwind xdata_rsp even in case of failures

    DHT expects GF_PREOP_CHECK_FAILED to be present in xdata_rsp in case of
    mkdir failures because of stale layout. But AFR was unwinding null
    xdata_rsp in case of failures. This was leading to mkdir failures just
    after remove-brick. Unwind the xdata_rsp in case of failures to make
    sure the response from brick reaches dht.

    BUG: 1340623
    Change-Id: Idd3f7b95730e8ea987b608e892011ff190e181d1
    Signed-off-by: Pranith Kumar K <pkarampu>
    Reviewed-on: http://review.gluster.org/14553
    NetBSD-regression: NetBSD Build System <jenkins.org>
    Reviewed-by: Ravishankar N <ravishankar>
    Smoke: Gluster Build System <jenkins.com>
    CentOS-regression: Gluster Build System <jenkins.com>
    Reviewed-by: Anuradha Talur <atalur>
    Reviewed-by: Krutika Dhananjay <kdhananj>
REVIEW: http://review.gluster.org/14561 (cluster/afr adding test case for http://review.gluster.org/#/c/14553/) posted (#1) for review on master by jiffin tony Thottan (jthottan)
REVIEW: http://review.gluster.org/14567 (cluster/afr: Unwind with xdata in inode-write fops) posted (#1) for review on master by Pranith Kumar Karampuri (pkarampu)
COMMIT: http://review.gluster.org/14561 committed in master by Pranith Kumar Karampuri (pkarampu)

commit 1126ebcf667771267a47ea9749ed5f30a76d0d60
Author: Jiffin Tony Thottan <jthottan>
Date:   Tue May 31 12:29:10 2016 +0530

    cluster/afr adding test case for http://review.gluster.org/#/c/14553/

    Change-Id: I23865343021ae65a36f6abc74d6bd594efd9dc7e
    BUG: 1340623
    Signed-off-by: Jiffin Tony Thottan <jthottan>
    Reviewed-on: http://review.gluster.org/14561
    Reviewed-by: Pranith Kumar Karampuri <pkarampu>
    Tested-by: Pranith Kumar Karampuri <pkarampu>
    Reviewed-by: Anuradha Talur <atalur>
    Reviewed-by: Ravishankar N <ravishankar>
    Smoke: Gluster Build System <jenkins.com>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.com>
COMMIT: http://review.gluster.org/14567 committed in master by Jeff Darcy (jdarcy)

commit 46c0b791d528bebf1168972a34f7483bfe683ba3
Author: Pranith Kumar K <pkarampu>
Date:   Tue May 31 14:49:33 2016 +0530

    cluster/afr: Unwind with xdata in inode-write fops

    When there is a failure, AFR was not unwinding xdata to the xlators
    above. xdata need not be NULL on failures, so it is important to send
    it to the parent xlators.

    Change-Id: Ic36aac10a79fa91121961932dd1920cb1c2c3a4c
    BUG: 1340623
    Signed-off-by: Pranith Kumar K <pkarampu>
    Reviewed-on: http://review.gluster.org/14567
    Smoke: Gluster Build System <jenkins.com>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.com>
    Reviewed-by: Jeff Darcy <jdarcy>
Moving to MODIFIED, as all patches sent on this bug appear to have been merged.
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.9.0, please open a new bug report.

glusterfs-3.9.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/gluster-users/2016-November/029281.html
[2] https://www.gluster.org/pipermail/gluster-users/