Bug 1342178
Summary: | Directory creation(mkdir) fails when the remove brick is initiated for replicated volumes accessing via nfs-ganesha | |
---|---|---|---
Product: | [Community] GlusterFS | Reporter: | Pranith Kumar K <pkarampu>
Component: | replicate | Assignee: | Pranith Kumar K <pkarampu>
Status: | CLOSED CURRENTRELEASE | QA Contact: |
Severity: | urgent | Docs Contact: |
Priority: | unspecified | |
Version: | 3.8.0 | CC: | amukherj, bugs, jthottan, kkeithle, ndevos, pkarampu, rcyriac, rgowdapp, rhinduja, skoduri, sraj, storage-qa-internal
Target Milestone: | --- | Keywords: | Triaged
Target Release: | --- | |
Hardware: | x86_64 | |
OS: | Linux | |
Whiteboard: | | |
Fixed In Version: | glusterfs-3.8.0 | Doc Type: | If docs needed, set a value
Doc Text: | | Story Points: | ---
Clone Of: | 1340623 | Environment: |
Last Closed: | 2016-06-16 12:33:09 UTC | Type: | Bug
Regression: | --- | Mount Type: | ---
Documentation: | --- | CRM: |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: |
Cloudforms Team: | --- | Target Upstream Version: |
Embargoed: | | |
Bug Depends On: | 1340085, 1340623 | |
Bug Blocks: | 1311817, 1340992 | |
Comment 1
Vijay Bellur
2016-06-02 17:08:52 UTC
REVIEW: http://review.gluster.org/14618 (cluster/afr: Unwind with xdata in inode-write fops) posted (#1) for review on release-3.8 by Pranith Kumar Karampuri (pkarampu)

COMMIT: http://review.gluster.org/14617 committed in release-3.8 by Niels de Vos (ndevos)

------

commit de56d9591ed94fc6f77e6f97ea6bbfaeae8e19fd
Author: Pranith Kumar K <pkarampu>
Date: Fri May 27 15:47:07 2016 +0530

    cluster/afr: Unwind xdata_rsp even in case of failures

    DHT expects GF_PREOP_CHECK_FAILED to be present in xdata_rsp in
    case of mkdir failures because of stale layout. But AFR was
    unwinding null xdata_rsp in case of failures. This was leading to
    mkdir failures just after remove-brick. Unwind the xdata_rsp in
    case of failures to make sure the response from brick reaches dht.

    >BUG: 1340623
    >Change-Id: Idd3f7b95730e8ea987b608e892011ff190e181d1
    >Signed-off-by: Pranith Kumar K <pkarampu>
    >Reviewed-on: http://review.gluster.org/14553
    >NetBSD-regression: NetBSD Build System <jenkins.org>
    >Reviewed-by: Ravishankar N <ravishankar>
    >Smoke: Gluster Build System <jenkins.com>
    >CentOS-regression: Gluster Build System <jenkins.com>
    >Reviewed-by: Anuradha Talur <atalur>
    >Reviewed-by: Krutika Dhananjay <kdhananj>

    BUG: 1342178
    Change-Id: Iaacadcad0f76979fb250bd008b8e43f0e7acf642
    Signed-off-by: Pranith Kumar K <pkarampu>
    Reviewed-on: http://review.gluster.org/14617
    Smoke: Gluster Build System <jenkins.com>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.com>
    Reviewed-by: Krutika Dhananjay <kdhananj>
    Reviewed-by: Niels de Vos <ndevos>

COMMIT: http://review.gluster.org/14618 committed in release-3.8 by Niels de Vos (ndevos)

------

commit 1cd0e86cea9a6d3e52340cfa33622bfb4b9ce4d6
Author: Pranith Kumar K <pkarampu>
Date: Tue May 31 14:49:33 2016 +0530

    cluster/afr: Unwind with xdata in inode-write fops

    When there is a failure afr was not unwinding xdata to xlators
    above. xdata need not be NULL on failures. So it is important to
    send it to parent xlators.

    >Change-Id: Ic36aac10a79fa91121961932dd1920cb1c2c3a4c
    >BUG: 1340623
    >Signed-off-by: Pranith Kumar K <pkarampu>
    >Reviewed-on: http://review.gluster.org/14567
    >Smoke: Gluster Build System <jenkins.com>
    >NetBSD-regression: NetBSD Build System <jenkins.org>
    >CentOS-regression: Gluster Build System <jenkins.com>
    >Reviewed-by: Jeff Darcy <jdarcy>

    BUG: 1342178
    Change-Id: Idd74d2bc898fe5aef537ab48c1754510030c8825
    Signed-off-by: Pranith Kumar K <pkarampu>
    Reviewed-on: http://review.gluster.org/14618
    Smoke: Gluster Build System <jenkins.com>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.com>
    Reviewed-by: Niels de Vos <ndevos>

This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.8.0, please open a new bug report.

glusterfs-3.8.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution.

[1] http://blog.gluster.org/2016/06/glusterfs-3-8-released/
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user