Bug 1408712
| Field | Value |
|---|---|
| Summary | With granular-entry-self-heal enabled, a GFID mismatch is seen and the VM goes to a paused state after migrating to another host |
| Product | [Community] GlusterFS |
| Reporter | Krutika Dhananjay <kdhananj> |
| Component | replicate |
| Assignee | Krutika Dhananjay <kdhananj> |
| Status | CLOSED CURRENTRELEASE |
| QA Contact | |
| Severity | high |
| Docs Contact | |
| Priority | unspecified |
| Version | mainline |
| CC | amukherj, bugs, knarra, ksandha, nchilaka, rcyriac, rhinduja, rhs-bugs, sasundar, storage-qa-internal |
| Target Milestone | --- |
| Keywords | Triaged |
| Target Release | --- |
| Hardware | x86_64 |
| OS | Linux |
| Whiteboard | |
| Fixed In Version | glusterfs-3.10.0 |
| Doc Type | If docs needed, set a value |
| Doc Text | |
| Story Points | --- |
| Clone Of | 1408426 |
| Clones | 1408785, 1408786 |
| Environment | |
| Last Closed | 2017-03-06 17:40:49 UTC |
| Type | Bug |
| Regression | --- |
| Mount Type | --- |
| Documentation | --- |
| CRM | |
| Verified Versions | |
| Category | --- |
| oVirt Team | --- |
| RHEL 7.3 requirements from Atomic Host | |
| Cloudforms Team | --- |
| Target Upstream Version | |
| Embargoed | |
| Bug Depends On | 1408426 |
| Bug Blocks | 1400057, 1408785, 1408786 |
Description
Krutika Dhananjay
2016-12-26 14:57:40 UTC
REVIEW: http://review.gluster.org/16286 (cluster/afr: Fix missing name indices due to EEXIST error) posted (#1) for review on master by Krutika Dhananjay (kdhananj)

REVIEW: http://review.gluster.org/16286 (cluster/afr: Fix missing name indices due to EEXIST error) posted (#2) for review on master by Krutika Dhananjay (kdhananj)

COMMIT: http://review.gluster.org/16286 committed in master by Pranith Kumar Karampuri (pkarampu)

------

commit da5ece887c218a7c572a1c25925a178dbd08d464
Author: Krutika Dhananjay <kdhananj>
Date: Mon Dec 26 21:08:03 2016 +0530

cluster/afr: Fix missing name indices due to EEXIST error

PROBLEM:
Consider a volume with granular-entry-heal and sharding enabled. When a replica is down and a shard is created as part of a write, the name index is correctly created under indices/entry-changes/<dot-shard-gfid>. When a read on the same region then triggers another MKNOD, the fop fails on the online bricks with EEXIST. Because this is a symmetric error, the failed_subvols[] array is reset to all zeroes. As a result, before post-op, GF_XATTROP_ENTRY_OUT_KEY is set, causing the name index created by the previous MKNOD to be wrongly deleted by THIS MKNOD.

FIX:
The ideal fix would be for a transaction to delete the name index ONLY if it knows it is the one that created the index in the first place. This would involve gathering information from the individual bricks about whether THIS xattrop created the index, aggregating their responses, and deciding whether to delete the index based on the various possible combinations of responses. This is rather complex.

The simpler fix is for post-op to examine local->op_ret, in the event of no failed_subvols, to decide whether to delete the name index. This can occasionally leave behind stale name indices, but they do not affect the IO path or interfere with pending changelogs in any way, and self-heal, in its crawl of the "entry-changes" directory, takes care of deleting such indices.

Change-Id: Ic1b5257f4dc9c20cb740a866b9598cf785a1affa
BUG: 1408712
Signed-off-by: Krutika Dhananjay <kdhananj>
Reviewed-on: http://review.gluster.org/16286
Smoke: Gluster Build System <jenkins.org>
Reviewed-by: Pranith Kumar Karampuri <pkarampu>
NetBSD-regression: NetBSD Build System <jenkins.org>
CentOS-regression: Gluster Build System <jenkins.org>

------

This bug is being closed because a release that should address the reported issue has been made available. If the problem is still not fixed with glusterfs-3.10.0, please open a new bug report.

glusterfs-3.10.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/gluster-users/2017-February/030119.html
[2] https://www.gluster.org/pipermail/gluster-users/
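
For readers unfamiliar with the AFR transaction internals, the following standalone C sketch models the post-op decision described under FIX. It is a minimal illustration with hypothetical names (fake_txn, should_delete_name_index), not the actual afr code: the assumption it encodes is that the granular name index is deleted only when no subvolume failed and the fop itself succeeded (op_ret == 0), so a symmetric EEXIST failure leaves the index created by the earlier MKNOD in place.

```c
/*
 * Minimal sketch (hypothetical names, NOT the actual afr code) of the
 * post-op decision described in the commit message above: delete the
 * granular name index only when no subvolume failed AND the fop itself
 * succeeded, so a symmetric EEXIST failure does not remove an index
 * created by an earlier transaction.
 */
#include <errno.h>
#include <stdbool.h>
#include <stdio.h>

#define SUBVOL_COUNT 3 /* assumed replica count for this example */

struct fake_txn {
    int  op_ret;                       /* aggregated fop return value */
    int  op_errno;                     /* errno when op_ret < 0       */
    bool failed_subvols[SUBVOL_COUNT]; /* per-brick failure flags     */
};

static bool
any_subvol_failed(const struct fake_txn *txn)
{
    for (int i = 0; i < SUBVOL_COUNT; i++)
        if (txn->failed_subvols[i])
            return true;
    return false;
}

/* Should post-op ask the index translator to drop the name entry? */
static bool
should_delete_name_index(const struct fake_txn *txn)
{
    if (any_subvol_failed(txn))
        return false;   /* a brick still needs heal: keep the index */
    /* The fix: additionally require that the fop succeeded, so an
     * EEXIST-failed MKNOD (failed_subvols[] all zero) keeps the index. */
    return txn->op_ret == 0;
}

int
main(void)
{
    /* First MKNOD: one replica down, fop succeeds -> keep the index. */
    struct fake_txn first = {
        .op_ret = 0,
        .failed_subvols = { false, false, true },
    };

    /* Second MKNOD on the same shard: fails with EEXIST on the online
     * bricks; the symmetric error leaves failed_subvols[] all zero. */
    struct fake_txn second = { .op_ret = -1, .op_errno = EEXIST };

    printf("first MKNOD:  delete index? %s\n",
           should_delete_name_index(&first) ? "yes" : "no");
    printf("second MKNOD: delete index? %s\n",
           should_delete_name_index(&second) ? "yes" : "no");
    return 0;
}
```

As the commit message notes, keeping the index on such failures can occasionally leave a stale entry behind, but self-heal's crawl of the "entry-changes" directory deletes those.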