Bug 1408786
Summary: | With granular-entry-self-heal enabled, there is a GFID mismatch and the VM goes to a paused state after migrating to another host | ||
---|---|---|---|
Product: | [Community] GlusterFS | Reporter: | Krutika Dhananjay <kdhananj> |
Component: | replicate | Assignee: | Krutika Dhananjay <kdhananj> |
Status: | CLOSED CURRENTRELEASE | QA Contact: | |
Severity: | high | Docs Contact: | |
Priority: | unspecified | ||
Version: | 3.8 | CC: | amukherj, bugs, knarra, ksandha, nchilaka, rcyriac, rhinduja, rhs-bugs, sasundar, storage-qa-internal |
Target Milestone: | --- | Keywords: | Triaged |
Target Release: | --- | ||
Hardware: | x86_64 | ||
OS: | Linux | ||
Whiteboard: | |||
Fixed In Version: | glusterfs-3.8.8 | Doc Type: | If docs needed, set a value |
Doc Text: | Story Points: | --- | |
Clone Of: | 1408712 | Environment: | |
Last Closed: | 2017-01-16 12:27:41 UTC | Type: | Bug |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | Category: | --- | |
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: | |||
Bug Depends On: | 1408426, 1408712 | ||
Bug Blocks: | 1400057, 1408785 |
Description
Krutika Dhananjay 2016-12-27 08:06:57 UTC
REVIEW: http://review.gluster.org/16293 (cluster/afr: Fix missing name indices due to EEXIST error) posted (#2) for review on release-3.8 by Krutika Dhananjay (kdhananj)

COMMIT: http://review.gluster.org/16293 committed in release-3.8 by Pranith Kumar Karampuri (pkarampu)

------

commit 8e2eaa6ea495e151adf1eca9cdd17f0a9f1a1bfc
Author: Krutika Dhananjay <kdhananj>
Date: Mon Dec 26 21:08:03 2016 +0530

cluster/afr: Fix missing name indices due to EEXIST error

Backport of: http://review.gluster.org/16286

PROBLEM:

Consider a volume with granular-entry-heal and sharding enabled. When a replica is down and a shard is created as part of a write, the name index is correctly created under indices/entry-changes/<dot-shard-gfid>. Now, when a read on the same region triggers another MKNOD, the fop fails on the online bricks with EEXIST. By virtue of this being a symmetric error, the failed_subvols[] array is reset to all zeroes. Because of this, before post-op, the GF_XATTROP_ENTRY_OUT_KEY will be set, causing the name index, which was created in the previous MKNOD operation, to be wrongly deleted in THIS MKNOD operation.

FIX:

The ideal fix would have been for a transaction to delete the name index ONLY if it knows it is the one that created the index in the first place. This would involve gathering information from individual bricks as to whether THIS xattrop created the index, aggregating their responses and, based on the various possible combinations of responses, deciding whether to delete the index. This is rather complex.

A simpler fix is for post-op to examine local->op_ret, in the event of no failed_subvols, to figure out whether to delete the name index. This can occasionally lead to the creation of stale name indices, but they won't affect the IO path or mess with pending changelogs in any way, and self-heal, in its crawl of the "entry-changes" directory, will take care of deleting such indices.

Change-Id: Icc642a987d1b6a5097562315aecf1263ed35ceb6
BUG: 1408786
Signed-off-by: Krutika Dhananjay <kdhananj>
Reviewed-on: http://review.gluster.org/16293
Smoke: Gluster Build System <jenkins.org>
NetBSD-regression: NetBSD Build System <jenkins.org>
CentOS-regression: Gluster Build System <jenkins.org>
Reviewed-by: Pranith Kumar Karampuri <pkarampu>

This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.8.8, please open a new bug report.

glusterfs-3.8.8 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://lists.gluster.org/pipermail/announce/2017-January/000064.html
[2] https://www.gluster.org/pipermail/gluster-users/
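The following is a minimal, standalone sketch of the post-op decision described in the FIX section above; it is not the actual review.gluster.org/16293 patch. The fields op_ret and failed_subvols[] and the GF_XATTROP_ENTRY_OUT_KEY behaviour are taken from the commit message, while the struct layout, the helper name should_delete_name_index(), and CHILD_COUNT are simplified stand-ins invented for illustration.

```c
/*
 * Illustrative sketch only -- simplified stand-in types, not AFR's real
 * afr_local_t. Shows the decision the fix describes: delete the
 * entry-changes name index only when no subvol failed AND the fop itself
 * succeeded, so an EEXIST-failed MKNOD no longer removes an index that an
 * earlier transaction created.
 */
#include <stdbool.h>
#include <stdio.h>

#define CHILD_COUNT 3 /* hypothetical replica count for the example */

typedef struct {
    int op_ret;                                /* result of the entry fop        */
    int op_errno;
    unsigned char failed_subvols[CHILD_COUNT]; /* 1 => fop failed on that brick */
} local_t;

/* Should post-op request deletion of the name index
 * (the GF_XATTROP_ENTRY_OUT_KEY case described in the commit)? */
static bool
should_delete_name_index(const local_t *local)
{
    for (int i = 0; i < CHILD_COUNT; i++)
        if (local->failed_subvols[i])
            return false; /* a brick still needs the index for heal */

    /* All subvols "succeeded" symmetrically. Before the fix this alone was
     * enough to delete the index; with the fix, op_ret is also examined so
     * that a fop which failed symmetrically (e.g. MKNOD -> EEXIST) leaves
     * the previously created index alone. */
    return local->op_ret >= 0;
}

int
main(void)
{
    local_t eexist_mknod = { .op_ret = -1, .op_errno = 17 /* EEXIST */ };
    local_t good_mknod   = { .op_ret = 0 };

    printf("EEXIST mknod deletes index?     %s\n",
           should_delete_name_index(&eexist_mknod) ? "yes" : "no");
    printf("successful mknod deletes index? %s\n",
           should_delete_name_index(&good_mknod) ? "yes" : "no");
    return 0;
}
```

As the commit notes, the trade-off of this simpler check is that a stale name index may occasionally be left behind, which is harmless because the self-heal crawl of "entry-changes" removes it later.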