Bug 1408785
Summary: | With granular-entry-self-heal enabled, I see that there is a gfid mismatch and the VM goes to paused state after migrating to another host | ||
---|---|---|---|
Product: | [Community] GlusterFS | Reporter: | Krutika Dhananjay <kdhananj> |
Component: | replicate | Assignee: | Krutika Dhananjay <kdhananj> |
Status: | CLOSED CURRENTRELEASE | QA Contact: | |
Severity: | high | Docs Contact: | |
Priority: | unspecified | ||
Version: | 3.9 | CC: | amukherj, bugs, knarra, ksandha, nchilaka, rcyriac, rhinduja, rhs-bugs, sasundar, storage-qa-internal |
Target Milestone: | --- | Keywords: | Triaged |
Target Release: | --- | ||
Hardware: | x86_64 | ||
OS: | Linux | ||
Whiteboard: | |||
Fixed In Version: | glusterfs-3.9.1 | Doc Type: | If docs needed, set a value |
Doc Text: | | Story Points: | ---
Clone Of: | 1408712 | Environment: | |
Last Closed: | 2017-03-08 10:23:37 UTC | Type: | Bug |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: | |||
Bug Depends On: | 1408426, 1408712, 1408786 | ||
Bug Blocks: | 1400057 |
Description
Krutika Dhananjay
2016-12-27 08:04:45 UTC
REVIEW: http://review.gluster.org/16293 (cluster/afr: Fix missing name indices due to EEXIST error) posted (#1) for review on release-3.8 by Krutika Dhananjay (kdhananj)

REVIEW: http://review.gluster.org/16294 (cluster/afr: Fix missing name indices due to EEXIST error) posted (#1) for review on release-3.9 by Krutika Dhananjay (kdhananj)

COMMIT: http://review.gluster.org/16294 committed in release-3.9 by Pranith Kumar Karampuri (pkarampu)

------

commit 544f6ce9e7a249360166e98dd7df1b09f91717a9
Author: Krutika Dhananjay <kdhananj>
Date: Mon Dec 26 21:08:03 2016 +0530

cluster/afr: Fix missing name indices due to EEXIST error

Backport of: http://review.gluster.org/16286

PROBLEM:
Consider a volume with granular-entry-heal and sharding enabled. When a replica is down and a shard is created as part of a write, the name index is correctly created under indices/entry-changes/<dot-shard-gfid>. Now, when a read on the same region triggers another MKNOD, the fop fails on the online bricks with EEXIST. By virtue of this being a symmetric error, the failed_subvols[] array is reset to all zeroes. Because of this, before post-op, the GF_XATTROP_ENTRY_OUT_KEY will be set, causing the name index, which was created in the previous MKNOD operation, to be wrongly deleted in THIS MKNOD operation.

FIX:
The ideal fix would have been for a transaction to delete the name index ONLY if it knows it is the one that created the index in the first place. This would involve gathering information from the individual bricks as to whether THIS xattrop created the index, aggregating their responses and, based on the various possible combinations of responses, deciding whether to delete the index or not. This is rather complex. A simpler fix is for post-op to examine local->op_ret, in the event of no failed_subvols, to figure out whether to delete the name index or not. This can occasionally lead to the creation of stale name indices, but they won't affect the IO path or mess with pending changelogs in any way, and self-heal, in its crawl of the "entry-changes" directory, will take care to delete such indices.

Change-Id: I8c5c08b7a208e840b5970fe5699dabdaf751a150
BUG: 1408785
Signed-off-by: Krutika Dhananjay <kdhananj>
Reviewed-on: http://review.gluster.org/16294
Smoke: Gluster Build System <jenkins.org>
CentOS-regression: Gluster Build System <jenkins.org>
NetBSD-regression: NetBSD Build System <jenkins.org>
Reviewed-by: Pranith Kumar Karampuri <pkarampu>

This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.9.1, please open a new bug report.

glusterfs-3.9.1 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/gluster-users/2017-January/029725.html
[2] https://www.gluster.org/pipermail/gluster-users/
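
For readers unfamiliar with the AFR entry-transaction flow, below is a minimal sketch of the post-op decision described in the FIX section of the commit message above. It is not the actual glusterfs code: the type entry_txn_t, the function should_delete_name_index(), and SUBVOL_COUNT are hypothetical stand-ins for AFR's transaction state; only op_ret, failed_subvols[] and GF_XATTROP_ENTRY_OUT_KEY correspond to names mentioned in the commit message.

```c
/*
 * Illustrative sketch only -- NOT the actual AFR code. entry_txn_t and
 * should_delete_name_index() are hypothetical; they model the post-op
 * decision described in the FIX above.
 */
#include <errno.h>
#include <stdbool.h>
#include <stdio.h>

#define SUBVOL_COUNT 3 /* example: replica-3 volume */

typedef struct {
    int op_ret;                       /* result of the entry fop (e.g. MKNOD) */
    int op_errno;                     /* errno value when op_ret < 0 */
    int failed_subvols[SUBVOL_COUNT]; /* 1 => the fop failed on that brick */
} entry_txn_t;

/*
 * Decide whether post-op should request deletion of the name index
 * (i.e. whether to set GF_XATTROP_ENTRY_OUT_KEY in the xattrop).
 */
static bool
should_delete_name_index(const entry_txn_t *txn)
{
    /* If the fop failed on some brick, the entry still needs heal:
     * the name index must stay. */
    for (int i = 0; i < SUBVOL_COUNT; i++) {
        if (txn->failed_subvols[i])
            return false;
    }

    /* No failed subvols. Before the fix, the index was deleted here
     * unconditionally. With the fix, a symmetric failure such as EEXIST
     * (op_ret < 0 on all bricks) means THIS transaction did not create
     * the entry, so it must not delete the index created by the earlier
     * MKNOD that ran while the replica was down. */
    if (txn->op_ret < 0)
        return false;

    return true;
}

int main(void)
{
    /* Example: a second MKNOD on the same shard that failed everywhere
     * with EEXIST -- the scenario from the PROBLEM section. */
    const entry_txn_t racing_mknod = {
        .op_ret = -1,
        .op_errno = EEXIST,
        .failed_subvols = {0, 0, 0},
    };

    /* Prints "no": the name index is preserved for self-heal. */
    printf("delete index? %s\n",
           should_delete_name_index(&racing_mknod) ? "yes" : "no");
    return 0;
}
```

With a check along these lines, any stale name index that does get left behind is harmless: as the commit message notes, it does not affect the IO path or pending changelogs, and the self-heal crawl of the "entry-changes" directory removes it.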