Bug 1586020 - [GSS] Pending heals are not getting completed in CNS environment
Summary: [GSS] Pending heals are not getting completed in CNS environment
Alias: None
Product: GlusterFS
Classification: Community
Component: replicate
Version: mainline
Hardware: x86_64
OS: Linux
Target Milestone: ---
Assignee: Karthik U S
QA Contact:
Depends On:
Blocks: 1566336
Reported: 2018-06-05 10:43 UTC by Karthik U S
Modified: 2018-10-23 15:10 UTC (History)
15 users

Fixed In Version: glusterfs-5.0
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1566336
Last Closed: 2018-10-23 15:10:44 UTC
Regression: ---
Mount Type: ---
Documentation: ---
Verified Versions:


Comment 1 Karthik U S 2018-06-05 10:55:48 UTC

If an entry creation operation fails on a quorum number of bricks, the file is listed in the heal info output but does not get healed by the SHD.
This can happen because the file gets created on only one brick, and on that brick the pending markers are set on the file itself. No pending marker is set on the parent directory, so the SHD fails to heal the entry.
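The situation described above can be sketched as a small simulation. The dictionaries and function names below are hypothetical and only illustrate the reported behavior (pending/dirty markers on the file vs. on its parent); they are not the actual AFR code, which is C inside the glusterfs tree.

```python
# Hypothetical model of AFR changelog state, one dict per brick copy.
# heal info inspects markers on the file itself; the SHD's entry-heal
# crawl is triggered by markers on the parent directory.

def heal_info_lists(file_xattrs):
    """A file shows up in `heal info` if any brick carries
    pending or dirty markers on the file itself."""
    return any(x["pending"] or x["dirty"] for x in file_xattrs)

def shd_entry_heal_triggers(parent_xattrs):
    """The SHD launches an entry heal only when the parent directory
    carries pending entry markers or a dirty flag on some brick."""
    return any(x["pending"] or x["dirty"] for x in parent_xattrs)

# Entry creation succeeded on one brick of a 3-way replica (quorum failed):
file_xattrs = [{"pending": True, "dirty": False}]       # file exists on one brick
parent_xattrs = [{"pending": False, "dirty": False}] * 3  # no marker on the parent

print(heal_info_lists(file_xattrs))            # True  -> listed in heal info
print(shd_entry_heal_triggers(parent_xattrs))  # False -> SHD never heals it
```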

Comment 2 Worker Ant 2018-06-05 11:00:24 UTC
REVIEW: https://review.gluster.org/20153 (cluster/afr: Mark dirty for entry transactions for quorum failures) posted (#1) for review on master by Karthik U S

Comment 3 Worker Ant 2018-07-17 04:12:18 UTC
COMMIT: https://review.gluster.org/20153 committed in master by "Karthik U S" <ksubrahm@redhat.com> with a commit message- cluster/afr: Mark dirty for entry transactions for quorum failures

If an entry creation transaction fails on quorum number of bricks
it might end up setting the pending changelogs on the file itself
on the brick where it got created. But the parent does not have
any entry pending marker set. This will lead to the entry not
getting healed by the self heal daemon automatically.

For entry transactions mark dirty on the parent if it fails on
quorum number of bricks, so that the heal can do conservative
merge and entry gets healed by shd.

Change-Id: I56448932dd409b3ddb095e2ae32e037b6157a607
fixes: bz#1586020
Signed-off-by: karthik-us <ksubrahm@redhat.com>
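The fix described in the commit message can be sketched as follows. This is an illustrative simulation with hypothetical names, not the actual C patch at review.gluster.org/20153: on post-op, if the entry transaction failed quorum, the dirty flag is set on the parent so the SHD later performs a conservative merge.

```python
# Hypothetical sketch of the fix: mark the parent dirty when an
# entry transaction fails quorum, so entry heal gets triggered.

def quorum_met(successes, brick_count):
    # Simple majority quorum check (illustrative; real AFR quorum
    # is configurable via cluster.quorum-type/count).
    return successes > brick_count // 2

def entry_post_op(parent_xattrs, successes, brick_count):
    if not quorum_met(successes, brick_count):
        # The fix: set dirty on the parent's changelog so the SHD
        # picks the directory up and does a conservative merge.
        for x in parent_xattrs:
            x["dirty"] = True
    return parent_xattrs

parent = [{"pending": False, "dirty": False} for _ in range(3)]
entry_post_op(parent, successes=1, brick_count=3)
print(all(x["dirty"] for x in parent))  # True -> entry heal will run
```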

Comment 4 Shyamsundar 2018-10-23 15:10:44 UTC
This bug is being closed because a release that should address the reported issue has been made available. If the problem is still not fixed with glusterfs-5.0, please open a new bug report.

glusterfs-5.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://lists.gluster.org/pipermail/announce/2018-October/000115.html
[2] https://www.gluster.org/pipermail/gluster-users/
