Bug 1514388

Summary: default timeout of 5min not honored for analyzing split-brain files post setfattr replica.split-brain-heal-finalize
Product: [Community] GlusterFS
Component: replicate
Version: 3.10
Hardware: Unspecified
OS: Unspecified
Status: CLOSED CURRENTRELEASE
Severity: medium
Priority: high
Reporter: Karthik U S <ksubrahm>
Assignee: Karthik U S <ksubrahm>
CC: bugs, nchilaka, ravishankar, rhs-bugs, storage-qa-internal
Keywords: ZStream
Fixed In Version: glusterfs-3.10.8
Clone Of: 1503519
Bug Depends On: 1503519
Type: Bug
Last Closed: 2017-12-08 16:46:32 UTC

Comment 1 Worker Ant 2017-11-17 09:57:38 UTC
REVIEW: https://review.gluster.org/18796 (cluster/afr: Honor default timeout of 5min for analyzing split-brain files) posted (#1) for review on release-3.10 by Karthik U S

Comment 2 Worker Ant 2017-11-27 13:50:52 UTC
COMMIT: https://review.gluster.org/18796 committed in release-3.10 by "Karthik U S" <ksubrahm> with a commit message - cluster/afr: Honor default timeout of 5min for analyzing split-brain files

Problem:
After setting the split-brain-choice option to analyze the file and resolve
the split brain with the command
"setfattr -n replica.split-brain-choice -v "choiceX" <path-to-file>"
the file should be accessible from the mount only for the default timeout of
5 minutes. However, the timeout was not honored and the file remained
accessible even after it had expired.

Fix:
Call inode_invalidate() in afr_set_split_brain_choice_cbk() so that a cache
invalidation is triggered after the timer and the split-brain choice are
reset. Subsequent attempts to access the file then fail with EIO.

Change-Id: I698cb833676b22ff3e4c6daf8b883a0958f51a64
BUG: 1514388
Signed-off-by: karthik-us <ksubrahm>
(cherry picked from commit 933ec57ccda2c1ba5ce6f207313c3b6802e67ca3)
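
For illustration, here is a hedged sketch of the idea behind the fix, not the actual afr patch (per the commit message, the real change adds the inode_invalidate() call in afr_set_split_brain_choice_cbk()). The timer-arming helper, the reset_split_brain_choice() stand-in, and the assumption that the inode is passed as the timer callback's data pointer are all illustrative; inode_ref(), inode_unref(), inode_invalidate() and gf_timer_call_after() are the real libglusterfs calls.

/*
 * Hedged sketch of the idea behind the fix (not the actual patch).
 * Assumptions: the expiry callback receives the inode as its data
 * pointer, and reset_split_brain_choice() is a hypothetical stand-in
 * for the inode-ctx bookkeeping that AFR really does.
 */
#include <time.h>
#include "xlator.h"   /* xlator_t (glusterfs source-tree headers) */
#include "inode.h"    /* inode_t, inode_ref(), inode_unref(), inode_invalidate() */
#include "timer.h"    /* gf_timer_call_after() */

static void
reset_split_brain_choice (inode_t *inode)
{
        /* Hypothetical: forget the replica.split-brain-choice stored
         * for this inode (the real code clears the AFR inode ctx). */
        (void) inode;
}

/* Expiry callback: the 5-minute analysis window is over. */
static void
spb_choice_timeout_cbk (void *data)
{
        inode_t *inode = data;

        reset_split_brain_choice (inode);

        /* Invalidate the inode so cached data/attributes are dropped and
         * the next access goes back to AFR, which now returns EIO for
         * the still-split-brained file.  This is the call the fix adds. */
        inode_invalidate (inode);

        inode_unref (inode);
}

/* Arming the window (illustrative): when replica.split-brain-choice is
 * set, schedule the expiry 300 seconds (5 minutes) later.  Real code
 * would keep the returned gf_timer_t handle so it can cancel the timer
 * if the choice is changed or finalized earlier. */
static void
arm_spb_choice_timer (xlator_t *this, inode_t *inode)
{
        struct timespec delta = { .tv_sec = 300, .tv_nsec = 0 };

        gf_timer_call_after (this->ctx, delta, spb_choice_timeout_cbk,
                             inode_ref (inode));
}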

Comment 3 Shyamsundar 2017-12-08 16:46:32 UTC
This bug is being closed because a release that should address the reported issue has been made available. If the problem is still present with glusterfs-3.10.8, please open a new bug report.

glusterfs-3.10.8 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/announce/2017-December/000086.html
[2] https://www.gluster.org/pipermail/gluster-users/