Bug 1514419

Summary: gluster volume heal info split-brain needs to display the output of each brick in a streaming fashion instead of buffering and dumping it at the end
Product: [Community] GlusterFS Reporter: Karthik U S <ksubrahm>
Component: replicate    Assignee: Karthik U S <ksubrahm>
Status: CLOSED CURRENTRELEASE QA Contact:
Severity: high Docs Contact:
Priority: high    
Version: 3.13    CC: bugs, nchilaka, ravishankar, rhs-bugs, storage-qa-internal
Target Milestone: ---    Keywords: ZStream
Target Release: ---   
Hardware: Unspecified   
OS: Unspecified   
Whiteboard:
Fixed In Version: glusterfs-3.13.0 Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of: 1506104 Environment:
Last Closed: 2017-12-08 17:46:04 UTC Type: Bug
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: --- RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Embargoed:
Bug Depends On: 1506104    
Bug Blocks:    

Comment 1 Worker Ant 2017-11-17 11:16:56 UTC
REVIEW: https://review.gluster.org/18797 (cluster/afr: Print heal info split-brain output in stream fashion) posted (#1) for review on release-3.13 by Karthik U S

Comment 2 Worker Ant 2017-11-22 12:50:19 UTC
REVIEW: https://review.gluster.org/18842 (cluster/afr: Print heal info summary output in stream fashion) posted (#1) for review on release-3.13 by Karthik U S

Comment 3 Worker Ant 2017-11-27 18:13:32 UTC
COMMIT: https://review.gluster.org/18842 committed in release-3.13 by "Karthik U S" <ksubrahm> with a commit message- cluster/afr: Print heal info summary output in stream fashion

Problem:
The heal info summary was printed only at the end, after the crawl for
pending heal entries had completed on all the bricks.

Fix:
Print the output immediately after the crawl on each individual brick
completes, so that the CLI does not give the impression of being hung.
(A minimal sketch of this approach follows the commit message below.)

Change-Id: Ieaf5718736a7ee6837bac02bd30a95836e605dab
BUG: 1514419
Signed-off-by: karthik-us <ksubrahm>
(cherry picked from commit 77e3bc671aab2fda68ada53f38ec368b20675f59)
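For context, here is a minimal, self-contained C sketch of the streamed-per-brick reporting described above. It is not the actual glfs-heal/AFR code; names such as brick_t, crawl_pending_heal_entries() and NUM_BRICKS are hypothetical placeholders used only to contrast printing per brick against buffering everything until the end.

/*
 * Hypothetical sketch: report heal info per brick as soon as that
 * brick's crawl finishes, instead of collecting all results and
 * dumping them once every brick has been crawled.
 */
#include <stdio.h>

#define NUM_BRICKS 3

typedef struct {
        const char *path;
        int         pending;   /* entries needing heal, filled by the crawl */
} brick_t;

/* Placeholder for the per-brick crawl; assume it blocks until done. */
static void
crawl_pending_heal_entries(brick_t *brick)
{
        brick->pending = 0;    /* the real code walks the brick's heal indices */
}

int
main(void)
{
        brick_t bricks[NUM_BRICKS] = {
                { "server1:/bricks/b1", 0 },
                { "server2:/bricks/b2", 0 },
                { "server3:/bricks/b3", 0 },
        };

        for (int i = 0; i < NUM_BRICKS; i++) {
                crawl_pending_heal_entries(&bricks[i]);

                /* Fix illustrated here: print and flush right after this
                 * brick's crawl completes, rather than appending to a
                 * buffer and printing only after the loop ends. */
                printf("Brick %s\nNumber of entries: %d\n\n",
                       bricks[i].path, bricks[i].pending);
                fflush(stdout);
        }

        return 0;
}

Because each brick's section is flushed as soon as it is ready, the user sees partial output while later bricks are still being crawled, which is what avoids the "CLI appears hung" symptom reported in this bug.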

Comment 4 Shyamsundar 2017-12-08 17:46:04 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.13.0, please open a new bug report.

glusterfs-3.13.0 has been announced on the Gluster mailing lists [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/announce/2017-December/000087.html
[2] https://www.gluster.org/pipermail/gluster-users/