Bug 1559079

Summary: test ./tests/bugs/ec/bug-1236065.t is generating crash on build
Product: [Community] GlusterFS Reporter: Xavi Hernandez <jahernan>
Component: disperse Assignee: Xavi Hernandez <jahernan>
Status: CLOSED CURRENTRELEASE QA Contact:
Severity: unspecified Docs Contact:
Priority: unspecified    
Version: 4.0 CC: bugs, jahernan, moagrawa
Target Milestone: ---   
Target Release: ---   
Hardware: Unspecified   
OS: Unspecified   
Whiteboard:
Fixed In Version: glusterfs-4.0.2 Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of: 1558016 Environment:
Last Closed: 2018-05-07 15:15:28 UTC Type: Bug
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: --- RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Embargoed:
Bug Depends On: 1558016    
Bug Blocks:    

Comment 1 Worker Ant 2018-03-21 16:33:30 UTC
REVIEW: https://review.gluster.org/19756 (cluster/ec: fix SHD crash for null gfid's) posted (#1) for review on release-4.0 by Xavi Hernandez

Comment 2 Worker Ant 2018-03-22 18:44:09 UTC
COMMIT: https://review.gluster.org/19756 committed in release-4.0 by "Xavi Hernandez" <xhernandez> with a commit message- cluster/ec: fix SHD crash for null gfid's

When the self-heal daemon is doing a full sweep it uses readdirp to
get extra stat information from each file. This information is
obtained in two steps by the posix xlator: first the directory is
read to get the entries, and then each entry is stat'd to get the
additional info. Between these two steps it's possible that the file
is removed by the user, so the stat fails and the stat info is left
empty.

EC's heal daemon was using the gfid blindly, causing an assert failure
when protocol/client was trying to encode the gfid.

To fix the problem, a check has been added: if we detect a null gfid,
we simply ignore it and continue healing.

Backport of:
> BUG: 1558016

Change-Id: I2e4acdcecd0b6951055e50d1c37d686a2186a228
BUG: 1559079
Signed-off-by: Xavi Hernandez <xhernandez>
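
The commit message above describes a small guard rather than the full patch; the following is a minimal standalone sketch of that guard, not the actual xlator code. The type and function names (gfid_t, fake_entry, shd_process_entry) and the plain memcmp null check are hypothetical stand-ins; the real EC self-heal daemon would apply an equivalent check to the gfid carried in each readdirp entry's stat data, most likely via GlusterFS's own uuid helpers.

/*
 * Standalone sketch only, NOT the actual patch. Simplified stand-ins
 * for the real gf_dirent_t / iatt structures used by the heal daemon.
 */
#include <stdio.h>
#include <string.h>

typedef unsigned char gfid_t[16];       /* GlusterFS gfids are 16-byte UUIDs */

struct fake_entry {
    const char *name;
    gfid_t      gfid;                   /* all zeros if the stat step failed */
};

/* Returns 1 if the gfid is all zeros (i.e. stat info was left empty). */
static int gfid_is_null(const gfid_t gfid)
{
    static const gfid_t null_gfid;      /* zero-initialized */
    return memcmp(gfid, null_gfid, sizeof(gfid_t)) == 0;
}

/* Skip entries with a null gfid instead of passing them on to heal. */
static int shd_process_entry(const struct fake_entry *entry)
{
    if (gfid_is_null(entry->gfid)) {
        printf("skipping '%s': null gfid (file removed during readdirp)\n",
               entry->name);
        return 0;                       /* ignore and continue the sweep */
    }

    printf("healing '%s'\n", entry->name);
    return 0;
}

int main(void)
{
    struct fake_entry removed = { .name = "deleted-while-scanning" };
    struct fake_entry healthy = { .name = "still-there",
                                  .gfid = { 0xde, 0xad, 0xbe, 0xef } };

    shd_process_entry(&removed);
    shd_process_entry(&healthy);
    return 0;
}

Compiling and running this prints a skip message for the entry whose gfid was left all zeros and a heal message for the other, mirroring the "ignore it and continue healing" behaviour described in the commit message.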

Comment 3 Shyamsundar 2018-05-07 15:15:28 UTC
This bug is being closed because a release that should address the reported issue has been made available. If the problem is still not fixed with glusterfs-4.0.2, please open a new bug report.

glusterfs-4.0.2 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/announce/2018-April/000097.html
[2] https://www.gluster.org/pipermail/gluster-users/