Bug 1559079 - test ./tests/bugs/ec/bug-1236065.t is generating crash on build
Summary: test ./tests/bugs/ec/bug-1236065.t is generating crash on build
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: disperse
Version: 4.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: Xavi Hernandez
QA Contact:
URL:
Whiteboard:
Depends On: 1558016
Blocks:
 
Reported: 2018-03-21 16:30 UTC by Xavi Hernandez
Modified: 2018-05-07 15:15 UTC
CC: 3 users

Fixed In Version: glusterfs-4.0.2
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1558016
Environment:
Last Closed: 2018-05-07 15:15:28 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Comment 1 Worker Ant 2018-03-21 16:33:30 UTC
REVIEW: https://review.gluster.org/19756 (cluster/ec: fix SHD crash for null gfid's) posted (#1) for review on release-4.0 by Xavi Hernandez

Comment 2 Worker Ant 2018-03-22 18:44:09 UTC
COMMIT: https://review.gluster.org/19756 committed in release-4.0 by "Xavi Hernandez" <xhernandez> with the commit message: cluster/ec: fix SHD crash for null gfid's

When the self-heal daemon is doing a full sweep, it uses readdirp to
get extra stat information for each file. The posix xlator obtains this
information in two steps: first the directory is read to get the
entries, and then each entry is stat()'d to get the additional info.
Between these two steps the file may be removed by the user, in which
case the stat fails and the stat info is left empty.
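
The following is a minimal standalone sketch of that two-step pattern,
using plain POSIX calls rather than the actual posix xlator code. A
file deleted between the readdir and the stat makes the second step
fail, leaving the stat buffer zeroed, which is the analogue of the
null gfid described above:

    #include <dirent.h>
    #include <errno.h>
    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/stat.h>

    static void scan_dir(const char *path)
    {
        DIR *dir = opendir(path);
        struct dirent *entry;

        if (dir == NULL)
            return;

        /* Step 1: read the directory to get the entries. */
        while ((entry = readdir(dir)) != NULL) {
            struct stat st;

            memset(&st, 0, sizeof(st)); /* stat info starts out "empty" */

            /* Step 2: stat each entry. If the file was removed in
             * between, this fails and st stays zeroed. */
            if (fstatat(dirfd(dir), entry->d_name, &st,
                        AT_SYMLINK_NOFOLLOW) != 0) {
                printf("%s: stat failed (%s), stat info left empty\n",
                       entry->d_name, strerror(errno));
                continue;
            }
            printf("%s: ino=%ju\n", entry->d_name, (uintmax_t)st.st_ino);
        }
        closedir(dir);
    }

    int main(void)
    {
        scan_dir(".");
        return 0;
    }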

EC's heal daemon was using the gfid blindly, causing an assert failure
when protocol/client tried to encode the null gfid.

To fix the problem, a check has been added: if a null gfid is detected,
the entry is simply ignored and healing continues.
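
A minimal sketch of that guard is shown below. The names here are
illustrative (in GlusterFS the check is done with gf_uuid_is_null() on
the 16-byte gfid the stat step fills in); the surrounding types are
simplified for the example:

    #include <stdint.h>
    #include <string.h>

    /* Nonzero when every byte of the gfid is zero, i.e. the stat step
     * failed and the entry carries no valid gfid. */
    static int gfid_is_null(const uint8_t gfid[16])
    {
        static const uint8_t zero[16];

        return memcmp(gfid, zero, sizeof(zero)) == 0;
    }

    /* Hypothetical per-entry heal callback: entries with a null gfid
     * are skipped instead of being passed down to protocol/client,
     * where encoding them would trip the assertion. */
    static int heal_entry(const uint8_t gfid[16], const char *name)
    {
        (void)name;

        if (gfid_is_null(gfid))
            return 0; /* ignore this entry and continue healing */

        /* ... trigger heal for this gfid ... */
        return 0;
    }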

Backport of:
> BUG: 1558016

Change-Id: I2e4acdcecd0b6951055e50d1c37d686a2186a228
BUG: 1559079
Signed-off-by: Xavi Hernandez <xhernandez>

Comment 3 Shyamsundar 2018-05-07 15:15:28 UTC
This bug is being closed because a release that should address the reported issue has been made available. If the problem is still not fixed with glusterfs-4.0.2, please open a new bug report.

glusterfs-4.0.2 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/announce/2018-April/000097.html
[2] https://www.gluster.org/pipermail/gluster-users/

