Bug 1515163 - CentOS regression fails for tests/bugs/replicate/bug-1292379.t
Summary: CentOS regression fails for tests/bugs/replicate/bug-1292379.t
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: replicate
Version: mainline
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: Ravishankar N
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks: 1581219
 
Reported: 2017-11-20 09:55 UTC by Milind Changire
Modified: 2018-05-22 11:38 UTC
CC: 2 users

Fixed In Version: glusterfs-4.0.0
Clone Of:
Cloned to: 1581219
Environment:
Last Closed: 2018-03-15 11:21:35 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Milind Changire 2017-11-20 09:55:35 UTC
Description of problem:
Regression failure observed at https://build.gluster.org/job/centos6-regression/7534/console

Comment 1 Worker Ant 2018-01-12 04:55:07 UTC
REVIEW: https://review.gluster.org/19185 (tests: check volume status for shd being up) posted (#1) for review on master by Ravishankar N

Comment 2 Worker Ant 2018-01-12 05:56:47 UTC
COMMIT: https://review.gluster.org/19185 committed in master by "Ravishankar N" <ravishankar> with a commit message- tests: check volume status for shd being up

so that glusterd is also aware that shd is up and running.

While not reproducible locally, on the Jenkins slaves 'gluster vol heal patchy'
fails with "Self-heal daemon is not running. Check self-heal daemon log file.",
even though the afr_child_up_status_in_shd() checks before it had passed. The
shd log also shows the shd being up and connected to at least one brick before
the heal is launched.

Change-Id: Id3801fa4ab56a70b1f0bd6a7e240f69bea74a5fc
BUG: 1515163
Signed-off-by: Ravishankar N <ravishankar>
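
For reference, here is a minimal sketch of the kind of check the commit message describes, written in the style of the project's prove-based .t tests. The volume layout, the timeout variables and the helper names (afr_child_up_status_in_shd from the shared test includes, and glustershd_up_status as a volume-status parser) are assumptions for illustration; the actual contents of bug-1292379.t and of the merged change are what is in review 19185.

#!/bin/bash
. $(dirname $0)/../../include.rc
. $(dirname $0)/../../volume.rc

cleanup;

TEST glusterd
TEST pidof glusterd
TEST $CLI volume create $V0 replica 2 $H0:$B0/${V0}{0,1}
TEST $CLI volume start $V0

# What the test already verified: the shd's own view of both bricks is up.
EXPECT_WITHIN $CHILD_UP_TIMEOUT "1" afr_child_up_status_in_shd $V0 0
EXPECT_WITHIN $CHILD_UP_TIMEOUT "1" afr_child_up_status_in_shd $V0 1

# The extra wait the patch describes: glusterd itself must report the
# self-heal daemon as running in 'gluster volume status', because that
# is what the 'volume heal' command consults. Without this, heal can be
# rejected with "Self-heal daemon is not running." even though the shd
# process is up and connected.
EXPECT_WITHIN $PROCESS_UP_TIMEOUT "Y" glustershd_up_status

TEST $CLI volume heal $V0

cleanup;

The idea is that the two waits guard against different races: afr_child_up_status_in_shd() only proves the shd process sees its bricks, while glusterd's own service tracking decides whether the heal command is accepted, so the test has to wait on both before launching the heal.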

Comment 3 Shyamsundar 2018-03-15 11:21:35 UTC
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-4.0.0, please open a new bug report.

glusterfs-4.0.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/announce/2018-March/000092.html
[2] https://www.gluster.org/pipermail/gluster-users/

