Bug 1641344 - Spurious failures in bug-1637802-arbiter-stale-data-heal-lock.t
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: tests
Version: mainline
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: Ravishankar N
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks: 1641761 1641762 1641872
 
Reported: 2018-10-21 12:15 UTC by Ravishankar N
Modified: 2019-03-25 16:31 UTC
CC: 1 user

Fixed In Version: glusterfs-6.0
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Cloned As: 1641761 1641762 1641872
Environment:
Last Closed: 2019-03-25 16:31:24 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Ravishankar N 2018-10-21 12:15:11 UTC
Problem:
    https://review.gluster.org/#/c/glusterfs/+/21427/ seems to be failing
    this .t spuriously. On checking one of the failure logs, I see:

    22:05:44 Launching heal operation to perform index self heal on volume patchy has been unsuccessful:
    22:05:44 Self-heal daemon is not running. Check self-heal daemon log file.
    22:05:44 not ok 20 , LINENUM:38

    In glusterd log:
    [2018-10-18 22:05:44.298832] E [MSGID: 106301] [glusterd-syncop.c:1352:gd_stage_op_phase] 0-management: Staging of operation 'Volume Heal' failed on localhost : Self-heal daemon is not running. Check self-heal daemon log file

    But the tests that precede this one check, via a statedump, whether the shd is
    connected to the bricks; those checks succeeded, and the shd had even started
    healing. From glustershd.log:

    [2018-10-18 22:05:40.975268] I [MSGID: 108026] [afr-self-heal-common.c:1732:afr_log_selfheal] 0-patchy-replicate-0: Completed data selfheal on 3b83d2dd-4cf2-4ea3-a33e-4275be40f440. sources=[0] 1  sinks=2

    So the only explanation I can see for the heal CLI failing is a race: the shd
    has been spawned, but glusterd has not yet updated its in-memory state to
    record that it is up, and hence fails the CLI request.

    Fix:
    Check for shd up status before launching heal via CLI
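
    The fix amounts to waiting until glusterd reports the self-heal daemon as up
    before issuing the heal command. A minimal, self-contained sketch of that
    wait-until-up pattern follows; the `is_daemon_up` probe and its marker file
    are hypothetical stand-ins for the test framework's real shd status helper:

    ```shell
    #!/bin/sh
    # Sketch of the wait-until-up guard added before invoking the heal CLI.
    # is_daemon_up is a hypothetical probe that simulates daemon readiness
    # with a marker file; the real test queries shd status instead.
    MARKER=/tmp/shd_up_marker.$$

    is_daemon_up() {
        if [ -e "$MARKER" ]; then echo "Y"; else echo "N"; fi
    }

    # Poll a probe command until it prints the expected value or the timeout
    # (in seconds) expires -- the same semantics as EXPECT_WITHIN in .t files.
    expect_within() {
        timeout=$1; expected=$2; shift 2
        i=0
        while [ "$i" -lt "$timeout" ]; do
            [ "$("$@")" = "$expected" ] && return 0
            sleep 1
            i=$((i + 1))
        done
        return 1
    }

    touch "$MARKER"                       # simulate the shd coming up
    if expect_within 5 "Y" is_daemon_up; then
        echo "shd is up; safe to launch heal"
    fi
    rm -f "$MARKER"
    ```

    In the actual test this collapses to a single EXPECT_WITHIN line against the
    framework's shd status check; the sketch only illustrates the polling
    semantics that close the race window.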

Comment 1 Worker Ant 2018-10-21 12:17:59 UTC
REVIEW: https://review.gluster.org/21451 (tests: check for shd up status in bug-1637802-arbiter-stale-data-heal-lock.t) posted (#1) for review on master by Ravishankar N

Comment 2 Worker Ant 2018-10-22 13:49:30 UTC
COMMIT: https://review.gluster.org/21451 committed in master by "Pranith Kumar Karampuri" <pkarampu> with a commit message- tests: check for shd up status in bug-1637802-arbiter-stale-data-heal-lock.t

Problem:
https://review.gluster.org/#/c/glusterfs/+/21427/ seems to be failing
this .t spuriously. On checking one of the failure logs, I see:

22:05:44 Launching heal operation to perform index self heal on volume patchy has been unsuccessful:
22:05:44 Self-heal daemon is not running. Check self-heal daemon log file.
22:05:44 not ok 20 , LINENUM:38

In glusterd log:
[2018-10-18 22:05:44.298832] E [MSGID: 106301] [glusterd-syncop.c:1352:gd_stage_op_phase] 0-management: Staging of operation 'Volume Heal' failed on localhost : Self-heal daemon is not running. Check self-heal daemon log file

But the tests that precede this one check, via a statedump, whether the shd is
connected to the bricks; those checks succeeded, and the shd had even started
healing. From glustershd.log:

[2018-10-18 22:05:40.975268] I [MSGID: 108026] [afr-self-heal-common.c:1732:afr_log_selfheal] 0-patchy-replicate-0: Completed data selfheal on 3b83d2dd-4cf2-4ea3-a33e-4275be40f440. sources=[0] 1  sinks=2

So the only explanation I can see for the heal CLI failing is a race: the shd
has been spawned, but glusterd has not yet updated its in-memory state to
record that it is up, and hence fails the CLI request.

Fix:
Check for shd up status before launching heal via CLI

Change-Id: Ic88abf14ad3d51c89cb438db601fae4df179e8f4
fixes: bz#1641344
Signed-off-by: Ravishankar N <ravishankar>

Comment 3 Shyamsundar 2019-03-25 16:31:24 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report.

glusterfs-6.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html
[2] https://www.gluster.org/pipermail/gluster-users/

