Bug 1641761 - Spurious failures in bug-1637802-arbiter-stale-data-heal-lock.t
Summary: Spurious failures in bug-1637802-arbiter-stale-data-heal-lock.t
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: tests
Version: 4.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: Ravishankar N
QA Contact:
URL:
Whiteboard:
Depends On: 1641344 1641762 1641872
Blocks:
Reported: 2018-10-22 16:08 UTC by Ravishankar N
Modified: 2018-11-29 15:26 UTC (History)

Fixed In Version: glusterfs-4.1.6
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1641344
Environment:
Last Closed: 2018-11-29 15:26:07 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:



Description Ravishankar N 2018-10-22 16:08:02 UTC
+++ This bug was initially created as a clone of Bug #1641344 +++

Problem:
    https://review.gluster.org/#/c/glusterfs/+/21427/ seems to be failing
    this .t spuriously. On checking one of the failure logs, I see:

    22:05:44 Launching heal operation to perform index self heal on volume patchy has been unsuccessful:
    22:05:44 Self-heal daemon is not running. Check self-heal daemon log file.
    22:05:44 not ok 20 , LINENUM:38

    In glusterd log:
    [2018-10-18 22:05:44.298832] E [MSGID: 106301] [glusterd-syncop.c:1352:gd_stage_op_phase] 0-management: Staging of operation 'Volume Heal' failed on localhost : Self-heal daemon is not running. Check self-heal daemon log file

    But the tests that precede this one check, via a statedump, whether the shd
    is connected to the bricks; those checks succeeded, and the shd had even
    started healing. From glustershd.log:

    [2018-10-18 22:05:40.975268] I [MSGID: 108026] [afr-self-heal-common.c:1732:afr_log_selfheal] 0-patchy-replicate-0: Completed data selfheal on 3b83d2dd-4cf2-4ea3-a33e-4275be40f440. sources=[0] 1  sinks=2

    So the only explanation I can see for the CLI heal launch failing is a race:
    the shd has been spawned, but glusterd has not yet recorded in memory that it
    is up, and hence fails the CLI request.

    Fix:
    Check for shd up status before launching heal via CLI
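
    The fix follows the pattern the GlusterFS test suite uses for such races:
    poll a status helper until it reports the expected value instead of
    asserting immediately. A minimal, self-contained sketch of that
    retry-until-timeout pattern is below; the `shd_up_status` stand-in is
    illustrative, not the actual tests/volume.rc helper, which would parse
    `gluster volume status` or a statedump.

```shell
#!/bin/sh
# expect_within TIMEOUT EXPECTED CMD...: re-run CMD until its output
# matches EXPECTED or TIMEOUT seconds elapse. This mirrors the
# EXPECT_WITHIN idiom from the GlusterFS test harness.
expect_within() {
    timeout=$1; expected=$2; shift 2
    i=0
    while [ "$i" -lt "$timeout" ]; do
        got=$("$@")
        if [ "$got" = "$expected" ]; then
            echo "ok"
            return 0
        fi
        sleep 1
        i=$((i + 1))
    done
    echo "not ok: got '$got', wanted '$expected'"
    return 1
}

# Illustrative stand-in for "is the self-heal daemon up?"; a real test
# would query glusterd rather than hard-code the answer.
shd_up_status() { echo "Y"; }

# Wait for the shd to be up before launching heal via the CLI.
expect_within 5 "Y" shd_up_status
```

    The key point is that the heal command is only issued once glusterd's own
    view of the shd has caught up, closing the window the bug describes.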

--- Additional comment from Worker Ant on 2018-10-21 08:17:59 EDT ---

REVIEW: https://review.gluster.org/21451 (tests: check for shd up status in bug-1637802-arbiter-stale-data-heal-lock.t) posted (#1) for review on master by Ravishankar N

--- Additional comment from Worker Ant on 2018-10-22 09:49:30 EDT ---

COMMIT: https://review.gluster.org/21451 committed in master by "Pranith Kumar Karampuri" <pkarampu@redhat.com> with a commit message- tests: check for shd up status in bug-1637802-arbiter-stale-data-heal-lock.t

Problem:
https://review.gluster.org/#/c/glusterfs/+/21427/ seems to be failing
this .t spuriously. On checking one of the failure logs, I see:

22:05:44 Launching heal operation to perform index self heal on volume patchy has been unsuccessful:
22:05:44 Self-heal daemon is not running. Check self-heal daemon log file.
22:05:44 not ok 20 , LINENUM:38

In glusterd log:
[2018-10-18 22:05:44.298832] E [MSGID: 106301] [glusterd-syncop.c:1352:gd_stage_op_phase] 0-management: Staging of operation 'Volume Heal' failed on localhost : Self-heal daemon is not running. Check self-heal daemon log file

But the tests that precede this one check, via a statedump, whether the shd
is connected to the bricks; those checks succeeded, and the shd had even
started healing. From glustershd.log:

[2018-10-18 22:05:40.975268] I [MSGID: 108026] [afr-self-heal-common.c:1732:afr_log_selfheal] 0-patchy-replicate-0: Completed data selfheal on 3b83d2dd-4cf2-4ea3-a33e-4275be40f440. sources=[0] 1  sinks=2

So the only explanation I can see for the CLI heal launch failing is a race:
the shd has been spawned, but glusterd has not yet recorded in memory that it
is up, and hence fails the CLI request.

Fix:
Check for shd up status before launching heal via CLI

Change-Id: Ic88abf14ad3d51c89cb438db601fae4df179e8f4
fixes: bz#1641344
Signed-off-by: Ravishankar N <ravishankar@redhat.com>

Comment 1 Worker Ant 2018-10-22 16:10:06 UTC
REVIEW: https://review.gluster.org/21459 (tests: check for shd up status in bug-1637802-arbiter-stale-data-heal-lock.t) posted (#1) for review on release-4.1 by Ravishankar N

Comment 2 Worker Ant 2018-10-22 18:12:29 UTC
COMMIT: https://review.gluster.org/21459 committed in release-4.1 by "soumya k" <skoduri@redhat.com> with a commit message- tests: check for shd up status in bug-1637802-arbiter-stale-data-heal-lock.t

Problem:
https://review.gluster.org/#/c/glusterfs/+/21427/ seems to be failing
this .t spuriously. On checking one of the failure logs, I see:

22:05:44 Launching heal operation to perform index self heal on volume patchy has been unsuccessful:
22:05:44 Self-heal daemon is not running. Check self-heal daemon log file.
22:05:44 not ok 20 , LINENUM:38

In glusterd log:
[2018-10-18 22:05:44.298832] E [MSGID: 106301] [glusterd-syncop.c:1352:gd_stage_op_phase] 0-management: Staging of operation 'Volume Heal' failed on localhost : Self-heal daemon is not running. Check self-heal daemon log file

But the tests that precede this one check, via a statedump, whether the shd
is connected to the bricks; those checks succeeded, and the shd had even
started healing. From glustershd.log:

[2018-10-18 22:05:40.975268] I [MSGID: 108026] [afr-self-heal-common.c:1732:afr_log_selfheal] 0-patchy-replicate-0: Completed data selfheal on 3b83d2dd-4cf2-4ea3-a33e-4275be40f440. sources=[0] 1  sinks=2

So the only explanation I can see for the CLI heal launch failing is a race:
the shd has been spawned, but glusterd has not yet recorded in memory that it
is up, and hence fails the CLI request.

Fix:
Check for shd up status before launching heal via CLI

Change-Id: Ic88abf14ad3d51c89cb438db601fae4df179e8f4
fixes: bz#1641761
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
(cherry picked from commit 3dea105556130abd4da0fd3f8f2c523ac52398d1)

Comment 3 Shyamsundar 2018-11-29 15:26:07 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-4.1.6, please open a new bug report.

glusterfs-4.1.6 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://lists.gluster.org/pipermail/announce/2018-November/000116.html
[2] https://www.gluster.org/pipermail/gluster-users/
