Bug 1233036 - [AFR-V2] - Fix shd coredump from tests/bugs/glusterd/bug-948686.t
Summary: [AFR-V2] - Fix shd coredump from tests/bugs/glusterd/bug-948686.t
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: replicate
Version: 3.6.3
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: Krutika Dhananjay
QA Contact:
URL:
Whiteboard:
Depends On: 1229172 1233144
Blocks: glusterfs-3.6.4 1229550
 
Reported: 2015-06-18 06:29 UTC by Krutika Dhananjay
Modified: 2016-02-04 15:27 UTC
CC: 2 users

Fixed In Version: glusterfs-v3.6.4
Doc Type: Bug Fix
Doc Text:
Clone Of: 1229172
Environment:
Last Closed: 2016-02-04 15:27:21 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:



Description Krutika Dhananjay 2015-06-18 06:29:40 UTC
+++ This bug was initially created as a clone of Bug #1229172 +++

Description of problem:
http://www.gluster.org/pipermail/gluster-devel/2015-June/045499.html


Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:

--- Additional comment from Krutika Dhananjay on 2015-06-08 04:47:32 EDT ---

http://review.gluster.org/#/c/11119/

--- Additional comment from Anand Avati on 2015-06-08 23:24:33 EDT ---

COMMIT: http://review.gluster.org/11119 committed in master by Vijay Bellur (vbellur@redhat.com) 
------
commit 7ca78f7a6466a0f2ff19caff526f6560b5275f69
Author: Krutika Dhananjay <kdhananj@redhat.com>
Date:   Mon Jun 8 11:36:12 2015 +0530

    cluster/afr: Do not attempt entry self-heal if the last lookup on entry failed on src
    
    Test bug-948686.t was causing shd to dump core due to gfid being NULL.
    This was due to the volume being stopped while index heal's in progress,
    causing afr_selfheal_unlocked_lookup_on() to fail sometimes on the src brick
    with ENOTCONN. And when afr_selfheal_newentry_mark() copies the gfid off the
    src iatt, it essentially copies null gfid. This was causing the assertion
    as part of xattrop in protocol/client to fail.
    
    Change-Id: I237a0d6b1849e4c48d7645a2cc16d9bc1441ef95
    BUG: 1229172
    Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
    Reviewed-on: http://review.gluster.org/11119
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>

Comment 1 Anand Avati 2015-06-18 07:03:39 UTC
REVIEW: http://review.gluster.org/11309 (cluster/afr: Do not attempt entry self-heal if the last lookup on entry failed on src) posted (#1) for review on release-3.6 by Krutika Dhananjay (kdhananj@redhat.com)

Comment 2 Anand Avati 2015-06-19 06:38:13 UTC
COMMIT: http://review.gluster.org/11309 committed in release-3.6 by Raghavendra Bhat (raghavendra@redhat.com) 
------
commit d86a238f29c1519bad37bd38d12227bd69d1947f
Author: Krutika Dhananjay <kdhananj@redhat.com>
Date:   Mon Jun 8 11:36:12 2015 +0530

    cluster/afr: Do not attempt entry self-heal if the last lookup on entry failed on src
    
            Backport of: http://review.gluster.org/11119
    
    Test bug-948686.t was causing shd to dump core due to gfid being NULL.
    This was due to the volume being stopped while index heal's in progress,
    causing afr_selfheal_unlocked_lookup_on() to fail sometimes on the src brick
    with ENOTCONN. And when afr_selfheal_newentry_mark() copies the gfid off the
    src iatt, it essentially copies null gfid. This was causing the assertion
    as part of xattrop in protocol/client to fail.
    
    Change-Id: I81723567af824ce4a9fa37e309eeeab8404ac71e
    BUG: 1233036
    Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
    Reviewed-on: http://review.gluster.org/11309
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
    Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>

Comment 3 Kaushal 2016-02-04 15:27:21 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-v3.6.4, please open a new bug report.

glusterfs-v3.6.4 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://www.gluster.org/pipermail/gluster-users/2015-July/022826.html
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user

