Bug 1176543 - RDMA: GFAPI benchmark segfaults when run with more than 2 threads; no segfaults are seen over TCP
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: distribute
Version: mainline
Hardware: All
OS: Linux
Priority: urgent
Severity: high
Target Milestone: ---
Assignee: Anoop C S
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks: 1174466
 
Reported: 2014-12-22 09:59 UTC by Anoop C S
Modified: 2015-05-15 17:09 UTC
CC List: 10 users

Fixed In Version: glusterfs-3.7.0
Doc Type: Bug Fix
Doc Text:
Clone Of: 1174466
Environment:
Last Closed: 2015-05-15 17:09:18 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Comment 1 Anand Avati 2014-12-22 10:26:26 UTC
REVIEW: http://review.gluster.org/9322 (cluster/dht: Send a single CHILD_DOWN event from dht_notify) posted (#2) for review on master by Anoop C S (achiraya)

Comment 2 Anand Avati 2014-12-22 12:50:59 UTC
REVIEW: http://review.gluster.org/9322 (cluster/dht: Propagate an event only after hearing the same from all subvolumes) posted (#3) for review on master by Anoop C S (achiraya)

Comment 3 Anand Avati 2014-12-22 14:22:26 UTC
REVIEW: http://review.gluster.org/9322 (cluster/dht: Propagate an event only after hearing the same from all subvolumes) posted (#4) for review on master by Anoop C S (achiraya)

Comment 4 Anand Avati 2014-12-23 05:51:16 UTC
REVIEW: http://review.gluster.org/9322 (cluster/dht: Propagate an event only after hearing the same from all subvolumes) posted (#5) for review on master by Anoop C S (achiraya)

Comment 5 Anand Avati 2015-01-08 06:48:56 UTC
REVIEW: http://review.gluster.org/9322 (cluster/dht: Propagate an event only after hearing the same from all subvolumes) posted (#6) for review on master by Anoop C S (achiraya)

Comment 6 Anand Avati 2015-02-20 04:02:05 UTC
REVIEW: http://review.gluster.org/9322 (cluster/dht: Propagate an event only after hearing the same from all subvolumes) posted (#7) for review on master by Poornima G (pgurusid)

Comment 7 Anand Avati 2015-02-24 12:43:29 UTC
REVIEW: http://review.gluster.org/9322 (cluster/dht: Propagate an event only after hearing the same from all subvolumes) posted (#8) for review on master by Poornima G (pgurusid)

Comment 8 Anand Avati 2015-02-25 11:20:08 UTC
REVIEW: http://review.gluster.org/9322 (cluster/dht: Propagate an event only after hearing the same from all subvolumes) posted (#9) for review on master by Poornima G (pgurusid)

Comment 9 Anand Avati 2015-02-27 05:38:18 UTC
REVIEW: http://review.gluster.org/9322 (cluster/dht: Propagate an event only after hearing the same from all subvolumes) posted (#10) for review on master by Poornima G (pgurusid)

Comment 10 Anand Avati 2015-02-27 07:41:04 UTC
COMMIT: http://review.gluster.org/9322 committed in master by Raghavendra G (rgowdapp) 
------
commit fc214f0f90ab195b7542a18cc918db467f575b37
Author: Anoop C S <achiraya>
Date:   Sat Dec 20 12:22:02 2014 +0530

    cluster/dht: Propagate an event only after hearing the same from all subvolumes
    
    In dht_notify(), we propagate each event without checking whether
    all subvolumes have reported the same event earlier. As a result
    separate events are being forwarded for each dht-subvolume.
    
    This change is to make sure that we propagate a particular event
    only if all other subvolumes have already reported the same event
    once earlier.
    
    Change-Id: I6c73fa105e967f29648af9e2030f91a94f2df130
    BUG: 1176543
    Signed-off-by: Anoop C S <achiraya>
    Reviewed-on: http://review.gluster.org/9322
    Reviewed-by: Raghavendra G <rgowdapp>
    Tested-by: Raghavendra G <rgowdapp>
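
The committed patch is an event-aggregation fix in dht_notify(). As a rough illustration of the idea described in the commit message above (this is not the actual GlusterFS code; the struct, event constants and function names below are invented for the example), a translator with N subvolumes can count the child events it has received and forward a given event upward only once, after every subvolume has reported it:

    /*
     * Minimal sketch of "propagate an event only after hearing the same
     * from all subvolumes". All names here are illustrative assumptions,
     * not GlusterFS APIs.
     */
    #include <stdio.h>
    #include <stdbool.h>

    #define DEMO_EVENT_CHILD_UP   1
    #define DEMO_EVENT_CHILD_DOWN 2

    typedef struct {
        int subvolume_cnt;   /* total number of subvolumes */
        int up_count;        /* children that reported CHILD_UP */
        int down_count;      /* children that reported CHILD_DOWN */
    } demo_conf;

    /* Returns true when the event should be forwarded to the parent,
     * i.e. only after *all* subvolumes have reported the same event. */
    static bool demo_notify(demo_conf *conf, int event)
    {
        switch (event) {
        case DEMO_EVENT_CHILD_UP:
            conf->up_count++;
            return (conf->up_count == conf->subvolume_cnt);
        case DEMO_EVENT_CHILD_DOWN:
            conf->down_count++;
            return (conf->down_count == conf->subvolume_cnt);
        default:
            return true; /* pass through events we do not aggregate */
        }
    }

    int main(void)
    {
        demo_conf conf = { .subvolume_cnt = 3 };

        /* Three subvolumes come up one by one; only the last report
         * triggers a single upward CHILD_UP. */
        for (int i = 1; i <= conf.subvolume_cnt; i++) {
            bool fwd = demo_notify(&conf, DEMO_EVENT_CHILD_UP);
            printf("CHILD_UP from subvol %d -> propagate: %s\n",
                   i, fwd ? "yes" : "no");
        }
        return 0;
    }

Built with any C99 compiler, the sketch prints "propagate: yes" only for the last subvolume, mirroring the single upward event the patch aims for instead of one event per dht subvolume.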

Comment 11 Niels de Vos 2015-05-15 17:09:18 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.7.0, please open a new bug report.

glusterfs-3.7.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/10939
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user

