Bug 1114003 - Dist-geo-rep : worker was not restarted by monitor, after it died, and remained in zombie state.
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: geo-replication
Version: mainline
Hardware: x86_64
OS: Linux
Priority: high
Severity: high
Target Milestone: ---
Assignee: Aravinda VK
QA Contact:
URL:
Whiteboard:
Depends On: 1112582
Blocks:
 
Reported: 2014-06-27 12:34 UTC by Aravinda VK
Modified: 2014-11-11 08:36 UTC
CC: 8 users

Fixed In Version: glusterfs-3.6.0beta1
Clone Of: 1112582
Environment:
Last Closed: 2014-11-11 08:36:14 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Comment 1 Anand Avati 2014-06-27 12:36:13 UTC
REVIEW: http://review.gluster.org/8194 (geo-rep: Fix the fd leak in worker/agent spawn) posted (#1) for review on master by Aravinda VK (avishwan)

Comment 2 Anand Avati 2014-06-30 10:40:53 UTC
COMMIT: http://review.gluster.org/8194 committed in master by Venky Shankar (vshankar) 
------
commit 6a6bd449247cfed587922cbc1b6b54a1fa0301ad
Author: Aravinda VK <avishwan>
Date:   Fri Jun 27 17:52:25 2014 +0530

    geo-rep: Fix the fd leak in worker/agent spawn
    
    The worker and agent use a pipe to communicate; if the worker
    dies for some reason, the agent should get EOF and terminate.
    
    Each worker-agent pair is spawned in a thread. Due to a race,
    multiple workers on the same node can retain the pipe fd
    references of other workers, so an agent will not get EOF even
    when its worker dies.
    
    BUG: 1114003
    Change-Id: I36b9709b9392299483606bd3ef1db764fa3f2bff
    Signed-off-by: Aravinda VK <avishwan>
    Reviewed-on: http://review.gluster.org/8194
    Tested-by: Justin Clift <justin>
    Reviewed-by: Venky Shankar <vshankar>
    Tested-by: Venky Shankar <vshankar>
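The mechanism behind the fix can be illustrated with a small sketch (illustrative only, not the actual geo-rep code): a pipe delivers EOF to the reader only once *every* open reference to its write end is closed, so a write-end fd leaked to a sibling worker keeps the pipe alive after the owning worker exits.

```python
import os

# Minimal sketch of the fd-leak symptom fixed in this commit.
# The "agent" reads from a pipe whose write end is held by the "worker";
# worker death closes the write end and the agent sees EOF. A leaked
# duplicate of the write end (as a racing sibling spawn could retain)
# prevents that EOF from ever arriving.
def eof_seen(leak_write_end):
    r, w = os.pipe()
    leaked = os.dup(w) if leak_write_end else None  # sibling's stray ref
    os.close(w)  # the worker side goes away
    if leaked is None:
        data = os.read(r, 1)  # returns b'' immediately: EOF delivered
    else:
        # With the leaked ref still open, os.read() would block forever,
        # so just report that EOF cannot arrive and clean up.
        data = None
        os.close(leaked)
    os.close(r)
    return data == b''

print(eof_seen(False))  # True  - agent detects worker death
print(eof_seen(True))   # False - leaked fd keeps the pipe open
```

The fix accordingly ensures each spawned worker/agent pair does not retain pipe fds belonging to other pairs, so EOF is delivered reliably.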

Comment 3 Niels de Vos 2014-09-22 12:44:02 UTC
A beta release for GlusterFS 3.6.0 has been released [1]. Please verify whether the release solves this bug report for you. If the glusterfs-3.6.0beta1 release does not resolve this issue, leave a comment in this bug and move the status to ASSIGNED. If this release fixes the problem for you, leave a note and change the status to VERIFIED.

Packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure (possibly an "updates-testing" repository) for your distribution.

[1] http://supercolony.gluster.org/pipermail/gluster-users/2014-September/018836.html
[2] http://supercolony.gluster.org/pipermail/gluster-users/

Comment 4 Niels de Vos 2014-11-11 08:36:14 UTC
This bug is being closed because a release that should address the reported issue has been made available. If the problem is still not fixed with glusterfs-3.6.1, please reopen this bug report.

glusterfs-3.6.1 has been announced [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://supercolony.gluster.org/pipermail/gluster-users/2014-November/019410.html
[2] http://supercolony.gluster.org/mailman/listinfo/gluster-users

