Bug 1362070

Summary: [GSS] Rebalance crashed
Product: [Community] GlusterFS
Component: distribute
Version: 3.7.13
Reporter: Susant Kumar Palai <spalai>
Assignee: Susant Kumar Palai <spalai>
Status: CLOSED CURRENTRELEASE
Severity: high
Priority: high
CC: bugs, nbalacha, olim, rhs-bugs, spalai, storage-qa-internal
Keywords: ZStream
Hardware: Unspecified
OS: Unspecified
Type: Bug
Fixed In Version: glusterfs-3.7.15
Clone Of: 1359711
Bug Depends On: 1352805, 1359711
Last Closed: 2016-09-01 09:20:59 UTC

Comment 1 Vijay Bellur 2016-08-01 09:47:35 UTC
REVIEW: http://review.gluster.org/15062 (dht/rebalance: allocate migrator thread pool dynamically) posted (#1) for review on release-3.7 by Susant Palai (spalai)

Comment 2 Vijay Bellur 2016-08-04 07:02:18 UTC
REVIEW: http://review.gluster.org/15062 (dht/rebalance: allocate migrator thread pool dynamically) posted (#2) for review on release-3.7 by Susant Palai (spalai)

Comment 3 Vijay Bellur 2016-08-05 09:19:52 UTC
REVIEW: http://review.gluster.org/15062 (dht/rebalance: allocate migrator thread pool dynamically) posted (#3) for review on release-3.7 by Susant Palai (spalai)

Comment 4 Vijay Bellur 2016-08-05 10:55:43 UTC
COMMIT: http://review.gluster.org/15062 committed in release-3.7 by Raghavendra G (rgowdapp) 
------
commit 59378892641a10ba0268a044310264e52afe8ea0
Author: Susant Palai <spalai>
Date:   Thu Aug 4 12:31:24 2016 +0530

    dht/rebalance: allocate migrator thread pool dynamically
    
    Problem: The maximum number of migrator threads was statically set
    to 40, but the number of threads actually created during rebalance
    depends on how many cores the user's machine has. If the number of
    cores exceeds 40, a crash or memory corruption can occur.
    
    Fix: Make the migrator thread pool dynamic (a sketch of the idea
    follows the commit message below).
    
    > Change-Id: Ifbdac8a1a396363dd75e2f6bcb454070cfdbf839
    > BUG: 1362070
    > Reviewed-on: http://review.gluster.org/15000
    > Smoke: Gluster Build System <jenkins.org>
    > NetBSD-regression: NetBSD Build System <jenkins.org>
    > CentOS-regression: Gluster Build System <jenkins.org>
    > Reviewed-by: Raghavendra G <rgowdapp>
    (cherry picked from commit b8e8bfc7e4d3eaf76bb637221bc6392ec10ca54b)
    
    Change-Id: Ifbdac8a1a396363dd75e2f6bcb454070cfdbf839
    BUG: 1362070
    Signed-off-by: Susant Palai <spalai>
    Reviewed-on: http://review.gluster.org/15062
    NetBSD-regression: NetBSD Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.org>
    Smoke: Gluster Build System <jenkins.org>
    Reviewed-by: Raghavendra G <rgowdapp>
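
For context, the crash came from indexing a statically sized array of 40 thread handles with a per-core thread count. Below is a minimal, self-contained C sketch of the dynamic-allocation idea the commit message describes; migrator_worker and the standalone main() driver are illustrative assumptions and do not correspond to the actual dht/rebalance symbols in the patch.

    /*
     * Illustrative sketch only: size the migrator thread pool from the
     * detected core count instead of a fixed pthread_t tid[40] array.
     * migrator_worker and this standalone main() are hypothetical and
     * do not match the actual GlusterFS dht/rebalance code.
     */
    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    static void *
    migrator_worker (void *arg)
    {
        /* The real code would drain the file-migration queue here. */
        (void) arg;
        return NULL;
    }

    int
    main (void)
    {
        long nthreads = sysconf (_SC_NPROCESSORS_ONLN);
        if (nthreads < 1)
            nthreads = 1;

        /* Allocate exactly as many handles as threads we will start. */
        pthread_t *tids = calloc ((size_t) nthreads, sizeof (*tids));
        if (!tids)
            return EXIT_FAILURE;

        for (long i = 0; i < nthreads; i++) {
            if (pthread_create (&tids[i], NULL, migrator_worker, NULL) != 0) {
                nthreads = i; /* join only the threads actually started */
                break;
            }
        }

        for (long i = 0; i < nthreads; i++)
            pthread_join (tids[i], NULL);

        free (tids);
        printf ("joined %ld migrator threads\n", nthreads);
        return EXIT_SUCCESS;
    }

The point is that the pool size comes from sysconf() at runtime, so a machine with more than 40 cores can no longer overrun a fixed-size buffer; the actual fix applies the same idea inside the rebalance code.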

Comment 5 Kaushal 2016-09-01 09:20:59 UTC
This bug is being closed because a release that should address the reported issue is now available. If the problem persists with glusterfs-3.7.15, please open a new bug report.

glusterfs-3.7.15 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and on the update infrastructure for your distribution.

[1] https://www.gluster.org/pipermail/gluster-devel/2016-September/050714.html
[2] https://www.gluster.org/pipermail/gluster-users/
