Bug 1282322 - [GlusterD]: Volume start fails post add-brick on a volume which is not started
Summary: [GlusterD]: Volume start fails post add-brick on a volume which is not started
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: glusterd
Version: mainline
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Assignee: Mohammed Rafi KC
QA Contact:
URL:
Whiteboard:
Depends On: 1279319 1279351
Blocks:
 
Reported: 2015-11-16 05:48 UTC by Mohammed Rafi KC
Modified: 2016-06-16 13:44 UTC
CC List: 7 users

Fixed In Version: glusterfs-3.8rc2
Doc Type: Bug Fix
Doc Text:
Clone Of: 1279351
Environment:
Last Closed: 2016-06-16 13:44:23 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Comment 1 Vijay Bellur 2015-11-16 05:49:06 UTC
REVIEW: http://review.gluster.org/12552 (glusterd: brick failed to start) posted (#3) for review on master by mohammed rafi kc (rkavunga)

Comment 2 Vijay Bellur 2015-11-16 09:20:01 UTC
COMMIT: http://review.gluster.org/12552 committed in master by Atin Mukherjee (amukherj) 
------
commit 571cbcf56ef865d64ebdb1621c791fe467501e52
Author: Mohammed Rafi KC <rkavunga>
Date:   Mon Nov 9 16:43:21 2015 +0530

    glusterd: brick failed to start
    
    Brick volfiles are generated in the post-validate phase if the
    cluster is running a version higher than GLUSTER_3_7_5; otherwise
    they are generated in the syncop code path.
    
    If the code falls back to syncop and the volume is stopped, the
    operation returned without generating the brick volfiles, so a
    subsequent volume start failed.
    
    Change-Id: I3b16ee29de19c5d34e45d77d6b7e4b665c2a4653
    BUG: 1282322
    Signed-off-by: Mohammed Rafi KC <rkavunga>
    Reviewed-on: http://review.gluster.org/12552
    Tested-by: NetBSD Build System <jenkins.org>
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Atin Mukherjee <amukherj>
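
For context, a minimal reproduction sketch of the scenario in the summary; the volume name, hostnames, and brick paths below are hypothetical and not taken from this report:

    # create a plain distribute volume but do not start it
    gluster volume create testvol host1:/bricks/b1 host2:/bricks/b2
    # add a brick while the volume is still in the Created (not started) state
    gluster volume add-brick testvol host1:/bricks/b3
    # before the fix, this start could fail because the volfile for the
    # newly added brick was never generated in the syncop fallback path
    gluster volume start testvol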

Comment 3 Mike McCune 2016-03-28 22:22:31 UTC
This bug was accidentally moved from POST to MODIFIED due to an error in automation; please contact mmccune with any questions.

Comment 4 Niels de Vos 2016-06-16 13:44:23 UTC
This bug is being closed because a release that should address the reported issue is now available. If the problem is still not fixed with glusterfs-3.8.0, please open a new bug report.

glusterfs-3.8.0 has been announced on the Gluster mailing lists [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://blog.gluster.org/2016/06/glusterfs-3-8-released/
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user

