Bug 1313901 - glusterd: does not start
Summary: glusterd: does not start
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: glusterd
Version: mainline
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: Atin Mukherjee
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks: 1314164
 
Reported: 2016-03-02 15:10 UTC by Milind Changire
Modified: 2016-06-16 13:59 UTC
CC List: 4 users

Fixed In Version: glusterfs-3.8rc2
Clone Of:
Clones: 1314164
Environment:
Last Closed: 2016-06-16 13:59:06 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Milind Changire 2016-03-02 15:10:50 UTC
Description of problem:
glusterd does not start

Version-Release number of selected component (if applicable):
tested with HEAD at commit 2102010edab355ac9882eea41a46edaca8b9d02c

How reproducible:
always

Steps to Reproduce:
1.
2.
3.

Actual results:
log shows:
[2016-03-02 11:17:01.648022] D [MSGID: 0] [glusterd-utils.c:1002:glusterd_resolve_brick] 0-management: Returning -1

Expected results:


Additional info:
Kaushal has tentatively pointed to commit a60c39de

Comment 1 Atin Mukherjee 2016-03-03 04:49:30 UTC
Please provide the reproducer steps to validate it.

Comment 2 Milind Changire 2016-03-03 05:00:56 UTC
With the latest source pull, which put the mentioned commit at HEAD, I attempted a source install. Initially glusterd started, but mounting failed with an ERROR log from AFR asking "is op-version >= 30707?". Anuradha pointed me to a gluster-devel mail from Pranith discussing this issue, and I was advised to set cluster.op-version to 30707, which I did. This did not get my volume mounted, so I deleted and re-created the volume and was then able to mount it successfully while glusterd was running in the background. Later, I killed glusterd and attempted to restart it. This did not succeed, and that is the current situation.
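
For reference, the op-version bump described above is done through the gluster CLI. This is a sketch of the commands as typically run (paths and versions here reflect this report; the glusterd.info location may vary by distribution):

```
# Bump the cluster op-version so features gated on >= 30707 are enabled.
# "all" applies the setting cluster-wide rather than to a single volume.
gluster volume set all cluster.op-version 30707

# Verify the effective op-version recorded by glusterd.
grep operating-version /var/lib/glusterd/glusterd.info
```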

Kaushal has done a brief source review and has some comments which might be more helpful.

Comment 3 Vijay Bellur 2016-03-03 06:59:41 UTC
REVIEW: http://review.gluster.org/13588 (glusterd: Fix regression introduced by commit a60c39d) posted (#1) for review on master by Atin Mukherjee (amukherj)

Comment 4 Vijay Bellur 2016-03-03 08:14:28 UTC
REVIEW: http://review.gluster.org/13588 (glusterd: Avoid ret value of glusterd_resolve_brick in retrieve brick path) posted (#2) for review on master by Atin Mukherjee (amukherj)

Comment 5 Vijay Bellur 2016-03-03 11:35:44 UTC
COMMIT: http://review.gluster.org/13588 committed in master by Kaushal M (kaushal) 
------
commit 92273862decac2282b7f2a9183df3f139e5629a5
Author: Atin Mukherjee <amukherj>
Date:   Thu Mar 3 12:24:49 2016 +0530

    glusterd: Avoid ret value of glusterd_resolve_brick in retrieve brick path
    
    In glusterd_store_retrieve_bricks(), commit a60c39d introduced a
    glusterd_resolve_brick() call to resolve all the bricks, which is incorrect
    since the peerinfo list may not be constructed by that time. The requirement
    here was to get the local brick's uuid populated and match it with MY_UUID.
    
    The fix is to ignore the return code of glusterd_resolve_brick(), since
    failures to resolve non-local bricks at this stage are genuine and expected.
    
    Change-Id: I22822ae5b4e96fe4eacd50ea5c41e58061557106
    BUG: 1313901
    Signed-off-by: Atin Mukherjee <amukherj>
    Reviewed-on: http://review.gluster.org/13588
    Smoke: Gluster Build System <jenkins.com>
    Reviewed-by: Gaurav Kumar Garg <ggarg>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.com>
    Reviewed-by: Kaushal M <kaushal>

Comment 6 Niels de Vos 2016-06-16 13:59:06 UTC
This bug is being closed because a release that should address the reported issue has been made available. If the problem is still not fixed with glusterfs-3.8.0, please open a new bug report.

glusterfs-3.8.0 has been announced on the Gluster mailing lists [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://blog.gluster.org/2016/06/glusterfs-3-8-released/
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user

