Bug 1314164 - glusterd: does not start
Summary: glusterd: does not start
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: glusterd
Version: 3.7.8
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: Atin Mukherjee
QA Contact:
URL:
Whiteboard:
Depends On: 1313901
Blocks: glusterfs-3.7.9
 
Reported: 2016-03-03 06:24 UTC by Atin Mukherjee
Modified: 2016-04-19 07:22 UTC

Fixed In Version: glusterfs-3.7.9
Doc Type: Bug Fix
Doc Text:
Clone Of: 1313901
Environment:
Last Closed: 2016-04-19 07:22:11 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Atin Mukherjee 2016-03-03 06:24:16 UTC
+++ This bug was initially created as a clone of Bug #1313901 +++

Description of problem:
glusterd does not start

Version-Release number of selected component (if applicable):
tested with HEAD at commit 2102010edab355ac9882eea41a46edaca8b9d02c

How reproducible:
always

Steps to Reproduce:

Actual results:
log shows:
[2016-03-02 11:17:01.648022] D [MSGID: 0] [glusterd-utils.c:1002:glusterd_resolve_brick] 0-management: Returning -1

Expected results:


Additional info:
Kaushal has tentatively pointed to commit a60c39de

--- Additional comment from Atin Mukherjee on 2016-03-02 23:49:30 EST ---

Please provide the reproducer steps to validate it.

--- Additional comment from Milind Changire on 2016-03-03 00:00:56 EST ---

With the latest source pull, which put the mentioned commit at HEAD, I attempted a source install. Initially glusterd started, but mounting failed with an ERROR log from AFR asking "is op-version >= 30707?". Anuradha pointed me to a gluster-devel mail from Pranith discussing this issue, and I was advised to set cluster.op-version to 30707, which I did. This did not get my volume mounted, so I deleted and re-created the volume, after which I was able to mount it successfully while glusterd was running in the background. Later I killed glusterd and attempted to restart it. The restart did not succeed, and that is the current situation.

Kaushal has done a brief source review and has some comments which might be more helpful.
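(For reference, the op-version workaround described above amounts to a single CLI step. This is a sketch of the commands involved; the exact value 30707 comes from the AFR error message quoted in the comment.)

```shell
# Raise the cluster-wide op-version so newer AFR features are negotiated.
# 30707 corresponds to the version the AFR error log asked for.
gluster volume set all cluster.op-version 30707
```

After this, remounting (or, as in the comment above, re-creating) the volume lets clients negotiate the newer on-wire version.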

Comment 1 Vijay Bellur 2016-03-03 08:16:50 UTC
REVIEW: http://review.gluster.org/13589 (glusterd: Avoid ret value of glusterd_resolve_brick in retreive brick path) posted (#1) for review on release-3.7 by Atin Mukherjee (amukherj)

Comment 2 Vijay Bellur 2016-03-04 04:15:15 UTC
COMMIT: http://review.gluster.org/13589 committed in release-3.7 by Atin Mukherjee (amukherj) 
------
commit 81fae33cb7ce70f885ce52fa0cc71b3435333a53
Author: Atin Mukherjee <amukherj>
Date:   Thu Mar 3 12:24:49 2016 +0530

    glusterd: Avoid ret value of glusterd_resolve_brick in retreive brick path
    
    Backport of http://review.gluster.org/13588
    
    In glusterd_store_retrieve_bricks(), commit a60c39d introduced a
    glusterd_resolve_brick() call to resolve all the bricks. This is incorrect,
    since at that point the peerinfo list may not yet be constructed. The
    requirement here was only to get the local brick's uuid populated and
    match it against MY_UUID.
    
    The fix is to ignore the return code of glusterd_resolve_brick(), since
    failures to resolve non-local bricks at this stage are genuine and
    expected.
    
    Change-Id: I22822ae5b4e96fe4eacd50ea5c41e58061557106
    BUG: 1314164
    Signed-off-by: Atin Mukherjee <amukherj>
    Reviewed-on: http://review.gluster.org/13589
    Smoke: Gluster Build System <jenkins.com>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.com>

Comment 3 Kaushal 2016-04-19 07:22:11 UTC
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.7.9, please open a new bug report.

glusterfs-3.7.9 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://www.gluster.org/pipermail/gluster-users/2016-March/025922.html
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user

