Bug 1215518 - Glusterd crashed after updating to 3.8 nightly build
Summary: Glusterd crashed after updating to 3.8 nightly build
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: glusterd
Version: 3.7.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: Atin Mukherjee
QA Contact:
URL:
Whiteboard:
Depends On: 1213295
Blocks: qe_tracker_everglades glusterfs-3.7.0
 
Reported: 2015-04-27 05:01 UTC by Atin Mukherjee
Modified: 2015-05-14 17:35 UTC (History)
4 users

Fixed In Version: glusterfs-3.7.0beta1
Doc Type: Bug Fix
Doc Text:
Clone Of: 1213295
Environment:
Last Closed: 2015-05-14 17:27:26 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:


Attachments

Description Atin Mukherjee 2015-04-27 05:01:27 UTC
+++ This bug was initially created as a clone of Bug #1213295 +++

Description of problem:
=======================

Glusterd crashed after updating to the nightly build. The steps performed are below.

1. Packages are downloaded from http://download.gluster.org/pub/gluster/glusterfs/nightly/glusterfs/epel-6-x86_64/glusterfs-3.8dev-0.12.gitaa87c31.autobuild/
2. On a 4-node cluster, installed the rpms using "yum install glusterfs*".
3. One of the nodes started showing problems: it did not list the volume when "gluster volume status <volname>" was run, and asked to check the service.
4. Checked with "service glusterd status"; it reported "glusterd dead but pid file exists".
5. Tried to restart the glusterd service and to stop the volume from another node, and glusterd crashed.

Version-Release number of selected component (if applicable):
==============================================================

[root@vertigo ~]# gluster --version
glusterfs 3.8dev built on Apr 19 2015 01:13:06
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2011 Gluster Inc. <http://www.gluster.com>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU General Public License.

How reproducible:
=================
Tried once

Steps to Reproduce:
Same as in description.

Actual results:
===============
Glusterd crashed

Expected results:
=================
No crash should be seen

Additional info:
================
Attaching the corefile.

--- Additional comment from Bhaskarakiran on 2015-04-20 05:45:29 EDT ---

Steps I performed; the crash was seen immediately after "gluster v start":

[root@interstellar /]# gluster v status testvol
Volume testvol is not started
[root@interstellar /]# service glusterd status
glusterd (pid  4474) is running...
[root@interstellar /]# gluster v stop testvol
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
volume stop: testvol: failed: Volume testvol is not in the started state
[root@interstellar /]# 
[root@interstellar /]# 
[root@interstellar /]# gluster v start testvol
Connection failed. Please check if gluster daemon is operational.
[root@interstellar /]# service glusterd status
glusterd dead but pid file exists
[root@interstellar /]#

--- Additional comment from Atin Mukherjee on 2015-04-20 11:25:53 EDT ---

http://review.gluster.org/#/c/10304/ is posted for review

--- Additional comment from Anand Avati on 2015-04-21 00:06:32 EDT ---

REVIEW: http://review.gluster.org/10304 (glusterd: initialize snapd svc at volume restore path) posted (#2) for review on master by Atin Mukherjee (amukherj)

--- Additional comment from Anand Avati on 2015-04-21 00:52:25 EDT ---

REVIEW: http://review.gluster.org/10304 (glusterd: initialize snapd svc at volume restore path) posted (#3) for review on master by Atin Mukherjee (amukherj)

--- Additional comment from Anand Avati on 2015-04-24 01:48:47 EDT ---

REVIEW: http://review.gluster.org/10304 (glusterd: initialize snapd svc at volume restore path) posted (#4) for review on master by Atin Mukherjee (amukherj)


--- Additional comment from Anand Avati on 2015-04-27 00:55:03 EDT ---

COMMIT: http://review.gluster.org/10304 committed in master by Kaushal M (kaushal) 
------
commit 18fd2fdd60839d737ab0ac64f33a444b54bdeee4
Author: Atin Mukherjee <amukherj>
Date:   Mon Apr 20 17:37:21 2015 +0530

    glusterd: initialize snapd svc at volume restore path
    
    In the restore path the snapd svc was not initialized, because of which any
    glusterd instance which went down and came back may have an uninitialized
    snapd svc. The reason for 'may' is that it depends on the number of nodes
    in the cluster: in a single-node cluster this wouldn't be a problem, since
    glusterd_spawn_daemon takes care of initializing it.
    
    Change-Id: I2da1e419a0506d3b2742c1cf39a3b9416eb3c305
    BUG: 1213295
    Signed-off-by: Atin Mukherjee <amukherj>
    Reviewed-on: http://review.gluster.org/10304
    Tested-by: Gluster Build System <jenkins.com>
    Tested-by: NetBSD Build System
    Reviewed-by: Kaushal M <kaushal>

Comment 1 Anand Avati 2015-04-27 05:02:10 UTC
REVIEW: http://review.gluster.org/10397 (glusterd: initialize snapd svc at volume restore path) posted (#1) for review on release-3.7 by Atin Mukherjee (amukherj)

Comment 2 Anand Avati 2015-04-27 08:26:19 UTC
REVIEW: http://review.gluster.org/10397 (glusterd: initialize snapd svc at volume restore path) posted (#2) for review on release-3.7 by Atin Mukherjee (amukherj)

Comment 3 Anand Avati 2015-04-28 08:53:42 UTC
COMMIT: http://review.gluster.org/10397 committed in release-3.7 by Krishnan Parthasarathi (kparthas) 
------
commit 018a0a5b846ed903d5d2545c2c353281e1e9949d
Author: Atin Mukherjee <amukherj>
Date:   Mon Apr 20 17:37:21 2015 +0530

    glusterd: initialize snapd svc at volume restore path
    
    In the restore path the snapd svc was not initialized, because of which any
    glusterd instance which went down and came back may have an uninitialized
    snapd svc. The reason for 'may' is that it depends on the number of nodes
    in the cluster: in a single-node cluster this wouldn't be a problem, since
    glusterd_spawn_daemon takes care of initializing it.
    
    Backport of http://review.gluster.org/10304
    
    Change-Id: I2da1e419a0506d3b2742c1cf39a3b9416eb3c305
    BUG: 1215518
    Signed-off-by: Atin Mukherjee <amukherj>
    Reviewed-on: http://review.gluster.org/10304
    Tested-by: Gluster Build System <jenkins.com>
    Tested-by: NetBSD Build System
    Reviewed-by: Kaushal M <kaushal>
    (cherry picked from commit 18fd2fdd60839d737ab0ac64f33a444b54bdeee4)
    Reviewed-on: http://review.gluster.org/10397
    Reviewed-by: Krishnan Parthasarathi <kparthas>

Comment 4 Niels de Vos 2015-05-14 17:27:26 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.7.0, please open a new bug report.

glusterfs-3.7.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution.

[1] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/10939
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user


