Bug 1213295 - Glusterd crashed after updating to 3.8 nightly build
Summary: Glusterd crashed after updating to 3.8 nightly build
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: glusterd
Version: mainline
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: Atin Mukherjee
QA Contact:
URL:
Whiteboard:
Duplicates: 1215078
Depends On:
Blocks: qe_tracker_everglades 1215518
 
Reported: 2015-04-20 09:35 UTC by Bhaskarakiran
Modified: 2016-11-23 23:13 UTC (History)
CC List: 6 users

Fixed In Version: glusterfs-3.8rc2
Clone Of:
Cloned to: 1215518
Environment:
Last Closed: 2016-06-16 12:52:58 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:


Attachments
core file of the node crashed (663.14 KB, application/zip)
2015-04-20 09:35 UTC, Bhaskarakiran

Description Bhaskarakiran 2015-04-20 09:35:28 UTC
Created attachment 1016274 [details]
core file of the node crashed

Description of problem:
=======================

Glusterd crashed after updating to the nightly build. Here are the steps that were performed; a shell sketch of them follows the list.

1. Packages are downloaded from http://download.gluster.org/pub/gluster/glusterfs/nightly/glusterfs/epel-6-x86_64/glusterfs-3.8dev-0.12.gitaa87c31.autobuild/
2. On a 4-node cluster, installed the RPMs using yum install glusterfs*
3. One of the nodes started showing problems: it did not list the volume when gluster volume status <volname> was run, and asked to check the service.
4. Checking with service glusterd status showed glusterd dead but pid file exists.
5. Tried to restart the glusterd service and stop the volume from another node, and glusterd crashed.
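
A minimal shell sketch of the steps above, assuming a volume named testvol (as in comment 1) and the nightly RPMs already available to yum; the exact paths and repo setup are illustrative, not taken from the original report:

# on each of the 4 nodes: install/upgrade to the nightly build
yum install -y glusterfs*

# on the affected node: volume status fails and asks to check the service
gluster volume status testvol

# confirm the service state; it reported "glusterd dead but pid file exists"
service glusterd status

# restart glusterd on the affected node ...
service glusterd restart

# ... and stop the volume from another node of the cluster; glusterd then crashed
gluster volume stop testvol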

Version-Release number of selected component (if applicable):
==============================================================

[root@vertigo ~]# gluster --version
glusterfs 3.8dev built on Apr 19 2015 01:13:06
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2011 Gluster Inc. <http://www.gluster.com>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU General Public License.

How reproducible:
=================
Tried once

Steps to Reproduce:
Same as in description.

Actual results:
===============
Glusterd crashed

Expected results:
=================
No crash should be seen

Additional info:
================
Attaching the core file.
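
For anyone looking at the attached core, a hedged sketch of pulling a backtrace out of it with gdb (the core path is illustrative; the glusterd binary path assumes a standard RPM install, and glusterfs-debuginfo is needed for usable symbols):

# load the core against the glusterd binary
gdb /usr/sbin/glusterd /path/to/core
(gdb) bt                    # backtrace of the crashing thread
(gdb) thread apply all bt   # backtraces of all threads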

Comment 1 Bhaskarakiran 2015-04-20 09:45:29 UTC
Steps which I ran; the crash was seen immediately after the gluster v start. A way to pull the crash details from the glusterd log is sketched after this transcript.

[root@interstellar /]# gluster v status testvol
Volume testvol is not started
[root@interstellar /]# service glusterd status
glusterd (pid  4474) is running...
[root@interstellar /]# gluster v stop testvol
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
volume stop: testvol: failed: Volume testvol is not in the started state
[root@interstellar /]# 
[root@interstellar /]# 
[root@interstellar /]# gluster v start testvol
Connection failed. Please check if gluster daemon is operational.
[root@interstellar /]# service glusterd status
glusterd dead but pid file exists
[root@interstellar /]#
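
When glusterd dies like this, the crash backtrace is normally also written to the glusterd log. A quick way to look for it (log path assumes a default installation with the stock volfile name):

# default glusterd log location on the affected node
less /var/log/glusterfs/etc-glusterfs-glusterd.vol.log

# the crash dump usually starts with a "signal received" line
grep -A 20 "signal received" /var/log/glusterfs/etc-glusterfs-glusterd.vol.log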

Comment 2 Atin Mukherjee 2015-04-20 15:25:53 UTC
http://review.gluster.org/#/c/10304/ is posted for review

Comment 3 Anand Avati 2015-04-21 04:06:32 UTC
REVIEW: http://review.gluster.org/10304 (glusterd: initialize snapd svc at volume restore path) posted (#2) for review on master by Atin Mukherjee (amukherj)

Comment 4 Anand Avati 2015-04-21 04:52:25 UTC
REVIEW: http://review.gluster.org/10304 (glusterd: initialize snapd svc at volume restore path) posted (#3) for review on master by Atin Mukherjee (amukherj)

Comment 5 Anand Avati 2015-04-24 05:48:47 UTC
REVIEW: http://review.gluster.org/10304 (glusterd: initialize snapd svc at volume restore path) posted (#4) for review on master by Atin Mukherjee (amukherj)

Comment 6 Kaushal 2015-04-24 09:08:45 UTC
*** Bug 1215078 has been marked as a duplicate of this bug. ***

Comment 7 Anand Avati 2015-04-27 04:55:03 UTC
COMMIT: http://review.gluster.org/10304 committed in master by Kaushal M (kaushal) 
------
commit 18fd2fdd60839d737ab0ac64f33a444b54bdeee4
Author: Atin Mukherjee <amukherj>
Date:   Mon Apr 20 17:37:21 2015 +0530

    glusterd: initialize snapd svc at volume restore path
    
    In the restore path, the snapd svc was not initialized, because of which any
    glusterd instance which went down and came back up may have an uninitialized
    snapd svc. The reason for 'may' is that it depends on the number of nodes in
    the cluster: in a single-node cluster this wouldn't be a problem, since
    glusterd_spawn_daemon takes care of initializing it.
    
    Change-Id: I2da1e419a0506d3b2742c1cf39a3b9416eb3c305
    BUG: 1213295
    Signed-off-by: Atin Mukherjee <amukherj>
    Reviewed-on: http://review.gluster.org/10304
    Tested-by: Gluster Build System <jenkins.com>
    Tested-by: NetBSD Build System
    Reviewed-by: Kaushal M <kaushal>
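
A rough verification sketch for this fix, mirroring the trigger from comment 1 on a multi-node cluster with a volume testvol (which node plays which role is illustrative):

# on one node: restart glusterd so the volume goes through the restore path
service glusterd restart

# from another node: run the volume operations that previously crashed the restarted glusterd
gluster volume stop testvol
gluster volume start testvol

# back on the restarted node: glusterd should still be running
service glusterd status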

Comment 8 Nagaprasad Sathyanarayana 2015-10-25 14:54:40 UTC
A fix for this bug has already been made in a GlusterFS release. The cloned BZ has details of the fix and the release. Hence closing this mainline BZ.

Comment 9 Nagaprasad Sathyanarayana 2015-10-25 15:18:39 UTC
A fix for this BZ is already present in a GlusterFS release. The clone of this BZ, fixed in a GlusterFS release, has been closed. Hence closing this mainline BZ as well.

Comment 10 Niels de Vos 2016-06-16 12:52:58 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.8.0, please open a new bug report.

glusterfs-3.8.0 has been announced on the Gluster mailing lists [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://blog.gluster.org/2016/06/glusterfs-3-8-released/
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user

