Created attachment 1016274 [details]
core file of the node that crashed

Description of problem:
=======================
Glusterd crashed after updating to the nightly build. These are the steps that were performed:

1. Packages were downloaded from http://download.gluster.org/pub/gluster/glusterfs/nightly/glusterfs/epel-6-x86_64/glusterfs-3.8dev-0.12.gitaa87c31.autobuild/
2. On a 4-node cluster, the rpms were installed using yum install glusterfs*
3. One of the nodes started showing problems. It didn't list the volume when gluster volume status <volname> was given, and asked to check the service.
4. Checking with service glusterd status showed "glusterd dead but pid file exists".
5. Tried to restart the glusterd service and stop the volume from another node, and glusterd crashed.

Version-Release number of selected component (if applicable):
==============================================================
[root@vertigo ~]# gluster --version
glusterfs 3.8dev built on Apr 19 2015 01:13:06
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2011 Gluster Inc. <http://www.gluster.com>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU General Public License.

How reproducible:
=================
Tried once

Steps to Reproduce:
===================
Same as in the description.

Actual results:
===============
Glusterd crashed.

Expected results:
=================
No crash should be seen.

Additional info:
================
Attaching the core file.
Steps I performed; the crash was seen immediately after gluster v start:

[root@interstellar /]# gluster v status testvol
Volume testvol is not started

[root@interstellar /]# service glusterd status
glusterd (pid 4474) is running...

[root@interstellar /]# gluster v stop testvol
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
volume stop: testvol: failed: Volume testvol is not in the started state

[root@interstellar /]# gluster v start testvol
Connection failed. Please check if gluster daemon is operational.

[root@interstellar /]# service glusterd status
glusterd dead but pid file exists
A fix has been posted for review: http://review.gluster.org/#/c/10304/
REVIEW: http://review.gluster.org/10304 (glusterd: initialize snapd svc at volume restore path) posted (#2) for review on master by Atin Mukherjee (amukherj)
REVIEW: http://review.gluster.org/10304 (glusterd: initialize snapd svc at volume restore path) posted (#3) for review on master by Atin Mukherjee (amukherj)
REVIEW: http://review.gluster.org/10304 (glusterd: initialize snapd svc at volume restore path) posted (#4) for review on master by Atin Mukherjee (amukherj)
*** Bug 1215078 has been marked as a duplicate of this bug. ***
COMMIT: http://review.gluster.org/10304 committed in master by Kaushal M (kaushal)
------
commit 18fd2fdd60839d737ab0ac64f33a444b54bdeee4
Author: Atin Mukherjee <amukherj>
Date:   Mon Apr 20 17:37:21 2015 +0530

    glusterd: initialize snapd svc at volume restore path

    In the restore path the snapd svc was not initialized, because of which
    any glusterd instance which went down and came back up may have an
    uninitialized snapd svc. The reason for 'may' is that it depends on the
    number of nodes in the cluster: in a single-node cluster this wouldn't
    be a problem, since glusterd_spawn_daemon takes care of initializing it.

    Change-Id: I2da1e419a0506d3b2742c1cf39a3b9416eb3c305
    BUG: 1213295
    Signed-off-by: Atin Mukherjee <amukherj>
    Reviewed-on: http://review.gluster.org/10304
    Tested-by: Gluster Build System <jenkins.com>
    Tested-by: NetBSD Build System
    Reviewed-by: Kaushal M <kaushal>
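To make the crash mechanism concrete, here is a small, self-contained C sketch of the failure pattern the commit message describes. This is not GlusterFS source: every name in it (svc_t, volinfo_t, snapd_svc_init, restore_volume) is invented for illustration. The idea is that the snapd service object carries function pointers that only its init routine fills in; if the restore path skips that init after a glusterd restart, the next svc operation dereferences an uninitialized pointer and the daemon crashes.

/* Minimal illustration of the bug pattern from the commit message.
 * All names are invented for this sketch; this is not GlusterFS code. */
#include <stdio.h>
#include <stdlib.h>

typedef struct svc {
        const char *name;
        int (*stop) (struct svc *svc);   /* set only by the init routine */
} svc_t;

typedef struct volinfo {
        const char *volname;
        svc_t       snapd;               /* embedded service object */
} volinfo_t;

static int
svc_stop (svc_t *svc)
{
        printf ("stopping %s\n", svc->name);
        return 0;
}

/* The initializer that the volume-start path always runs. */
static void
snapd_svc_init (volinfo_t *vol)
{
        vol->snapd.name = "snapd";
        vol->snapd.stop = svc_stop;
}

/* Restore path: rebuilds volinfo from the on-disk store after a
 * restart. Before the fix it never initialized the snapd svc, so
 * vol->snapd.stop stayed NULL. */
static volinfo_t *
restore_volume (const char *volname, int apply_fix)
{
        volinfo_t *vol = calloc (1, sizeof (*vol));
        if (!vol)
                return NULL;
        vol->volname = volname;
        if (apply_fix)
                snapd_svc_init (vol);    /* the shape of the actual fix */
        return vol;
}

int
main (void)
{
        volinfo_t *vol = restore_volume ("testvol", 1);
        if (!vol)
                return 1;

        /* With apply_fix = 0 this would call a NULL function pointer
         * and crash, mirroring the glusterd segfault after restart. */
        vol->snapd.stop (&vol->snapd);

        free (vol);
        return 0;
}

Running with apply_fix set to 0 reproduces the NULL-pointer dereference; with the one-line init call in the restore path, the later svc operation succeeds, which matches the commit's description of initializing the snapd svc where volumes are restored.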
The fix for this bug has already been made available in a GlusterFS release. The cloned BZ has the details of the fix and the release. Hence, closing this mainline BZ.
The fix for this BZ is already present in a GlusterFS release. The clone of this BZ was fixed in a GlusterFS release and closed. Hence, closing this mainline BZ as well.
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.8.0, please open a new bug report.

glusterfs-3.8.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://blog.gluster.org/2016/06/glusterfs-3-8-released/
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user