Bug 1316116 - glusterd crash after running volume status detail command
Status: CLOSED CURRENTRELEASE
Product: GlusterFS
Classification: Community
Component: glusterd
Version: 3.5.7
Hardware: x86_64 Linux
Severity: high
Assigned To: Atin Mukherjee
Depends On:
Blocks:
Reported: 2016-03-09 07:59 EST by michael.martel
Modified: 2016-06-17 12:22 EDT (History)
2 users

See Also:
Fixed In Version: glusterfs-3.5.9
Doc Type: Bug Fix
Doc Text:
Last Closed: 2016-04-14 14:40:00 EDT
Type: Bug


Attachments
logfile from gluster server (17.07 KB, application/x-gzip)
2016-03-09 07:59 EST, michael.martel
stack trace from logfile (1.49 KB, text/plain)
2016-03-09 08:00 EST, michael.martel

Description michael.martel 2016-03-09 07:59:50 EST
Created attachment 1134513 [details]
logfile from gluster server

Description of problem:

After running gluster volume status VOLUME detail, the gluster daemon (glusterd) terminates.

root@gs1:~# gluster volume status gv0 detail
Connection failed. Please check if gluster daemon is operational.


Version-Release number of selected component (if applicable):

3.5.7
Running on Debian Wheezy


How reproducible:

root@gs1:~# gluster volume status gv0 detail
Connection failed. Please check if gluster daemon is operational.


Steps to Reproduce:
1. root@gs1:~# gluster volume status gv0 detail
Connection failed. Please check if gluster daemon is operational.

Actual results:

root@gs1:~# gluster volume status gv0 detail
Connection failed. Please check if gluster daemon is operational.


Expected results:

Unknown. I have never seen the output from the detail command.


Additional info:
Comment 1 michael.martel 2016-03-09 08:00 EST
Created attachment 1134514 [details]
stack trace from logfile
Comment 2 Vijay Bellur 2016-03-10 00:01:16 EST
REVIEW: http://review.gluster.org/13661 (glusterd: fix glusterd_add_inode_size_to_dict to avoid glusterd to crash) posted (#1) for review on release-3.5 by Atin Mukherjee (amukherj@redhat.com)
Comment 3 Vijay Bellur 2016-03-10 00:42:24 EST
REVIEW: http://review.gluster.org/13661 (glusterd: fix glusterd_add_inode_size_to_dict to avoid glusterd to crash) posted (#2) for review on release-3.5 by Niels de Vos (ndevos@redhat.com)
Comment 4 Vijay Bellur 2016-03-11 12:00:12 EST
COMMIT: http://review.gluster.org/13661 committed in release-3.5 by Niels de Vos (ndevos@redhat.com) 
------
commit 173eb1a056daef79cd593290250211d86de2cb82
Author: Atin Mukherjee <amukherj@redhat.com>
Date:   Thu Mar 10 10:19:44 2016 +0530

    glusterd: fix glusterd_add_inode_size_to_dict to avoid glusterd to crash
    
    Backport of
            http://review.gluster.org/#/c/8134/
            http://review.gluster.org/#/c/8492/
    
    A couple of backports from mainline were missed, due to which glusterd
    crashes if the underlying file system doesn't fall under the list of
    supported file systems. This patch takes care of handling this negative
    scenario.
    
    Reported-by: Michael Martel <michael.martel@vsc.edu>
    Change-Id: I6f601a4421869bbd7fc26e31f4ca4ffe075c0924
    BUG: 1316116
    Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
    Reviewed-on: http://review.gluster.org/13661
    NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
    Smoke: Gluster Build System <jenkins@build.gluster.com>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Niels de Vos <ndevos@redhat.com>
Comment 5 Niels de Vos 2016-04-14 14:40:00 EDT
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.5.9, please open a new bug report.

glusterfs-3.5.9 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution.

[1] http://www.gluster.org/pipermail/announce/2016-April/000055.html
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user
Comment 6 Niels de Vos 2016-06-17 12:22:42 EDT
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.5.9, please open a new bug report.

glusterfs-3.5.9 has been announced on the Gluster mailinglists [1] a while back, and packages for several distributions should be available by now.

[1] http://thread.gmane.org/gmane.comp.file-systems.gluster.maintainers/486
