Bug 1217311 - Disperse volume: gluster volume status doesn't show shd status
Summary: Disperse volume: gluster volume status doesn't show shd status
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: disperse
Version: mainline
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: Ashish Pandey
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks: qe_tracker_everglades 1224195 1228216
 
Reported: 2015-04-30 05:37 UTC by Bhaskarakiran
Modified: 2016-11-23 23:12 UTC
CC List: 7 users

Fixed In Version: glusterfs-3.8rc2
Clone Of:
: 1224195 1228216 (view as bug list)
Environment:
Last Closed: 2016-06-16 12:56:44 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Bhaskarakiran 2015-04-30 05:37:29 UTC
Description of problem:
=======================

The "gluster volume status" command does not list the self-heal daemon (shd) status for a disperse (EC) volume, even after cluster.disperse-self-heal-daemon has been enabled.

[root@vertigo ~]# gluster v status testvol
Status of volume: testvol
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick vertigo:/rhs/brick1/b1                49170     0          Y       23251
Brick ninja:/rhs/brick1/b2                  49166     0          Y       30404
Brick transformers:/rhs/brick1/b3           49160     0          Y       27340
Brick interstellar:/rhs/brick1/b4           49160     0          Y       29854
Brick vertigo:/rhs/brick2/b5                49171     0          Y       23269
Brick ninja:/rhs/brick2/b6                  49167     0          Y       30421
Brick transformers:/rhs/brick2/b7           49161     0          Y       27357
Brick interstellar:/rhs/brick2/b8           49161     0          Y       29871
Brick vertigo:/rhs/brick3/b9                49172     0          Y       20391
Brick ninja:/rhs/brick3/b10                 49168     0          Y       30438
Brick transformers:/rhs/brick3/b11          49162     0          Y       27374
Brick interstellar:/rhs/brick3/b12          49162     0          Y       29888
Brick vertigo:/rhs/brick4/b13               49174     0          Y       21396
Brick ninja:/rhs/brick4/b14                 49170     0          Y       31147
Brick transformers:/rhs/brick4/b15          49164     0          Y       28119
Brick interstellar:/rhs/brick4/b16          49164     0          Y       30528
Brick vertigo:/rhs/brick1/b17               49175     0          Y       21415
Brick ninja:/rhs/brick1/b18                 49171     0          Y       31166
Brick transformers:/rhs/brick1/b19          49165     0          Y       28138
Brick interstellar:/rhs/brick1/b20          49165     0          Y       30547
Brick vertigo:/rhs/brick2/b21               49176     0          Y       21435
Brick ninja:/rhs/brick2/b22                 49172     0          Y       31185
Brick transformers:/rhs/brick2/b23          49166     0          Y       28157
Brick interstellar:/rhs/brick2/b24          49166     0          Y       30566
Snapshot Daemon on localhost                49173     0          Y       20475
NFS Server on localhost                     2049      0          Y       24081
Quota Daemon on localhost                   N/A       N/A        Y       24135
Snapshot Daemon on transformers             49163     0          Y       27470
NFS Server on transformers                  2049      0          Y       30403
Quota Daemon on transformers                N/A       N/A        Y       30477
Snapshot Daemon on ninja                    49169     0          Y       30522
NFS Server on ninja                         N/A       N/A        N       N/A  
Quota Daemon on ninja                       N/A       N/A        Y       914  
Snapshot Daemon on interstellar             49163     0          Y       29975
NFS Server on interstellar                  2049      0          Y       32709
Quota Daemon on interstellar                N/A       N/A        Y       32764
 
Task Status of Volume testvol
------------------------------------------------------------------------------
Task                 : Rebalance           
ID                   : 08f87c28-cbcc-41eb-acab-09924f6dcd63
Status               : in progress         
 
[root@vertigo ~]# 
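As a quick programmatic check, the daemon rows in the status output above can be scanned for a Self-heal Daemon entry. A minimal Python sketch (the parsing helper and sample text are illustrative, not part of gluster; it relies only on the row label "Self-heal Daemon" that gluster prints for shd processes):

```python
# Check whether `gluster volume status` output contains a Self-heal Daemon row.
# The sample mirrors the buggy output in this report: daemon rows exist, but no shd row.
sample = """\
Snapshot Daemon on localhost                49173     0          Y       20475
NFS Server on localhost                     2049      0          Y       24081
Quota Daemon on localhost                   N/A       N/A        Y       24135
"""

def has_shd_row(status_output: str) -> bool:
    """Return True if any line reports a Self-heal Daemon process."""
    return any(line.startswith("Self-heal Daemon")
               for line in status_output.splitlines())

print(has_shd_row(sample))  # False: no shd row in the output shown in this bug
```

After the fix (review 10764), a row such as "Self-heal Daemon on localhost ..." is expected in the output, and the same check would return True.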

[root@vertigo ~]# gluster v info testvol
 
Volume Name: testvol
Type: Distributed-Disperse
Volume ID: e7979f7a-69c8-40ce-8541-2931fbf37d23
Status: Started
Number of Bricks: 2 x (8 + 4) = 24
Transport-type: tcp
Bricks:
Brick1: vertigo:/rhs/brick1/b1
Brick2: ninja:/rhs/brick1/b2
Brick3: transformers:/rhs/brick1/b3
Brick4: interstellar:/rhs/brick1/b4
Brick5: vertigo:/rhs/brick2/b5
Brick6: ninja:/rhs/brick2/b6
Brick7: transformers:/rhs/brick2/b7
Brick8: interstellar:/rhs/brick2/b8
Brick9: vertigo:/rhs/brick3/b9
Brick10: ninja:/rhs/brick3/b10
Brick11: transformers:/rhs/brick3/b11
Brick12: interstellar:/rhs/brick3/b12
Brick13: vertigo:/rhs/brick4/b13
Brick14: ninja:/rhs/brick4/b14
Brick15: transformers:/rhs/brick4/b15
Brick16: interstellar:/rhs/brick4/b16
Brick17: vertigo:/rhs/brick1/b17
Brick18: ninja:/rhs/brick1/b18
Brick19: transformers:/rhs/brick1/b19
Brick20: interstellar:/rhs/brick1/b20
Brick21: vertigo:/rhs/brick2/b21
Brick22: ninja:/rhs/brick2/b22
Brick23: transformers:/rhs/brick2/b23
Brick24: interstellar:/rhs/brick2/b24
Options Reconfigured:
features.uss: on
features.quota: on
server.event-threads: 3
client.event-threads: 4
cluster.disperse-self-heal-daemon: enable
[root@vertigo ~]# 

Version-Release number of selected component (if applicable):
=============================================================

[root@vertigo ~]# gluster --version
glusterfs 3.8dev built on Apr 28 2015 14:47:20
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2011 Gluster Inc. <http://www.gluster.com>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU General Public License.
[root@vertigo ~]# 

How reproducible:
=================
100%

Actual results:
===============
The self-heal daemon is not listed in the "gluster volume status" output for the disperse volume, even though cluster.disperse-self-heal-daemon is enabled.

Expected results:
=================
The self-heal daemon status (port, online state, PID) should be listed in the "gluster volume status" output, as it is for the other volume daemons.

Additional info:

Comment 1 Anand Avati 2015-05-14 06:06:34 UTC
REVIEW: http://review.gluster.org/10764 ( Added support to get status of Self Heal Daemon  for disperse volume. ("gluster volume status")) posted (#2) for review on master by Ashish Pandey (aspandey)

Comment 2 Anand Avati 2015-05-14 09:27:28 UTC
REVIEW: http://review.gluster.org/10764 ( Added support to get status of Self Heal Daemon  for disperse volume. ("gluster volume status")) posted (#3) for review on master by Ashish Pandey (aspandey)

Comment 5 Anand Avati 2015-06-10 11:04:14 UTC
REVIEW: http://review.gluster.org/10764 ( glusterd : Display status of Self Heal Daemon for disperse volume) posted (#4) for review on master by Ashish Pandey (aspandey)

Comment 6 Anand Avati 2015-08-25 17:48:27 UTC
COMMIT: http://review.gluster.org/10764 committed in master by Pranith Kumar Karampuri (pkarampu) 
------
commit 116e3dd3d7687a785b0d04a8afd11619d85ff4ec
Author: Ashish Pandey <aspandey>
Date:   Wed May 13 14:48:42 2015 +0530

     glusterd : Display status of Self Heal Daemon for disperse volume
    
     Problem : Status of Self Heal Daemon is not
     displayed in "gluster volume status"
    
     As disperse volumes are self heal compatible,
     show the status of self heal daemon in gluster
     volume status command
    
    Change-Id: I83d3e6a2fd122b171f15cfd76ce8e6b6e00f92e2
    BUG: 1217311
    Signed-off-by: Ashish Pandey <aspandey>
    Reviewed-on: http://review.gluster.org/10764
    Reviewed-by: Xavier Hernandez <xhernandez>
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Pranith Kumar Karampuri <pkarampu>

Comment 7 Mike McCune 2016-03-28 22:16:42 UTC
This bug was accidentally moved from POST to MODIFIED via an error in automation, please see mmccune with any questions

Comment 8 Niels de Vos 2016-06-16 12:56:44 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.8.0, please open a new bug report.

glusterfs-3.8.0 has been announced on the Gluster mailing lists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://blog.gluster.org/2016/06/glusterfs-3-8-released/
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user

