Bug 811539 - volume status output incorrect
Summary: volume status output incorrect
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: glusterd
Version: pre-2.0
Hardware: Unspecified
OS: Linux
Priority: unspecified
Severity: urgent
Target Milestone: ---
Assignee: Kaushal
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks: 817967
 
Reported: 2012-04-11 11:21 UTC by Ujjwala
Modified: 2013-08-20 00:09 UTC
CC: 3 users

Fixed In Version: glusterfs-3.4.0
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2013-07-24 17:21:16 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:



Description Ujjwala 2012-04-11 11:21:01 UTC
Description of problem:
When one of the volumes is stopped, the "gluster volume status" command gives incorrect output: it does not list all the volumes.
For example: in a cluster of 2 nodes, I have 5 volumes: dht, rep, dis-rep, new, omg. When I stop one of the volumes and run "gluster volume status", the following is the output.

Node 1: does not list anything
[root@gqac001 ~]# gluster volume status
Volume rep is not started

Node 2: lists only 3 volumes

[root@gqac002 ~]# gluster volume status

Status of volume: new
Gluster process                            Port    Online  Pid
------------------------------------------------------------------------------
Brick 10.16.157.0:/home/bricks/new/b1      24021   Y       25077
Brick 10.16.157.3:/home/bricks/new/b1      24022   Y       14671
NFS Server on localhost                    38467   Y       15042
Self-heal Daemon on localhost              N/A     Y       15048
Brick 10.16.157.0:/home/bricks/new/b2      24022   N       N/A
NFS Server on 10.16.157.0                  38467   Y       25424

Status of volume: omg
Gluster process                            Port    Online  Pid
------------------------------------------------------------------------------
Brick 10.16.157.0:/home/bricks/omg/b1      24024   Y       25364
Brick 10.16.157.3:/home/bricks/omg/b1      24021   Y       14996
NFS Server on localhost                    38467   Y       15042
Self-heal Daemon on localhost              N/A     Y       15048
Brick 10.16.157.0:/home/bricks/omg/b2      24025   Y       25369
NFS Server on 10.16.157.0                  38467   Y       25424

Status of volume: dis-rep
Gluster process                            Port    Online  Pid
------------------------------------------------------------------------------
Brick 10.16.157.0:/home/bricks/dis-rep/b1  24016   Y       25104
Brick 10.16.157.3:/home/bricks/dis-rep/b1  24016   Y       14685
Brick 10.16.157.0:/home/bricks/dis-rep/b2  24017   Y       25112
Brick 10.16.157.3:/home/bricks/dis-rep/b2  24017   Y       14694
Brick 10.16.157.0:/home/bricks/dis-rep/b3  24019   Y       25121
Brick 10.16.157.3:/home/bricks/dis-rep/b3  24019   Y       14703
NFS Server on localhost                    38467   Y       15042
Self-heal Daemon on localhost              N/A     Y       15048
NFS Server on 10.16.157.0                  38467   Y       25424
Self-heal Daemon on 10.16.157.0            N/A     Y       25430
Volume rep is not started
---------------------------------------------------------------------------

Version-Release number of selected component (if applicable):
3.3.0 qa34

How reproducible:
Tried 4-5 times

Steps to Reproduce:
1. Install the 3.3.0qa34 build.
2. Kill all the gluster processes and restart glusterd.
3. Stop one of the volumes and run 'gluster volume status'.
  
Actual results:
Volume status output is incomplete; the volumes after the stopped one are not listed.

Expected results:
Volume status should list all five volumes.


Additional info:

Comment 1 Anand Avati 2012-04-17 13:21:13 UTC
CHANGE: http://review.gluster.com/3130 (cli: Fix for "volume status all") merged in master by Vijay Bellur (vijay@gluster.com)
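For context, the symptom suggests the CLI aborted the "volume status all" listing as soon as it hit a stopped volume, hiding the volumes that came after it. A minimal sketch of that before/after pattern, in Python with hypothetical names (the actual GlusterFS CLI is written in C; this is illustrative only, not the patched source):

```python
def volume_status_all(volumes):
    """Sketch of the fixed "volume status all" loop: a stopped volume
    is reported and skipped, instead of aborting the whole listing.
    `volumes` is a hypothetical list of {"name": ..., "started": ...} dicts."""
    lines = []
    for vol in volumes:
        if not vol["started"]:
            # Pre-fix behaviour effectively ended the listing here, so
            # volumes after the stopped one were never shown.
            lines.append("Volume %s is not started" % vol["name"])
            continue  # post-fix: keep iterating over the remaining volumes
        lines.append("Status of volume: %s" % vol["name"])
    return lines
```

With the five volumes from this report, the fixed loop emits one line per volume even though `rep` is stopped, matching the expected results above.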

