Bug 811539 - volume status o/p incorrect
Status: CLOSED CURRENTRELEASE
Product: GlusterFS
Classification: Community
Component: glusterd
Version: pre-2.0
Hardware: Unspecified
OS: Linux
Priority: unspecified
Severity: urgent
Assigned To: Kaushal
Depends On:
Blocks: 817967
Reported: 2012-04-11 07:21 EDT by Ujjwala
Modified: 2013-08-19 20:09 EDT (History)
3 users

See Also:
Fixed In Version: glusterfs-3.4.0
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2013-07-24 13:21:16 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Ujjwala 2012-04-11 07:21:01 EDT
Description of problem:
When one of the volumes is stopped, the "gluster volume status" command gives incorrect output: it does not list all the volumes.
For example: in a cluster of 2 nodes, I have 5 volumes - dht, rep, dis-rep, new, omg. When I stop one of the volumes and run "gluster volume status", the following is the output.

Node 1: does not list anything
[root@gqac001 ~]# gluster volume status
Volume rep is not started

Node 2: lists only 3 volumes

[root@gqac002 ~]# gluster volume status

Status of volume: new
Gluster process                                Port     Online   Pid
------------------------------------------------------------------------------
Brick 10.16.157.0:/home/bricks/new/b1          24021    Y        25077
Brick 10.16.157.3:/home/bricks/new/b1          24022    Y        14671
NFS Server on localhost                        38467    Y        15042
Self-heal Daemon on localhost                  N/A      Y        15048
Brick 10.16.157.0:/home/bricks/new/b2          24022    N        N/A
NFS Server on 10.16.157.0                      38467    Y        25424

Status of volume: omg
Gluster process                                Port     Online   Pid
------------------------------------------------------------------------------
Brick 10.16.157.0:/home/bricks/omg/b1          24024    Y        25364
Brick 10.16.157.3:/home/bricks/omg/b1          24021    Y        14996
NFS Server on localhost                        38467    Y        15042
Self-heal Daemon on localhost                  N/A      Y        15048
Brick 10.16.157.0:/home/bricks/omg/b2          24025    Y        25369
NFS Server on 10.16.157.0                      38467    Y        25424

Status of volume: dis-rep
Gluster process                                Port     Online   Pid
------------------------------------------------------------------------------
Brick 10.16.157.0:/home/bricks/dis-rep/b1      24016    Y        25104
Brick 10.16.157.3:/home/bricks/dis-rep/b1      24016    Y        14685
Brick 10.16.157.0:/home/bricks/dis-rep/b2      24017    Y        25112
Brick 10.16.157.3:/home/bricks/dis-rep/b2      24017    Y        14694
Brick 10.16.157.0:/home/bricks/dis-rep/b3      24019    Y        25121
Brick 10.16.157.3:/home/bricks/dis-rep/b3      24019    Y        14703
NFS Server on localhost                        38467    Y        15042
Self-heal Daemon on localhost                  N/A      Y        15048
NFS Server on 10.16.157.0                      38467    Y        25424
Self-heal Daemon on 10.16.157.0                N/A      Y        25430
Volume rep is not started
---------------------------------------------------------------------------

Version-Release number of selected component (if applicable):
3.3.0 qa34

How reproducible:
Consistently; tried 4-5 times

Steps to Reproduce:
1. Install the 3.3.0qa34 build.
2. Kill all the gluster processes and restart glusterd.
3. Stop one of the volumes and run 'gluster volume status'.
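The steps above can be sketched as a shell session; this is a minimal reproduction sketch, assuming a two-node cluster with the five volumes from the description already created and started (the pkill pattern and init invocation are illustrative, not from the report):

```shell
# Reproduction sketch for bug 811539 (glusterfs 3.3.0qa34, run on one node).
# Assumes volumes dht, rep, dis-rep, new, omg already exist and are started.

# Step 2: kill all gluster processes, then restart the management daemon.
pkill -f gluster            # illustrative; stops glusterd/glusterfsd/glusterfs
/etc/init.d/glusterd start  # restart glusterd

# Step 3: stop one volume, then query status for all volumes.
gluster volume stop rep
gluster volume status
# Buggy behavior: only some volumes are listed, or only
# "Volume rep is not started" is printed.
```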
  
Actual results:
'gluster volume status' shows incorrect info: one node prints only "Volume rep is not started", and the other lists only 3 of the remaining 4 volumes.

Expected results:
Volume status should list all five volumes.


Additional info:
Comment 1 Anand Avati 2012-04-17 09:21:13 EDT
CHANGE: http://review.gluster.com/3130 (cli: Fix for "volume status all") merged in master by Vijay Bellur (vijay@gluster.com)
