Bug 978704 - quota: gluster volume status should provide info about quotad
Summary: quota: gluster volume status should provide info about quotad
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: glusterd
Version: 2.1
Hardware: x86_64
OS: Linux
Priority: high
Severity: medium
Target Milestone: ---
Assignee: Krutika Dhananjay
QA Contact: Saurabh
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2013-06-27 05:21 UTC by Saurabh
Modified: 2016-01-19 06:12 UTC
CC List: 7 users

Fixed In Version: glusterfs-3.4.0.34rhs
Doc Type: Bug Fix
Doc Text:
Previously, the volume status command did not display the status of the quota daemon. With this update, the volume status command displays the quota daemon status.
Clone Of:
Environment:
Last Closed: 2013-11-27 15:26:13 UTC
Embargoed:




Links
System: Red Hat Product Errata
ID: RHBA-2013:1769
Private: 0
Priority: normal
Status: SHIPPED_LIVE
Summary: Red Hat Storage 2.1 enhancement and bug fix update #1
Last Updated: 2013-11-27 20:17:39 UTC

Description Saurabh 2013-06-27 05:21:10 UTC
Description of problem:
Presently, the gluster volume status command provides the status of all the running Gluster processes in the cluster. It should take quotad into consideration as well.

[root@quota1 ~]# gluster volume status dist-rep 
Status of volume: dist-rep
Gluster process						Port	Online	Pid
------------------------------------------------------------------------------
Brick 10.70.37.98:/rhs/bricks/d1r1			49158	Y	5813
Brick 10.70.37.174:/rhs/bricks/d1r2			49158	Y	2544
Brick 10.70.37.136:/rhs/bricks/d2r1			49158	Y	2542
Brick 10.70.37.168:/rhs/bricks/d2r2			49158	Y	2633
Brick 10.70.37.98:/rhs/bricks/d3r1			49159	Y	5822
Brick 10.70.37.174:/rhs/bricks/d3r2			49159	Y	2553
Brick 10.70.37.136:/rhs/bricks/d4r1			49159	Y	2551
Brick 10.70.37.168:/rhs/bricks/d4r2			49159	Y	2642
Brick 10.70.37.98:/rhs/bricks/d5r1			49160	Y	5831
Brick 10.70.37.174:/rhs/bricks/d5r2			49160	Y	2562
Brick 10.70.37.136:/rhs/bricks/d6r1			49160	Y	2560
Brick 10.70.37.168:/rhs/bricks/d6r2			49160	Y	2651
NFS Server on localhost					2049	Y	18777
Self-heal Daemon on localhost				N/A	Y	18784
NFS Server on 6f39ec04-96fb-4aa7-b31a-29f11d34dac1	2049	Y	14804
Self-heal Daemon on 6f39ec04-96fb-4aa7-b31a-29f11d34dac1	N/A	Y	14814
NFS Server on 8ba345c1-0723-4c6c-a380-35f2d4c706c7	2049	Y	14759
Self-heal Daemon on 8ba345c1-0723-4c6c-a380-35f2d4c706c7	N/A	Y	14766
NFS Server on 54fb720d-b816-4e2a-833c-7fbffc5a5363	2049	Y	14838
Self-heal Daemon on 54fb720d-b816-4e2a-833c-7fbffc5a5363	N/A	Y	14846
 
There are no active volume tasks



Also, it would be good to add a separate option for quotad to volume status, like the ones that exist for the other processes:
volume status [all | <VOLNAME> [nfs|shd|<BRICK>]] [detail|clients|mem|inode|fd|callpool] - display status of all or specified volume(s)/brick
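
If such an option is added, the invocation would presumably mirror the existing nfs/shd keywords (hypothetical syntax; the actual option name depends on the eventual fix):

    gluster volume status dist-rep quotad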

Version-Release number of selected component (if applicable):
[root@quota1 ~]# rpm -qa | grep glusterfs
glusterfs-3.4rhs-1.el6rhs.x86_64
glusterfs-fuse-3.4rhs-1.el6rhs.x86_64
glusterfs-server-3.4rhs-1.el6rhs.x86_64

Comment 2 Krutika Dhananjay 2013-08-29 09:06:18 UTC
This bug has been fixed as part of quota build 1; hence, moving the bug to ON_QA. Now volume status displays the status of the quota daemon on every node in the cluster.
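
A minimal way to check this (hypothetical session; dist-rep is the volume from the report above, and it is assumed that quotad's command line contains the string "quotad"):

    gluster volume status dist-rep | grep "Quota Daemon"    # one row per node when quota is enabled
    pgrep -lf quotad                                        # the local quotad process, if running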

Comment 4 Gowrishankar Rajaiyan 2013-10-08 12:07:10 UTC
[root@ninja ~]#  gluster volume status vmstore
Status of volume: vmstore
Gluster process						Port	Online	Pid
------------------------------------------------------------------------------
Brick 10.70.34.68:/rhs1/vmstore				49152	Y	24767
Brick 10.70.34.56:/rhs1/vmstore				49152	Y	21672
NFS Server on localhost					2049	Y	27415
Self-heal Daemon on localhost				N/A	Y	27424
Quota Daemon on localhost				N/A	Y	27491
NFS Server on 10.70.34.56				2049	Y	24022
Self-heal Daemon on 10.70.34.56				N/A	Y	24031
Quota Daemon on 10.70.34.56				N/A	Y	24083
 
There are no active volume tasks
[root@ninja ~]# 


[root@ninja ~]# gluster volume quota snapstore disable
Disabling quota will delete all the quota configuration. Do you want to continue? (y/n) y
volume quota : success
[root@ninja ~]#  gluster volume status 
Status of volume: snapstore
Gluster process						Port	Online	Pid
------------------------------------------------------------------------------
Brick 10.70.34.68:/rhs2/snapstore			49153	Y	27400
Brick 10.70.34.56:/rhs2/snapstore			49153	Y	24007
NFS Server on localhost					2049	Y	27415
Self-heal Daemon on localhost				N/A	Y	27424
NFS Server on 10.70.34.56				2049	Y	24022
Self-heal Daemon on 10.70.34.56				N/A	Y	24031
 
There are no active volume tasks
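
As a quick cross-check (hypothetical follow-up to the session above), the Quota Daemon rows should no longer appear for snapstore once quota is disabled on it:

    gluster volume status snapstore | grep "Quota Daemon"   # expect no output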




Verified. 
Version: glusterfs-server-3.4.0.34rhs-1.el6rhs.x86_64

Comment 5 errata-xmlrpc 2013-11-27 15:26:13 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2013-1769.html

