Bug 1020929 - "volume quota <volume_name> list" doesn't give any output when bricks and glusterfs processes are not running on the node
Summary: "volume quota <volume_name> list" doesn't give any output when bricks and glu...
Keywords:
Status: CLOSED WORKSFORME
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: quota
Version: 2.1
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: high
Target Milestone: ---
Target Release: ---
Assignee: Vijaikumar Mallikarjuna
QA Contact: storage-qa-internal@redhat.com
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2013-10-18 14:26 UTC by spandura
Modified: 2016-09-17 12:41 UTC
CC: 6 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-04-20 09:02:10 UTC
Embargoed:



Description spandura 2013-10-18 14:26:17 UTC
Description of problem:
=======================
In a 6 x 2 distributed-replicate volume with quota enabled, spread across 4 storage nodes (node1, node2, node3, node4), the glusterfs and glusterfsd processes on node2 and node4 were killed.

Subsequently, "gluster volume quota <volume_name> list" was executed on node2, where the glusterfs and glusterfsd processes had been killed. The command does not give any output.

Version-Release number of selected component (if applicable):
===============================================================
glusterfs 3.4.0.35rhs built on Oct 15 2013 14:06:04

How reproducible:
================
Often

Steps to Reproduce:
===================
1. Create a 6 x 2 distributed-replicate volume from 4 nodes (3 bricks on each node). Start the volume.

2. Create a FUSE mount.

3. From the FUSE mount, start creating directories and files.

4. From one of the nodes, enable quota. Start setting limits on the directories from node1.

5. While the quota limits are being set, kill all glusterfs and glusterfsd processes on node2 and node4.

6. Execute "gluster v quota <volume_name> list" from node2 (a consolidated command sketch follows these steps).

Actual results:
================
The command prints only the table header and no quota entries, and has to be interrupted with Ctrl-C:

root@rhs-client12 [Oct-18-2013-13:45:39] >gluster v quota vol_dis_rep list 
                  Path                   Hard-limit Soft-limit   Used  Available
--------------------------------------------------------------------------------


^C
root@rhs-client12 [Oct-18-2013-14:17:19] >gluster v quota vol_dis_rep list 
                  Path                   Hard-limit Soft-limit   Used  Available
--------------------------------------------------------------------------------
^C
root@rhs-client12 [Oct-18-2013-14:17:31] >gluster v quota vol_dis_rep list 
                  Path                   Hard-limit Soft-limit   Used  Available
--------------------------------------------------------------------------------


Expected results:
===================
The command should display the disk limit information for all directories on which a limit has been set.
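
For reference, on a node where all processes are running, the same command displays one row per directory with a limit. The paths and sizes below are purely illustrative:

root@node1 >gluster v quota vol_dis_rep list
                  Path                   Hard-limit Soft-limit   Used  Available
--------------------------------------------------------------------------------
/dir1                                      1.0GB       80%      10.0MB  1014.0MB
/dir2                                      2.0GB       80%      0Bytes    2.0GB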

Additional info:
==================

root@rhs-client11 [Oct-18-2013-14:18:50] >gluster v info vol_dis_rep
 
Volume Name: vol_dis_rep
Type: Distributed-Replicate
Volume ID: 9b24b8e4-9e7d-4fca-936a-c716616001cc
Status: Started
Number of Bricks: 6 x 2 = 12
Transport-type: tcp
Bricks:
Brick1: rhs-client11:/rhs/bricks/brick1
Brick2: rhs-client12:/rhs/bricks/brick2
Brick3: rhs-client13:/rhs/bricks/brick3
Brick4: rhs-client14:/rhs/bricks/brick4
Brick5: rhs-client11:/rhs/bricks/brick5
Brick6: rhs-client12:/rhs/bricks/brick6
Brick7: rhs-client13:/rhs/bricks/brick7
Brick8: rhs-client14:/rhs/bricks/brick8
Brick9: rhs-client11:/rhs/bricks/brick9
Brick10: rhs-client12:/rhs/bricks/brick10
Brick11: rhs-client13:/rhs/bricks/brick11
Brick12: rhs-client14:/rhs/bricks/brick12
Options Reconfigured:
features.quota: on

root@rhs-client11 [Oct-18-2013-14:18:56] >gluster v status vol_dis_rep
Status of volume: vol_dis_rep
Gluster process						Port	Online	Pid
------------------------------------------------------------------------------
Brick rhs-client11:/rhs/bricks/brick1			49153	Y	6932
Brick rhs-client12:/rhs/bricks/brick2			N/A	N	N/A
Brick rhs-client13:/rhs/bricks/brick3			49152	Y	26741
Brick rhs-client14:/rhs/bricks/brick4			N/A	N	N/A
Brick rhs-client11:/rhs/bricks/brick5			49154	Y	6943
Brick rhs-client12:/rhs/bricks/brick6			N/A	N	N/A
Brick rhs-client13:/rhs/bricks/brick7			49153	Y	26752
Brick rhs-client14:/rhs/bricks/brick8			N/A	N	N/A
Brick rhs-client11:/rhs/bricks/brick9			49155	Y	6954
Brick rhs-client12:/rhs/bricks/brick10			N/A	N	N/A
Brick rhs-client13:/rhs/bricks/brick11			49154	Y	26763
Brick rhs-client14:/rhs/bricks/brick12			N/A	N	N/A
NFS Server on localhost					2049	Y	6966
Self-heal Daemon on localhost				N/A	Y	6975
Quota Daemon on localhost				N/A	Y	7690
NFS Server on rhs-client12				N/A	N	N/A
Self-heal Daemon on rhs-client12			N/A	N	N/A
Quota Daemon on rhs-client12				N/A	N	N/A
NFS Server on rhs-client14				N/A	N	N/A
Self-heal Daemon on rhs-client14			N/A	N	N/A
Quota Daemon on rhs-client14				N/A	N	N/A
NFS Server on rhs-client13				2049	Y	26776
Self-heal Daemon on rhs-client13			N/A	Y	26785
Quota Daemon on rhs-client13				N/A	Y	27447
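
As a triage aid, before running the list command it may help to confirm from the affected node which of the volume's processes are actually down (as the status output above shows for rhs-client12 and rhs-client14). A minimal check, assuming the same volume name:

# On the node where "quota list" produces no output (here rhs-client12)
gluster volume status vol_dis_rep
pgrep -l glusterfsd   # brick processes
pgrep -l glusterfs    # client-side daemons (NFS server, self-heal daemon, quota daemon)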

Comment 2 Vivek Agarwal 2015-04-20 09:02:10 UTC
Closing this per discussion with VijayM; we are not able to reproduce this. Please reopen if this is seen again.

