Bug 1001895

Summary: [RFE] quota build 3: list command should give some response if limit is not set
Product: [Red Hat Storage] Red Hat Gluster Storage
Reporter: Saurabh <saujain>
Component: glusterd
Assignee: Kaushal <kaushal>
Status: CLOSED ERRATA
QA Contact: Saurabh <saujain>
Severity: medium
Priority: medium
Version: 2.1
CC: grajaiya, kaushal, kparthas, mzywusko, psriniva, rhs-bugs, vagarwal, vbellur
Target Milestone: ---
Keywords: FutureFeature, ZStream
Target Release: RHGS 2.1.2
Hardware: x86_64
OS: Linux
Doc Type: Bug Fix
Doc Text:
Previously, the quota list command would give incorrect and inconsistent output in certain cases. With the fixes for this bug, the quota list command works consistently and as expected.
Last Closed: 2014-02-25 07:35:54 UTC
Type: Bug

Description Saurabh 2013-08-28 04:49:31 UTC
Description of problem:
Currently, the quota list command simply returns to the shell prompt without any output if no limit has been set on any path.
[root@rhsauto032 glusterd]# gluster volume quota dist-rep3 list
[root@rhsauto032 glusterd]# 


Instead, it should print a message such as
"Need to set quota limit"

Version-Release number of selected component (if applicable):
glusterfs-libs-3.4.0.20rhsquota5-1.el6rhs.x86_64
glusterfs-api-3.4.0.20rhsquota5-1.el6rhs.x86_64
glusterfs-geo-replication-3.4.0.20rhsquota5-1.el6rhs.x86_64
glusterfs-server-3.4.0.20rhsquota5-1.el6rhs.x86_64
glusterfs-3.4.0.20rhsquota5-1.el6rhs.x86_64
glusterfs-rdma-3.4.0.20rhsquota5-1.el6rhs.x86_64


How reproducible:
always
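
For reference, a minimal command sequence that reproduced the symptom before the fix. This is only a sketch: the volume name "testvol" and the brick path are illustrative, and any started volume with quota enabled but no limits set behaves the same way.

# gluster volume create testvol <hostname>:/bricks/testvol
# gluster volume start testvol
# gluster volume quota testvol enable
# gluster volume quota testvol list

The last command printed nothing and simply returned to the shell prompt.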

Comment 7 Gowrishankar Rajaiyan 2013-10-08 10:39:55 UTC
[root@ninja ~]# gluster volume quota snapstore list 
quota command failed : Volume snapstore is not started.
quota: No quota configured on volume snapstore
[root@ninja ~]#


[root@ninja ~]# gluster volume quota snapstore list /
quota command failed : Volume snapstore is not started.
                  Path                   Hard-limit Soft-limit   Used  Available
--------------------------------------------------------------------------------
/                                        No such file or directory
[root@ninja ~]# 

Version: glusterfs-server-3.4.0.34rhs-1.el6rhs.x86_64

Comment 9 Gowrishankar Rajaiyan 2013-10-08 11:10:02 UTC
1. When quota is not configured, then irrespective of the path given to list, it should just say "quota: No quota configured on volume vol-name".


(In reply to Kaushal from comment #8)
> @Shanks,
> I'm fixing the issue of the CLI attempting to print lists when the volume is not
> started or quota is not enabled as part of bug-1000936, which should be
> available in the next build. Until then, to verify this bug, please start
> the volume and enable quota on it.
> 
> Can you do this and reverify again?


2. When quota is enabled:
[root@ninja ~]# gluster volume start snapstore
volume start: snapstore: success
[root@ninja ~]#

[root@ninja ~]# gluster volume quota snapstore enable
volume quota : success

[root@ninja ~]# gluster volume quota snapstore list /
                  Path                   Hard-limit Soft-limit   Used  Available
--------------------------------------------------------------------------------
/                                        No such file or directory
[root@ninja ~]# 


This still fails QA: "/" certainly exists, since it is the volume root, yet "list" reports "No such file or directory".

Version: glusterfs-server-3.4.0.34rhs-1.el6rhs.x86_64

Comment 10 Kaushal 2013-10-08 11:41:48 UTC
Regarding 1,
As I said earlier, I'm working on another patch which does just that, as a fix for bug-1000936. I won't be making any changes for it as part of this bug report.

As for 2,

I'm getting the correct output, 'Limit not set' for '/'.

[root@minion2 ~]# gluster volume start test
volume start: test: success

[root@minion2 ~]# gluster volume quota test enable
volume quota : success

[root@minion2 ~]# gluster volume quota test list /
                  Path                   Hard-limit Soft-limit   Used  Available
--------------------------------------------------------------------------------
/                                        Limit not set

[root@minion2 ~]# gluster volume info test
 
Volume Name: test
Type: Distribute
Volume ID: 78c9ce65-43d6-4b6f-b6b5-b3e2035fd18c
Status: Started
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: minion2.usersys.redhat.com:/brick/test
Options Reconfigured:
features.quota: on

[root@minion2 ~]# gluster --version
glusterfs 3.4.0.34rhs built on Oct  7 2013 13:34:52
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2011 Gluster Inc. <http://www.gluster.com>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU General Public License.


The volume used here is a single-node, single-brick volume, but that shouldn't affect what this fix does. Can you give more information about your setup?

Comment 11 Gowrishankar Rajaiyan 2013-10-08 17:48:54 UTC
I don't see it on a new volume. :(

[root@ninja ~]# gluster volume create foo 10.70.34.68:/rhs2/foo
volume create: foo: success: please start the volume to access data
[root@ninja ~]# gluster vol start foo
volume start: foo: success
[root@ninja ~]# gluster vol quota foo enable
volume quota : success
[root@ninja ~]# gluster vol quota foo list /
                  Path                   Hard-limit Soft-limit   Used  Available
--------------------------------------------------------------------------------
/                                        Limit not set
[root@ninja ~]#

Comment 12 Gowrishankar Rajaiyan 2013-10-08 19:13:01 UTC
Hopefully this sheds some light:

[root@ninja ~]# gluster vol stop snapstore 
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
volume stop: snapstore: failed: geo-replication sessions are active for the volume 'snapstore'.
Use 'volume geo-replication status' command for more info. Use 'force' option to ignore and stop the volume.
[root@ninja ~]#

[root@ninja ~]# gluster vol stop snapstore force
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
volume stop: snapstore: success
[root@ninja ~]#

[root@ninja ~]# gluster volume quota snapstore list /
quota command failed : Volume snapstore is not started.
                  Path                   Hard-limit Soft-limit   Used  Available
--------------------------------------------------------------------------------
/                                        Transport endpoint is not connected
[root@ninja ~]#


Logs:
==> /var/log/glusterfs/quota-mount-snapstore.log <==
[2013-10-08 10:54:43.616653] W [socket.c:522:__socket_rwv] 0-snapstore-client-0: readv on 10.70.34.68:49153 failed (No data available)
[2013-10-08 10:54:43.616715] I [client.c:2103:client_rpc_notify] 0-snapstore-client-0: disconnected from 10.70.34.68:49153. Client process will keep trying to connect to glusterd until brick's port is available. 

==> /var/log/glusterfs/quotad.log <==
[2013-10-08 10:54:43.616668] W [socket.c:522:__socket_rwv] 0-snapstore-client-0: readv on 10.70.34.68:49153 failed (No data available)
[2013-10-08 10:54:43.616734] I [client.c:2103:client_rpc_notify] 0-snapstore-client-0: disconnected from 10.70.34.68:49153. Client process will keep trying to connect to glusterd until brick's port is available. 

==> /var/log/glusterfs/quota-mount-snapstore.log <==

==> /var/log/glusterfs/quotad.log <==
[2013-10-08 10:54:45.660992] W [glusterfsd.c:1062:cleanup_and_exit] (-->/lib64/libc.so.6(clone+0x6d) [0x3997ae894d] (-->/lib64/libpthread.so.0() [0x3998207851] (-->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xcd) [0x4053cd]))) 0-: received signum (15), shutting down
[2013-10-08 10:54:46.667199] I [glusterfsd.c:1988:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.4.0.34rhs (/usr/sbin/glusterfs -s localhost --volfile-id gluster/quotad -p /var/lib/glusterd/quotad/run/quotad.pid -l /var/log/glusterfs/quotad.log -S /var/run/4ed26aeaa1eb6769fd3822d31bb8f487.socket --xlator-option *replicate*.data-self-heal=off --xlator-option *replicate*.metadata-self-heal=off --xlator-option *replicate*.entry-self-heal=off)

==> /var/log/glusterfs/quota-mount-snapstore.log <==
[2013-10-08 10:54:46.667990] W [socket.c:522:__socket_rwv] 0-snapstore-client-1: readv on 10.70.34.56:49153 failed (No data available)
[2013-10-08 10:54:46.668048] I [client.c:2103:client_rpc_notify] 0-snapstore-client-1: disconnected from 10.70.34.56:49153. Client process will keep trying to connect to glusterd until brick's port is available. 
[2013-10-08 10:54:46.668067] E [afr-common.c:3832:afr_notify] 0-snapstore-replicate-0: All subvolumes are down. Going offline until atleast one of them comes back up.

==> /var/log/glusterfs/quotad.log <==
[2013-10-08 10:54:46.669255] I [socket.c:3487:socket_init] 0-socket.glusterfsd: SSL support is NOT enabled
[2013-10-08 10:54:46.669281] I [socket.c:3502:socket_init] 0-socket.glusterfsd: using system polling thread
[2013-10-08 10:54:46.669377] I [socket.c:3487:socket_init] 0-glusterfs: SSL support is NOT enabled
[2013-10-08 10:54:46.669396] I [socket.c:3502:socket_init] 0-glusterfs: using system polling thread
[2013-10-08 10:54:46.675650] I [graph.c:239:gf_add_cmdline_options] 0-vmstore-replicate-0: adding option 'entry-self-heal' for volume 'vmstore-replicate-0' with value 'off'
[2013-10-08 10:54:46.675665] I [graph.c:239:gf_add_cmdline_options] 0-vmstore-replicate-0: adding option 'metadata-self-heal' for volume 'vmstore-replicate-0' with value 'off'
[2013-10-08 10:54:46.675672] I [graph.c:239:gf_add_cmdline_options] 0-vmstore-replicate-0: adding option 'data-self-heal' for volume 'vmstore-replicate-0' with value 'off'
[2013-10-08 10:54:46.676305] I [socket.c:3487:socket_init] 0-socket.quotad: SSL support is NOT enabled
[2013-10-08 10:54:46.676320] I [socket.c:3502:socket_init] 0-socket.quotad: using system polling thread
[2013-10-08 10:54:46.676418] I [dht-shared.c:311:dht_init_regex] 0-vmstore: using regex rsync-hash-regex = ^\.(.+)\.[^.]+$
[2013-10-08 10:54:46.678288] I [socket.c:3487:socket_init] 0-vmstore-client-1: SSL support is NOT enabled
[2013-10-08 10:54:46.678312] I [socket.c:3502:socket_init] 0-vmstore-client-1: using system polling thread
[2013-10-08 10:54:46.678812] I [socket.c:3487:socket_init] 0-vmstore-client-0: SSL support is NOT enabled
[2013-10-08 10:54:46.678827] I [socket.c:3502:socket_init] 0-vmstore-client-0: using system polling thread
[2013-10-08 10:54:46.678837] W [graph.c:314:_log_if_unknown_option] 0-quotad: option 'rpc-auth.auth-glusterfs' is not recognized
[2013-10-08 10:54:46.678848] W [graph.c:314:_log_if_unknown_option] 0-quotad: option 'rpc-auth.auth-unix' is not recognized
[2013-10-08 10:54:46.678856] W [graph.c:314:_log_if_unknown_option] 0-quotad: option 'rpc-auth.auth-null' is not recognized
[2013-10-08 10:54:46.678864] W [graph.c:314:_log_if_unknown_option] 0-quotad: option 'vmstore.volume-id' is not recognized
[2013-10-08 10:54:46.678882] I [client.c:2171:notify] 0-vmstore-client-0: parent translators are ready, attempting connect on transport
[2013-10-08 10:54:46.681208] I [client.c:2171:notify] 0-vmstore-client-1: parent translators are ready, attempting connect on transport
Final graph:
+------------------------------------------------------------------------------+
  1: volume vmstore-client-0
  2:     type protocol/client
  3:     option remote-host 10.70.34.68
  4:     option remote-subvolume /rhs1/vmstore
  5:     option transport-type socket
  6:     option username 2df54b1e-f6b5-4dc4-8d42-ed7cf05ce228
  7:     option password 0679c9ce-50c7-4479-b011-97da6e3e0b95
  8:     option filter-O_DIRECT enable
  9: end-volume
 10: 
 11: volume vmstore-client-1
 12:     type protocol/client
 13:     option remote-host 10.70.34.56
 14:     option remote-subvolume /rhs1/vmstore
 15:     option transport-type socket
 16:     option username 2df54b1e-f6b5-4dc4-8d42-ed7cf05ce228
 17:     option password 0679c9ce-50c7-4479-b011-97da6e3e0b95
 18:     option filter-O_DIRECT enable
 19: end-volume
 20: 
 21: volume vmstore-replicate-0
 22:     type cluster/replicate
 23:     option data-self-heal off
 24:     option metadata-self-heal off
 25:     option entry-self-heal off
 26:     option eager-lock enable
 27:     subvolumes vmstore-client-0 vmstore-client-1
 28: end-volume
 29: 
 30: volume vmstore
 31:     type cluster/distribute
 32:     subvolumes vmstore-replicate-0
 33: end-volume
 34: 
 35: volume quotad
 36:     type features/quotad
 37:     option rpc-auth.auth-glusterfs on
 38:     option rpc-auth.auth-unix on
 39:     option rpc-auth.auth-null on
 40:     option transport.socket.listen-path /tmp/quotad.socket
 41:     option transport-type socket
 42:     option transport.address-family unix
 43:     option vmstore.volume-id vmstore
 44:     subvolumes vmstore
 45: end-volume
 46: 
+------------------------------------------------------------------------------+
[2013-10-08 10:54:46.683805] I [rpc-clnt.c:1687:rpc_clnt_reconfig] 0-vmstore-client-0: changing port to 49152 (from 0)
[2013-10-08 10:54:46.686208] I [client-handshake.c:1676:select_server_supported_programs] 0-vmstore-client-0: Using Program GlusterFS 3.3, Num (1298437), Version (330)
[2013-10-08 10:54:46.686504] I [client-handshake.c:1460:client_setvolume_cbk] 0-vmstore-client-0: 'server-pkg-version' key not found, handshaked with older client
[2013-10-08 10:54:46.686531] I [client-handshake.c:1474:client_setvolume_cbk] 0-vmstore-client-0: Connected to 10.70.34.68:49152, attached to remote volume '/rhs1/vmstore'.
[2013-10-08 10:54:46.686544] I [client-handshake.c:1486:client_setvolume_cbk] 0-vmstore-client-0: Server and Client lk-version numbers are not same, reopening the fds
[2013-10-08 10:54:46.686613] I [afr-common.c:3795:afr_notify] 0-vmstore-replicate-0: Subvolume 'vmstore-client-0' came back up; going online.
[2013-10-08 10:54:46.686651] I [client-handshake.c:450:client_set_lk_version_cbk] 0-vmstore-client-0: Server lk version = 1
[2013-10-08 10:54:49.732063] I [rpc-clnt.c:1687:rpc_clnt_reconfig] 0-vmstore-client-1: changing port to 49152 (from 0)
[2013-10-08 10:54:49.734913] I [client-handshake.c:1676:select_server_supported_programs] 0-vmstore-client-1: Using Program GlusterFS 3.3, Num (1298437), Version (330)
[2013-10-08 10:54:49.735166] I [client-handshake.c:1460:client_setvolume_cbk] 0-vmstore-client-1: 'server-pkg-version' key not found, handshaked with older client
[2013-10-08 10:54:49.735186] I [client-handshake.c:1474:client_setvolume_cbk] 0-vmstore-client-1: Connected to 10.70.34.56:49152, attached to remote volume '/rhs1/vmstore'.
[2013-10-08 10:54:49.735196] I [client-handshake.c:1486:client_setvolume_cbk] 0-vmstore-client-1: Server and Client lk-version numbers are not same, reopening the fds
[2013-10-08 10:54:49.736625] I [client-handshake.c:450:client_set_lk_version_cbk] 0-vmstore-client-1: Server lk version = 1

==> /var/log/glusterfs/quota-mount-snapstore.log <==
[2013-10-08 10:54:54.523111] E [client-handshake.c:1759:client_query_portmap_cbk] 0-snapstore-client-0: failed to get the port number for remote subvolume. Please run 'gluster volume status' on server to see if brick process is running.
[2013-10-08 10:54:54.523186] I [client.c:2103:client_rpc_notify] 0-snapstore-client-0: disconnected from 10.70.34.68:24007. Client process will keep trying to connect to glusterd until brick's port is available. 
[2013-10-08 10:54:55.246372] I [afr-common.c:3953:afr_local_init] 0-snapstore-replicate-0: no subvolumes up
[2013-10-08 10:54:55.246405] W [fuse-bridge.c:4160:fuse_xattr_cbk] 0-glusterfs-fuse: 19: GETXATTR(trusted.glusterfs.quota.limit-set) / => -1 (Transport endpoint is not connected)
[2013-10-08 10:54:57.526069] E [client-handshake.c:1759:client_query_portmap_cbk] 0-snapstore-client-1: failed to get the port number for remote subvolume. Please run 'gluster volume status' on server to see if brick process is running.
[2013-10-08 10:54:57.526118] I [client.c:2103:client_rpc_notify] 0-snapstore-client-1: disconnected from 10.70.34.56:24007. Client process will keep trying to connect to glusterd until brick's port is available. 
[2013-10-08 10:54:58.354941] I [afr-common.c:3953:afr_local_init] 0-snapstore-replicate-0: no subvolumes up
[2013-10-08 10:54:58.354973] W [fuse-bridge.c:4160:fuse_xattr_cbk] 0-glusterfs-fuse: 20: GETXATTR(trusted.glusterfs.quota.limit-set) / => -1 (Transport endpoint is not connected)
[2013-10-08 10:57:56.269806] I [afr-common.c:3953:afr_local_init] 0-snapstore-replicate-0: no subvolumes up
[2013-10-08 10:57:56.269846] W [fuse-bridge.c:4160:fuse_xattr_cbk] 0-glusterfs-fuse: 21: GETXATTR(trusted.glusterfs.quota.limit-set) / => -1 (Transport endpoint is not connected)

Comment 13 Kaushal 2013-10-09 03:41:54 UTC
Shanks,
This is different from the earlier case: the volume here is stopped, hence you got 'Transport endpoint is not connected'.
The earlier case with the volume running and getting 'No such file or directory' is what I want to investigate. Can you get logs for that case?

Comment 14 Gowrishankar Rajaiyan 2013-10-09 09:13:34 UTC
Ok, I see the missing piece now. 

The volume in question was first created in Big Bend and then it was upgraded to Big Bend Update 1.

Comment 15 Kaushal 2013-10-09 12:01:04 UTC
Shanks,
Can you upload the logs for this? At least the cli, glusterd, quotad, quota-mount and brick logs for the volume.

I cannot reproduce this currently and would like more information to continue investigating.

Comment 16 Kaushal 2013-10-09 14:28:08 UTC
I got access to the system and was able to take a look at the logs.

The 'No such file or directory' error for 'list /' occurred because the volume was stopped. The quota auxiliary mount kept returning ENOENT for every GETXATTR request.
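
As context, the logs in comment 12 show that a list for "/" boils down to a GETXATTR for trusted.glusterfs.quota.limit-set issued through the quota auxiliary mount. For debugging, that same xattr can usually also be read by hand on a brick directory where a limit has been set; the command below is only a sketch with an illustrative brick path (root required):

# getfattr -n trusted.glusterfs.quota.limit-set -e hex /bricks/testvol/dir1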

Later the volume was restarted. Even after that, an ENOENT error occurred for a list command. That list command was issued very soon after starting the volume (~18 seconds), before the auxiliary mount had reconnected to the bricks (the client's brick ping timer is 42 seconds), so the auxiliary mount again returned ENOENT. Further quota list commands were not run until much later (hours later, during which time the volume was stopped and started again). If more list commands had been run sooner, after giving the client enough time to reconnect, they would most likely have succeeded.

We cannot force a client to reconnect to the bricks on demand; we have to wait for the ping timer to expire and retry the connection. We could reduce the window by setting 'network.ping-timeout' to a smaller value, but even then, if the command arrived between the volume starting and the next timer expiry, we would face the same problem.
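
For illustration, the suggested tuning amounts to a single volume option change. This is only a sketch: the 10-second value is arbitrary, and a short ping timeout has its own trade-offs.

# gluster volume set snapstore network.ping-timeout 10
# gluster volume reset snapstore network.ping-timeout

The second command restores the default of 42 seconds.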

@Shanks, if the explanation is satisfactory, can you move the bug to the state you think is appropriate?

Comment 17 Gowrishankar Rajaiyan 2013-10-10 07:58:59 UTC
Thanks for the explanation Kaushal. Appreciate it.

However, I think we should address this more robustly, so that we do not have to "wait for the ping timer to expire and retry the connection".

Comment 18 Saurabh 2013-10-10 09:22:40 UTC
As per my discussion with Kaushal and Vivek, I am putting my observations here.


I have a four-node cluster, namely quota[1-4].

1. Enabled quota on a volume  --- issued on quota1
2. Set a limit                --- issued on quota1
3. Stopped the volume         --- issued on quota1
4. Ran the quota list command --- issued on quota1

response is,
[root@quota1 ~]# gluster volume quota dist-rep list /
quota command failed : Volume dist-rep is not started.
                  Path                   Hard-limit Soft-limit   Used  Available
--------------------------------------------------------------------------------
/                                        Transport endpoint is not connected

5. Ran the quota list command again --- issued on quota3
 
response is,
[root@quota3 ~]# gluster volume quota dist-rep list /
quota command failed : Volume dist-rep is not started.
                  Path                   Hard-limit Soft-limit   Used  Available
--------------------------------------------------------------------------------
/                                        No such file or directory


So we are getting different error messages from different nodes.

Comment 19 Vivek Agarwal 2013-10-21 06:45:53 UTC
Per bug Triage 10/21, removing this from u1 list

Comment 21 Saurabh 2013-12-18 05:26:50 UTC
As can be seen from the logs below:

[root@rhsauto012 ~]# gluster volume quota dist-rep list
quota command failed : Quota is not enabled on volume dist-rep
[root@rhsauto012 ~]# gluster volume quota dist-rep list /
quota command failed : Quota is not enabled on volume dist-rep
[root@rhsauto012 ~]# gluster volume quota dist-rep list /dir1
quota command failed : Quota is not enabled on volume dist-rep
[root@rhsauto012 ~]# gluster volume quota dist-rep enable
volume quota : success
[root@rhsauto012 ~]# gluster volume quota dist-rep list
quota: No quota configured on volume dist-rep
[root@rhsauto012 ~]# gluster volume quota dist-rep list /
quota: No quota configured on volume dist-rep
[root@rhsauto012 ~]# gluster volume quota dist-rep list /dir1
quota: No quota configured on volume dist-rep
[root@rhsauto012 ~]# gluster volume quota dist-rep list /dirn
quota: No quota configured on volume dist-rep
[root@rhsauto012 ~]# 
[root@rhsauto012 ~]# gluster volume quota dist-rep list /dir1 1GB
quota: No quota configured on volume dist-rep
[root@rhsauto012 ~]# gluster volume quota dist-rep limit-usage /dir1 1GB
volume quota : success
[root@rhsauto012 ~]# gluster volume quota dist-rep list /dirn
                  Path                   Hard-limit Soft-limit   Used  Available
--------------------------------------------------------------------------------
/dirn                                    No such file or directory
[root@rhsauto012 ~]# gluster volume quota dist-rep list /dir1 
                  Path                   Hard-limit Soft-limit   Used  Available
--------------------------------------------------------------------------------
/dir1                                      1.0GB       80%      0Bytes   1.0GB
[root@rhsauto012 ~]# gluster volume quota dist-rep list 
                  Path                   Hard-limit Soft-limit   Used  Available
--------------------------------------------------------------------------------
/dir1                                      1.0GB       80%      0Bytes   1.0GB
[root@rhsauto012 ~]# gluster volume quota dist-rep list /dir1 
                  Path                   Hard-limit Soft-limit   Used  Available
--------------------------------------------------------------------------------
/dir1                                      1.0GB       80%      0Bytes   1.0GB
[root@rhsauto012 ~]# gluster volume quota dist-rep list /dirn
                  Path                   Hard-limit Soft-limit   Used  Available
--------------------------------------------------------------------------------
/dirn                                    No such file or directory
[root@rhsauto012 ~]# 
[root@rhsauto012 ~]# gluster volume quota dist-rep list /dir2
                  Path                   Hard-limit Soft-limit   Used  Available
--------------------------------------------------------------------------------
/dir2                                    Limit not set
[root@rhsauto012 ~]# 
[root@rhsauto012 ~]# gluster volume stop dist-rep
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
volume stop: dist-rep: success
[root@rhsauto012 ~]# gluster volume quota dist-rep list /dir2
quota command failed : Volume dist-rep is not started.
[root@rhsauto012 ~]# gluster volume quota dist-rep list /dir1
quota command failed : Volume dist-rep is not started.
[root@rhsauto012 ~]# gluster volume quota dist-rep list /dirn
quota command failed : Volume dist-rep is not started.



Moving this BZ to VERIFIED.

Tests were done on glusterfs-3.4.0.49rhs-1.el6rhs.x86_64.

Comment 23 errata-xmlrpc 2014-02-25 07:35:54 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHEA-2014-0208.html