Bug 987418 - quota: design flaw
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: glusterd
x86_64 All
high Severity high
: ---
: ---
Assigned To: krishnan parthasarathi
: ZStream
Depends On:
Reported: 2013-07-23 07:07 EDT by Saurabh
Modified: 2016-01-19 01:15 EST (History)
7 users

See Also:
Fixed In Version: glusterfs-
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Last Closed: 2013-10-07 01:56:05 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---

Attachments: None
Description Saurabh 2013-07-23 07:07:59 EDT
Description of problem:
The quotad process runs on only one node of the cluster, which is a major issue: if the node running quotad goes down, or quotad gets killed, nothing remains to enforce the quota policy.

Version-Release number of selected component (if applicable):
[root@nfs1 ~]# rpm -qa | grep glusterfs

How reproducible:

Steps to Reproduce:
Set up 4 RHS nodes [1, 2, 3, 4] and 1 client.
1. Create a volume and start it.
2. Enable quota on the root of the volume from rhs-node1.
3. Set a quota limit of 1 GB.
4. Mount the volume over NFS from node3.
5. Start creating data: write 1 MB files until the quota limit is exceeded.
6. Kill quotad on node1.

Let I/O continue.
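The steps above can be sketched as the following command sequence. This is a minimal sketch, not taken from the report: the volume name (distvol), brick paths (/bricks/b1), hostnames (rhs-node1..4), and mount point are assumptions, and the exact `gluster volume quota` syntax is the standard one for this release line.

```shell
# 1. Create and start a volume across the 4 RHS nodes (run on rhs-node1).
#    Volume name and brick paths are hypothetical.
gluster volume create distvol rhs-node1:/bricks/b1 rhs-node2:/bricks/b1 \
        rhs-node3:/bricks/b1 rhs-node4:/bricks/b1
gluster volume start distvol

# 2./3. Enable quota and set a 1 GB limit on the volume root.
gluster volume quota distvol enable
gluster volume quota distvol limit-usage / 1GB

# 4. On the client, mount the volume over NFS (v3) from node3.
mount -t nfs -o vers=3 rhs-node3:/distvol /mnt/nfs-test

# 5. Write 1 MB files in a loop; with quota enforced, dd should start
#    failing with EDQUOT once the 1 GB limit is reached.
cd /mnt/nfs-test
i=0
while dd if=/dev/zero of=file.$i bs=1M count=1 2>/dev/null; do
    i=$((i + 1))
done &

# 6. On node1 (the only node running quotad in the old design), kill quotad
#    while the I/O loop continues.
pkill -f quotad
```

With quotad dead, the expectation per this bug is that the write loop keeps succeeding well past 1 GB, as shown in the `du -h` output below.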

Actual results:
The quota limit is exceeded:

[root@rhsauto030 nfs-test]# du -h .
2.8G	.
[root@rhsauto030 nfs-test]# pwd 
[root@rhsauto030 nfs-test]# mount | grep nfs-test
 on /mnt/nfs-test type nfs (rw,addr=
[root@rhsauto030 nfs-test]# 

[root@nfs1 ~]# ps -eaf | grep quotad
root      6709  2366  0 05:43 pts/0    00:00:00 grep quotad
[root@nfs1 ~]# 

Expected results:
There should be a mechanism to make quotad highly available or distributed.

Additional info:
Comment 2 vpshastry 2013-09-12 05:57:49 EDT
As per the new design, quotad runs on all the nodes. Can you please verify and move the bug to the appropriate state?
