Bug 974025 - quota: limit gets crossed by 25%
Summary: quota: limit gets crossed by 25%
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: glusterd
Version: 2.1
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: ---
Assignee: vpshastry
QA Contact: Sudhir D
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2013-06-13 10:08 UTC by Saurabh
Modified: 2016-01-19 06:12 UTC
CC List: 8 users

Fixed In Version: v3.4.0.12rhs.beta2
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2013-09-23 22:39:50 UTC
Embargoed:



Description Saurabh 2013-06-13 10:08:54 UTC
Description of problem:
The volume created is 6x2,
[root@nfs2 ~]# gluster volume info
 
Volume Name: dit-rep
Type: Distributed-Replicate
Volume ID: 9f0499fd-39c1-4c94-8d47-b767d6cccf86
Status: Started
Number of Bricks: 6 x 2 = 12
Transport-type: tcp
Bricks:
Brick1: 10.70.37.180:/rhs/bricks1/d1r1
Brick2: 10.70.37.80:/rhs/bricks1/d1r2
Brick3: 10.70.37.216:/rhs/bricks1/d2r1
Brick4: 10.70.37.139:/rhs/bricks1/d2r2
Brick5: 10.70.37.180:/rhs/bricks1/d3r1
Brick6: 10.70.37.80:/rhs/bricks1/d3r2
Brick7: 10.70.37.216:/rhs/bricks1/d4r1
Brick8: 10.70.37.139:/rhs/bricks1/d4r2
Brick9: 10.70.37.180:/rhs/bricks1/d5r1
Brick10: 10.70.37.80:/rhs/bricks1/d5r2
Brick11: 10.70.37.216:/rhs/bricks1/d6r1
Brick12: 10.70.37.139:/rhs/bricks1/d6r2
Options Reconfigured:
features.limit-usage: /:2GB:90%
features.quota: on


I receive "Disk quota exceeded" on stdout, but after a while, if I try again, I can create new files.

Version-Release number of selected component (if applicable):
[root@nfs1 rpms]# rpm -qa | grep glusterfs
glusterfs-3.4rhs-1.el6rhs.x86_64
glusterfs-fuse-3.4rhs-1.el6rhs.x86_64
glusterfs-server-3.4rhs-1.el6rhs.x86_64
[root@nfs1 rpms]# 


How reproducible:
always

Steps to Reproduce:
1. create a volume of 6x2 volume, start it
2. enable quota
3. gluster volume quota <vol-name> limit-usage / 2GB
4. mount -t glusterfs server-ip:<vol-name> <mount-point>
5. execute in a for loop:
   dd if=/dev/urandom of=f.$i bs=1024 count=1024
   until "Disk quota exceeded" is reported on stdout
6. wait for some time, say 30 seconds
7. run another round of file creation in a for loop
8. repeat steps 5-7 two or three times
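The loop in steps 5-8 can be sketched as below. The mount point /mnt/dit-rep, the MAX safety cap, and the BS/COUNT overrides are illustrative additions, not taken from the report:

```shell
#!/bin/sh
# Sketch of steps 5-8. DIR is an assumed glusterfs mount point;
# MAX is a safety cap so the loop terminates even without quota.
DIR=${DIR:-/mnt/dit-rep}
MAX=${MAX:-4096}

fill_until_quota() {
    # $1: filename prefix. Create $1.0, $1.1, ... (1 MB each by
    # default) until a write fails, e.g. with "Disk quota
    # exceeded"; print how many files were written.
    i=0
    while [ "$i" -lt "$MAX" ] &&
          dd if=/dev/urandom of="$DIR/$1.$i" \
             bs="${BS:-1024}" count="${COUNT:-1024}" 2>/dev/null
    do
        i=$((i + 1))
    done
    echo "$i"
}

# Steps 6-8: three rounds with a pause in between, run only when
# the mount point actually exists.
if [ -d "$DIR" ]; then
    for round in 1 2 3; do
        echo "round $round: wrote $(fill_until_quota "f$round") files"
        sleep 30
    done
fi
```

If quota enforcement were accurate, every round after the first should write zero (or very few) files; the bug is that later rounds keep succeeding.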

Actual results:
Steps 7 and 8 create more files.

I tried to create files of 1 MB in size.
As per the limit set of 2 GB, there should be around 2048 files in total,
but I was able to create:
[root@rhel6 glusterfs-test]# ls | wc -l
2309
[root@rhel6 glusterfs-test]# 

[root@rhel6 glusterfs-test]# du -sh
2.3G    .
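The expected ceiling follows from simple arithmetic; a quick check (observed count taken from the ls output above — and note this is only a snapshot, since the reporter could keep creating files beyond it):

```shell
#!/bin/sh
# With a 2 GB hard limit and 1 MB files (bs=1024 count=1024),
# at most about 2048 files should fit.
limit_kb=$((2 * 1024 * 1024))   # 2 GB expressed in KB
file_kb=1024                    # each dd run writes 1 MB
expected=$((limit_kb / file_kb))
observed=2309                   # from "ls | wc -l" above
echo "expected max files: $expected"
echo "overshoot: $((100 * (observed - expected) / expected))%"
```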


In fact, I am able to go beyond that.


From the server end:

[root@nfs2 ~]# gluster volume quota dit-rep list
	Path		 Hard-limit	 Soft-limit	 Used	 Available
----------------------------------------------------------------------------------
/                           2GB        90%              415.0MB                1.6GB


Expected results:
The number of files created after the limit is crossed should not be large; quota enforcement should be accurate.

Additional info:

Comment 8 vpshastry 2013-07-19 11:20:15 UTC
The reason for this is that the size in the quota context was being updated in the lookup fop; that update was removed in the patch mentioned in comment #5.
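A minimal sketch of the failure mode described here (this is not GlusterFS code; all names, the limit, and the stale value are illustrative): if a lookup overwrites the enforcer's cached usage with a stale, smaller size, writes are let through again after the hard limit was already reached:

```shell
#!/bin/sh
# Illustrative model: a quota enforcer whose cached usage can be
# clobbered by a stale value delivered via a "lookup".
LIMIT=2048          # hard limit, in KB
cached_usage=0      # the enforcer's view of consumed space
actual_usage=0      # what is really on disk

write_block() {     # try to write 1 KB
    if [ "$cached_usage" -ge "$LIMIT" ]; then
        echo "Disk quota exceeded"
        return 1
    fi
    actual_usage=$((actual_usage + 1))
    cached_usage=$((cached_usage + 1))
    return 0
}

stale_lookup() {    # lookup overwrites the accounting with an
                    # out-of-date size ($1)
    cached_usage=$1
}

# Fill up to the limit, then a stale lookup resets the enforcer's
# view and a second fill pushes actual usage past the hard limit.
while write_block >/dev/null; do :; done
stale_lookup 1500
while write_block >/dev/null; do :; done
echo "actual usage: ${actual_usage} KB against a ${LIMIT} KB limit"
```

In the model, the second fill adds another 548 KB on top of an already-full quota, which mirrors how writes could exceed the limit until the lookup-time update was removed.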

Comment 10 Scott Haines 2013-09-23 22:39:50 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. 

For information on the advisory, and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2013-1262.html


