Bug 1002530 - quota: directory quota limit crossed, if limit is changed while I/O is happening
Summary: quota: directory quota limit crossed, if limit is changed while I/O is happening
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: glusterd
Version: 2.1
Hardware: x86_64
OS: Linux
Priority: low
Severity: high
Target Milestone: ---
Target Release: ---
Assignee: Pranith Kumar K
QA Contact: Saurabh
URL:
Whiteboard:
Depends On:
Blocks: 1016683
 
Reported: 2013-08-29 11:59 UTC by Saurabh
Modified: 2016-01-19 06:12 UTC
CC: 6 users

Fixed In Version: glusterfs-3.4.0.36rhs-1
Doc Type: Bug Fix
Doc Text:
Clone Of:
: 1016683 (view as bug list)
Environment:
Last Closed: 2013-11-27 15:35:33 UTC
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2013:1769 0 normal SHIPPED_LIVE Red Hat Storage 2.1 enhancement and bug fix update #1 2013-11-27 20:17:39 UTC

Description Saurabh 2013-08-29 11:59:20 UTC
Description of problem:
The quota limit is crossed if the already-set limit is changed to a new value while I/O is happening over an NFS mount.

Version-Release number of selected component (if applicable):
glusterfs-3.4.0.20rhsquota5-1.el6rhs.x86_64
glusterfs-rdma-3.4.0.20rhsquota5-1.el6rhs.x86_64
glusterfs-libs-3.4.0.20rhsquota5-1.el6rhs.x86_64
glusterfs-rdma-3.4.0.20rhs-2.el6rhs.x86_64
glusterfs-api-3.4.0.20rhsquota5-1.el6rhs.x86_64
glusterfs-geo-replication-3.4.0.20rhsquota5-1.el6rhs.x86_64
glusterfs-server-3.4.0.20rhsquota5-1.el6rhs.x86_64
glusterfs-fuse-3.4.0.20rhsquota5-1.el6rhs.x86_64


How reproducible:
Happened once so far.

Steps to Reproduce:

Setup: four nodes, node[1,2,3,4].

1. Create a 6x2 (distributed-replicate) volume and start it.
2. Enable quota on the volume.
3. Set a quota limit on the volume.
4. Mount the volume over NFS.
5. Create a directory.
6. Set a limit of 20GB on this directory.
7. Create one more directory inside the directory created in step 5.
8. Start creating data inside the second directory, using this script:

for i in {1..100}; do sudo dd if=/dev/urandom of=./file${i} bs=1024k count=100; done

8a. Kill the quotad on node1 and node3.

9. After some time, change the limit on the directory created in step 5 to 3GB (a command sketch of these steps follows below).
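
A minimal command sketch of the steps above, assuming the volume name dr3 and directory name /linux-untar-dir from the logs in this report; the brick paths, inner directory name, and mount point are placeholders, and the exact way quotad is killed is left out:

# on a server node: create a 6x2 volume, start it, enable quota, set the volume-level limit
gluster volume create dr3 replica 2 node1:/bricks/dr3_0 node2:/bricks/dr3_1 node3:/bricks/dr3_2 node4:/bricks/dr3_3 ...   # 12 bricks total
gluster volume start dr3
gluster volume quota dr3 enable
gluster volume quota dr3 limit-usage / 40GB

# on the client: mount over NFS from node1, create the directories, set the 20GB limit
mount -t nfs -o vers=3 node1:/dr3 /mnt/dr3
mkdir /mnt/dr3/linux-untar-dir
gluster volume quota dr3 limit-usage /linux-untar-dir 20GB      # run on a server node
mkdir /mnt/dr3/linux-untar-dir/data
cd /mnt/dr3/linux-untar-dir/data
# run the dd script from step 8 in this directory

# while dd is running: kill the quotad process on node1 and node3, then lower the limit
gluster volume quota dr3 limit-usage /linux-untar-dir 3GB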

Actual results:

[root@nfs2 ~]# gluster volume quota dr3 list
                  Path                   Hard-limit Soft-limit   Used  Available
--------------------------------------------------------------------------------
/                                         40.0GB 1244549331936494413       4.8GB  35.2GB
/linux-untar-dir                           3.0GB 1244549331936494413       4.9GB  0Bytes


The hard-limit set is 3GB, but Used shows 4.9GB.

The client mount was done from node1.

Expected results:
The directory usage is not supposed to cross the quota limit in this scenario either.

Additional info:

More logs:

[root@nfs2 ~]# gluster volume quota dr3 list
                  Path                   Hard-limit Soft-limit   Used  Available
--------------------------------------------------------------------------------
/                                         40.0GB 1244549331936494413       4.2GB  35.8GB
/linux-untar-dir                          20.0GB 1244549331936494413       2.7GB  17.3GB
[root@nfs2 ~]# 
[root@nfs2 ~]# 
[root@nfs2 ~]# 
[root@nfs2 ~]# #gluster volume quota dr3 limit-usage /linux-untar-dir 3GB
[root@nfs2 ~]# gluster volume quota dr3 limit-usage /linux-untar-dir 3GB
volume quota : success
[root@nfs2 ~]# gluster volume quota dr3 list
                  Path                   Hard-limit Soft-limit   Used  Available
--------------------------------------------------------------------------------
/                                         40.0GB 1244549331936494413       4.2GB  35.8GB
/linux-untar-dir                           3.0GB 1244549331936494413       2.7GB 338.3MB

after sometime
[root@nfs2 ~]# gluster volume quota dr3 list
                  Path                   Hard-limit Soft-limit   Used  Available
--------------------------------------------------------------------------------
/                                         40.0GB 1244549331936494413       4.8GB  35.2GB
/linux-untar-dir                           3.0GB 1244549331936494413       4.9GB  0Bytes

Comment 3 vpshastry 2013-09-02 06:37:05 UTC
Quotad is the aggregator of the sizes from all the bricks. As you mentioned in step 8a, you are killing the quotads, which leaves the bricks ignorant of the usage across the volume, so the usage exceeds the limit. This is expected when quotad is down. Improvements to quotad availability will solve the problem.

Workaround: whenever quotad is observed to be not running, use "gluster volume start <volname> force" to bring the quotads (and all other volume processes) back; see the sketch below.
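
A minimal sketch of the check and the workaround, using the volume name dr3 from this report (how the quota daemon appears in the status output may vary by glusterfs version):

gluster volume status dr3        # with quota enabled, the quota daemon should be listed among the volume processes
ps aux | grep -i quotad          # alternatively, look for the glusterfs quotad process directly
gluster volume start dr3 force   # brings back any missing volume processes, including quotad, without disturbing running bricks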

Comment 7 Pranith Kumar K 2013-10-18 07:48:52 UTC
Tested the test case specified in the bug description with the fix for bug 1001556, and the issue is not seen any more.

Swetha found one more similar issue in bug 998914, which I think would also be fixed by this change. I will test that one as well and move this bug to MODIFIED.

root@pranithk-vm3 - /mnt/c/d 
11:52:01 :) ⚡ for i in {1..100}; do sudo dd if=/dev/urandom of=./file${i} bs=1024k count=100; done
100+0 records in
100+0 records out
104857600 bytes (105 MB) copied, 14.5892 s, 7.2 MB/s
100+0 records in
100+0 records out
..... some more successful file creations
104857600 bytes (105 MB) copied, 16.8631 s, 6.2 MB/s
dd: opening `./file39': Input/output error
100+0 records in
100+0 records out
104857600 bytes (105 MB) copied, 15.6028 s, 6.7 MB/s
dd: writing `./file41': Input/output error
85+0 records in
84+0 records out
88080384 bytes (88 MB) copied, 11.9195 s, 7.4 MB/s
dd: writing `./file42': Input/output error
21+0 records in
20+0 records out
20971520 bytes (21 MB) copied, 3.64219 s, 5.8 MB/s
dd: opening `./file43': Disk quota exceeded
dd: opening `./file44': Disk quota exceeded
dd: opening `./file45': Input/output error
dd: opening `./file46': Input/output error


root@pranithk-vm3 - /mnt/c/d 
12:01:03 :( ⚡ gluster volume quota r2 list
                  Path                   Hard-limit Soft-limit   Used  Available
--------------------------------------------------------------------------------
/c                                         3.0GB       80%       3.0GB  0Bytes
/                                         40.0GB       80%       3.0GB  37.0GB

root@pranithk-vm3 - /mnt/c/d 
12:48:53 :) ⚡ gluster volume info
 
Volume Name: r2
Type: Distributed-Replicate
Volume ID: 514cb4c7-01c8-472e-a284-2fb645c1b35d
Status: Started
Number of Bricks: 6 x 2 = 12
Transport-type: tcp
Bricks:
Brick1: 10.70.43.198:/brick/r2_0
Brick2: 10.70.42.237:/brick/r2_1
Brick3: 10.70.43.148:/brick/r2_2
Brick4: 10.70.43.198:/brick/r2_3
Brick5: 10.70.42.237:/brick/r2_4
Brick6: 10.70.43.148:/brick/r2_5
Brick7: 10.70.43.198:/brick/r2_6
Brick8: 10.70.42.237:/brick/r2_7
Brick9: 10.70.43.148:/brick/r2_8
Brick10: 10.70.43.198:/brick/r2_9
Brick11: 10.70.42.237:/brick/r2_10
Brick12: 10.70.43.148:/brick/r2_11
Options Reconfigured:
features.quota: on

Pranith

Comment 9 errata-xmlrpc 2013-11-27 15:35:33 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2013-1769.html

