Bug 1021466 - quota: directory limit cross, while creating data in subdirs
Status: CLOSED WORKSFORME
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: quota
2.1
x86_64 Linux
high Severity urgent
: ---
: ---
Assigned To: Vijaikumar Mallikarjuna
storage-qa-internal@redhat.com
: ZStream
Depends On:
Blocks: 1020127 1026291
 
Reported: 2013-10-21 07:06 EDT by Saurabh
Modified: 2016-09-17 08:41 EDT (History)
13 users

See Also:
Fixed In Version:
Doc Type: Known Issue
Doc Text:
After setting a quota limit on a directory, creating subdirectories, populating them with files, and then renaming the directory while the I/O operation is in progress causes a quota limit violation.
Story Points: ---
Clone Of:
: 1026291 (view as bug list)
Environment:
Last Closed: 2015-10-08 03:35:49 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Saurabh 2013-10-21 07:06:56 EDT
Description of problem:

This scenario tests renaming a directory on which a quota limit is set.
I put a limit on a directory and create a few more subdirectories inside it.

In the leaf subdirectory I create files until the quota limit is reached.

While the I/O is in progress, I rename the directory on which the limit is set.
The limit is crossed by 180%.

[root@quota1 ~]# gluster volume info dist-rep
 
Volume Name: dist-rep
Type: Distributed-Replicate
Volume ID: 1e06795e-7032-479d-9d48-026b832cede3
Status: Started
Number of Bricks: 6 x 2 = 12
Transport-type: tcp
Bricks:
Brick1: 10.70.42.186:/rhs/brick1/d1r1
Brick2: 10.70.43.181:/rhs/brick1/d1r2
Brick3: 10.70.43.18:/rhs/brick1/d2r1
Brick4: 10.70.43.22:/rhs/brick1/d2r2
Brick5: 10.70.42.186:/rhs/brick1/d3r1
Brick6: 10.70.43.181:/rhs/brick1/d3r2
Brick7: 10.70.43.18:/rhs/brick1/d4r1
Brick8: 10.70.43.22:/rhs/brick1/d4r2
Brick9: 10.70.42.186:/rhs/brick1/d5r1
Brick10: 10.70.43.181:/rhs/brick1/d5r2
Brick11: 10.70.43.18:/rhs/brick1/d6r1
Brick12: 10.70.43.22:/rhs/brick1/d6r2
Options Reconfigured:
server.root-squash: off
nfs.addr-namelookup: on
features.quota-deem-statfs: on
features.quota: on


Version-Release number of selected component (if applicable):
glusterfs-3.4.0.35rhs

How reproducible:
already happened twice

Steps to Reproduce:
1. Create a volume, enable quota, and set a large limit on the root of the volume.

2. Mount the volume over NFS.

3. Create a directory and set a quota limit on it; let's call it "newdir".

4. Create one more directory and set a quota limit on it; let's call it "d3".

5. mkdir -p <mount-point>/newdir/d3/dir1/dir2/dir3/dir4

6. cd <mount-point>/newdir/d3/dir1/dir2/dir3/dir4

7. Use this script to create data:
   for i in `seq 1 400`; do dd if=/dev/input_file of=f.$i bs=128K count=$i; echo $i; done

8. While the I/O is in progress, on the mount point
   rename d3 to d4: "mv d3 d4"
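As a sanity check on the workload in step 7, the total it writes can be computed with plain shell arithmetic (no volume needed). The 1 GiB figure below is taken from the hard limit shown in the quota listings later in this report:

```shell
# Total data written by: for i in 1..400: dd bs=128K count=$i
# = 128 KiB * (1 + 2 + ... + 400) = 128 KiB * 400*401/2
total_kib=$((128 * 400 * 401 / 2))
echo "loop writes ${total_kib} KiB (about $((total_kib / 1024)) MiB)"

# Against a 1 GiB limit, enforcement should stop the loop around f.128:
# 128 KiB * 127*128/2 = 1040384 KiB is just under 1 GiB (1048576 KiB).
echo "after f.127: $((128 * 127 * 128 / 2)) KiB of 1048576 KiB"
```

So the loop attempts nearly 10 GiB in total, which is why quota enforcement has to cut it off early.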

Actual results:

After the rename the directory is named "d4", and the limit has been crossed:
[root@quota1 ~]# gluster volume quota dist-rep list /newdir/d4
                  Path                   Hard-limit Soft-limit   Used  Available
--------------------------------------------------------------------------------
/newdir/d4                                 1.0GB       80%       2.8GB  0Bytes

After that, EDQUOT is seen.


Expected results:
The quota limit should not be exceeded as a result of a rename.

Additional info:

[root@quota1 ~]# gluster volume quota dist-rep list /newdir
                  Path                   Hard-limit Soft-limit   Used  Available
--------------------------------------------------------------------------------
/newdir                                   10.0GB       80%       7.3GB   2.7GB
[root@quota1 ~]# gluster volume quota dist-rep list /
                  Path                   Hard-limit Soft-limit   Used  Available
--------------------------------------------------------------------------------
/                                          1.0TB       80%     271.6GB 752.4GB
Comment 1 Saurabh 2013-10-21 07:14:34 EDT
On a cluster of four nodes, namely quota[1-4], here are the xattrs for the directory in question.

From node quota1:

[root@quota1 ~]# getfattr -m . -d -e hex /rhs/brick1/d*r1/newdir/d4
getfattr: Removing leading '/' from absolute path names
# file: rhs/brick1/d1r1/newdir/d4
trusted.gfid=0x1866f484cc814824a5aa9b1808998ff1
trusted.glusterfs.dht=0x00000001000000007ffffffeaaaaaaa7
trusted.glusterfs.quota.3bc2ab97-9a5b-45d7-b541-c265380d898b.contri=0x00000000198d0000
trusted.glusterfs.quota.dirty=0x3000
trusted.glusterfs.quota.limit-set=0x0000000040000000ffffffffffffffff
trusted.glusterfs.quota.size=0x00000000198d0000

# file: rhs/brick1/d3r1/newdir/d4
trusted.gfid=0x1866f484cc814824a5aa9b1808998ff1
trusted.glusterfs.dht=0x0000000100000000d5555552ffffffff
trusted.glusterfs.quota.3bc2ab97-9a5b-45d7-b541-c265380d898b.contri=0x000000001f3c0000
trusted.glusterfs.quota.dirty=0x3000
trusted.glusterfs.quota.limit-set=0x0000000040000000ffffffffffffffff
trusted.glusterfs.quota.size=0x000000001f3c0000

# file: rhs/brick1/d5r1/newdir/d4
trusted.gfid=0x1866f484cc814824a5aa9b1808998ff1
trusted.glusterfs.dht=0x00000001000000002aaaaaaa55555553
trusted.glusterfs.quota.3bc2ab97-9a5b-45d7-b541-c265380d898b.contri=0x000000001d640000
trusted.glusterfs.quota.dirty=0x3000
trusted.glusterfs.quota.limit-set=0x0000000040000000ffffffffffffffff
trusted.glusterfs.quota.size=0x000000001d640000



from node quota2,

[root@quota2 ~]# getfattr -m . -d -e hex /rhs/brick1/d*r2/newdir/d4
getfattr: Removing leading '/' from absolute path names
# file: rhs/brick1/d1r2/newdir/d4
trusted.gfid=0x1866f484cc814824a5aa9b1808998ff1
trusted.glusterfs.dht=0x00000001000000007ffffffeaaaaaaa7
trusted.glusterfs.quota.3bc2ab97-9a5b-45d7-b541-c265380d898b.contri=0x00000000198d0000
trusted.glusterfs.quota.dirty=0x3000
trusted.glusterfs.quota.limit-set=0x0000000040000000ffffffffffffffff
trusted.glusterfs.quota.size=0x00000000198d0000

# file: rhs/brick1/d3r2/newdir/d4
trusted.gfid=0x1866f484cc814824a5aa9b1808998ff1
trusted.glusterfs.dht=0x0000000100000000d5555552ffffffff
trusted.glusterfs.quota.3bc2ab97-9a5b-45d7-b541-c265380d898b.contri=0x000000001f3c0000
trusted.glusterfs.quota.dirty=0x3000
trusted.glusterfs.quota.limit-set=0x0000000040000000ffffffffffffffff
trusted.glusterfs.quota.size=0x000000001f3c0000

# file: rhs/brick1/d5r2/newdir/d4
trusted.gfid=0x1866f484cc814824a5aa9b1808998ff1
trusted.glusterfs.dht=0x00000001000000002aaaaaaa55555553
trusted.glusterfs.quota.3bc2ab97-9a5b-45d7-b541-c265380d898b.contri=0x000000001d640000
trusted.glusterfs.quota.dirty=0x3000
trusted.glusterfs.quota.limit-set=0x0000000040000000ffffffffffffffff
trusted.glusterfs.quota.size=0x000000001d640000


from node quota3,

[root@quota3 ~]# getfattr -m . -d -e hex /rhs/brick1/d*r1/newdir/d4
getfattr: Removing leading '/' from absolute path names
# file: rhs/brick1/d2r1/newdir/d4
trusted.gfid=0x1866f484cc814824a5aa9b1808998ff1
trusted.glusterfs.dht=0x0000000100000000aaaaaaa8d5555551
trusted.glusterfs.quota.3bc2ab97-9a5b-45d7-b541-c265380d898b.contri=0x000000001b680000
trusted.glusterfs.quota.dirty=0x3000
trusted.glusterfs.quota.limit-set=0x0000000040000000ffffffffffffffff
trusted.glusterfs.quota.size=0x000000001b680000

# file: rhs/brick1/d4r1/newdir/d4
trusted.gfid=0x1866f484cc814824a5aa9b1808998ff1
trusted.glusterfs.dht=0x0000000100000000000000002aaaaaa9
trusted.glusterfs.quota.3bc2ab97-9a5b-45d7-b541-c265380d898b.contri=0x00000000278c0000
trusted.glusterfs.quota.dirty=0x3000
trusted.glusterfs.quota.limit-set=0x0000000040000000ffffffffffffffff
trusted.glusterfs.quota.size=0x00000000278c0000

# file: rhs/brick1/d6r1/newdir/d4
trusted.gfid=0x1866f484cc814824a5aa9b1808998ff1
trusted.glusterfs.dht=0x0000000100000000555555547ffffffd
trusted.glusterfs.quota.3bc2ab97-9a5b-45d7-b541-c265380d898b.contri=0x000000001b1a0000
trusted.glusterfs.quota.dirty=0x3000
trusted.glusterfs.quota.limit-set=0x0000000040000000ffffffffffffffff
trusted.glusterfs.quota.size=0x000000001b1a0000

from node quota4,

[root@quota4 ~]# getfattr -m . -d -e hex /rhs/brick1/d*r2/newdir/d4
getfattr: Removing leading '/' from absolute path names
# file: rhs/brick1/d2r2/newdir/d4
trusted.gfid=0x1866f484cc814824a5aa9b1808998ff1
trusted.glusterfs.dht=0x0000000100000000aaaaaaa8d5555551
trusted.glusterfs.quota.3bc2ab97-9a5b-45d7-b541-c265380d898b.contri=0x000000001b680000
trusted.glusterfs.quota.dirty=0x3000
trusted.glusterfs.quota.limit-set=0x0000000040000000ffffffffffffffff
trusted.glusterfs.quota.size=0x000000001b680000

# file: rhs/brick1/d4r2/newdir/d4
trusted.gfid=0x1866f484cc814824a5aa9b1808998ff1
trusted.glusterfs.dht=0x0000000100000000000000002aaaaaa9
trusted.glusterfs.quota.3bc2ab97-9a5b-45d7-b541-c265380d898b.contri=0x00000000278c0000
trusted.glusterfs.quota.dirty=0x3000
trusted.glusterfs.quota.limit-set=0x0000000040000000ffffffffffffffff
trusted.glusterfs.quota.size=0x00000000278c0000

# file: rhs/brick1/d6r2/newdir/d4
trusted.gfid=0x1866f484cc814824a5aa9b1808998ff1
trusted.glusterfs.dht=0x0000000100000000555555547ffffffd
trusted.glusterfs.quota.3bc2ab97-9a5b-45d7-b541-c265380d898b.contri=0x000000001b1a0000
trusted.glusterfs.quota.dirty=0x3000
trusted.glusterfs.quota.limit-set=0x0000000040000000ffffffffffffffff
trusted.glusterfs.quota.size=0x000000001b1a0000
Comment 3 Saurabh 2013-10-21 07:27:46 EDT
After this test, I removed all the files.

Then I tried creating data again, with no renaming this time.

The limit is still being crossed:

[root@quota1 ~]# gluster volume quota dist-rep list /newdir/d4
                  Path                   Hard-limit Soft-limit   Used  Available
--------------------------------------------------------------------------------
/newdir/d4                                 1.0GB       80%       2.9GB  0Bytes


[root@rhsauto001 newdir]# cd d4
[root@rhsauto001 d4]# cd dir1/dir2/dir3/dir4
[root@rhsauto001 dir4]# du -sh .
2.9G    .
[root@rhsauto001 dir4]# pwd
/mnt/nfs-test/newdir/d4/dir1/dir2/dir3/dir4
[root@rhsauto001 dir4]# mount | grep nfs-test
10.70.42.186:/dist-rep on /mnt/nfs-test type nfs (rw,addr=10.70.42.186)
[root@rhsauto001 dir4]#
Comment 4 Saurabh 2013-10-21 07:30:38 EDT
Going further, I tested with a new directory "d5" inside the existing parent "newdir".

I still hit this issue, and no rename was done while the test was running:

[root@quota1 ~]# gluster volume quota dist-rep list /newdir/d5
                  Path                   Hard-limit Soft-limit   Used  Available
--------------------------------------------------------------------------------
/newdir/d5                                 1.0GB       80%       2.8GB  0Bytes


[root@rhsauto001 dir4]# pwd
/mnt/nfs-test/newdir/d5/dir1/dir2/dir3/dir4
[root@rhsauto001 dir4]# du -sh .
2.9G    .
[root@rhsauto001 dir4]#
Comment 5 Saurabh 2013-10-21 07:56:01 EDT
To check again, I executed the test on a new volume and a different set of nodes,
in a similar fashion and with no rename involved.

The issue still happens:

[root@nfs1 ~]# gluster volume quota dist-rep3 list
                  Path                   Hard-limit Soft-limit   Used  Available
--------------------------------------------------------------------------------
/                                          1.0TB       80%       3.2GB 1020.8GB
/newdir                                   10.0GB       80%       3.2GB   6.8GB
/newdir/d4                                 1.0GB       80%       3.2GB  0Bytes


Volume Name: dist-rep3
Type: Distributed-Replicate
Volume ID: 81c83633-0a08-4c27-9ab4-e9aeb1539284
Status: Started
Number of Bricks: 6 x 2 = 12
Transport-type: tcp
Bricks:
Brick1: 10.70.37.213:/rhs/bricks/d1r13
Brick2: 10.70.37.145:/rhs/bricks/d1r23
Brick3: 10.70.37.163:/rhs/bricks/d2r13
Brick4: 10.70.37.76:/rhs/bricks/d2r23
Brick5: 10.70.37.213:/rhs/bricks/d3r13
Brick6: 10.70.37.145:/rhs/bricks/d3r23
Brick7: 10.70.37.163:/rhs/bricks/d4r13
Brick8: 10.70.37.76:/rhs/bricks/d4r23
Brick9: 10.70.37.213:/rhs/bricks/d5r13
Brick10: 10.70.37.145:/rhs/bricks/d5r23
Brick11: 10.70.37.163:/rhs/bricks/d6r13
Brick12: 10.70.37.76:/rhs/bricks/d6r23
Options Reconfigured:
features.quota-deem-statfs: on
nfs.addr-namelookup: on
features.quota: on



four node cluster, hostnames nfs[1-4]

xattrs from nfs1,

[root@nfs1 ~]# getfattr -m . -d -e hex /rhs/bricks/d*r13/newdir/d4
getfattr: Removing leading '/' from absolute path names
# file: rhs/bricks/d1r13/newdir/d4
trusted.gfid=0x564860906ef14367b642a5f810793510
trusted.glusterfs.dht=0x00000001000000007ffffffeaaaaaaa7
trusted.glusterfs.quota.23728d9f-7205-4509-968d-b90d296f4b4f.contri=0x000000001ee40000
trusted.glusterfs.quota.dirty=0x3000
trusted.glusterfs.quota.limit-set=0x0000000040000000ffffffffffffffff
trusted.glusterfs.quota.size=0x000000001ee40000

# file: rhs/bricks/d3r13/newdir/d4
trusted.gfid=0x564860906ef14367b642a5f810793510
trusted.glusterfs.dht=0x0000000100000000d5555552ffffffff
trusted.glusterfs.quota.23728d9f-7205-4509-968d-b90d296f4b4f.contri=0x000000002e760000
trusted.glusterfs.quota.dirty=0x3000
trusted.glusterfs.quota.limit-set=0x0000000040000000ffffffffffffffff
trusted.glusterfs.quota.size=0x000000002e760000

# file: rhs/bricks/d5r13/newdir/d4
trusted.gfid=0x564860906ef14367b642a5f810793510
trusted.glusterfs.dht=0x00000001000000002aaaaaaa55555553
trusted.glusterfs.quota.23728d9f-7205-4509-968d-b90d296f4b4f.contri=0x000000001cd00000
trusted.glusterfs.quota.dirty=0x3000
trusted.glusterfs.quota.limit-set=0x0000000040000000ffffffffffffffff
trusted.glusterfs.quota.size=0x000000001cd00000

[root@nfs1 ~]# 


xattrs from nfs2,

[root@nfs2 ~]# getfattr -m . -d -e hex /rhs/bricks/d*r23/newdir/d4
getfattr: Removing leading '/' from absolute path names
# file: rhs/bricks/d1r23/newdir/d4
trusted.gfid=0x564860906ef14367b642a5f810793510
trusted.glusterfs.dht=0x00000001000000007ffffffeaaaaaaa7
trusted.glusterfs.quota.23728d9f-7205-4509-968d-b90d296f4b4f.contri=0x000000001ee40000
trusted.glusterfs.quota.dirty=0x3000
trusted.glusterfs.quota.limit-set=0x0000000040000000ffffffffffffffff
trusted.glusterfs.quota.size=0x000000001ee40000

# file: rhs/bricks/d3r23/newdir/d4
trusted.gfid=0x564860906ef14367b642a5f810793510
trusted.glusterfs.dht=0x0000000100000000d5555552ffffffff
trusted.glusterfs.quota.23728d9f-7205-4509-968d-b90d296f4b4f.contri=0x000000002e760000
trusted.glusterfs.quota.dirty=0x3000
trusted.glusterfs.quota.limit-set=0x0000000040000000ffffffffffffffff
trusted.glusterfs.quota.size=0x000000002e760000

# file: rhs/bricks/d5r23/newdir/d4
trusted.gfid=0x564860906ef14367b642a5f810793510
trusted.glusterfs.dht=0x00000001000000002aaaaaaa55555553
trusted.glusterfs.quota.23728d9f-7205-4509-968d-b90d296f4b4f.contri=0x000000001cd00000
trusted.glusterfs.quota.dirty=0x3000
trusted.glusterfs.quota.limit-set=0x0000000040000000ffffffffffffffff
trusted.glusterfs.quota.size=0x000000001cd00000


xattrs from nfs3,
[root@nfs3 ~]# getfattr -m . -d -e hex /rhs/bricks/d*r13/newdir/d4
getfattr: Removing leading '/' from absolute path names
# file: rhs/bricks/d2r13/newdir/d4
trusted.gfid=0x564860906ef14367b642a5f810793510
trusted.glusterfs.dht=0x0000000100000000aaaaaaa8d5555551
trusted.glusterfs.quota.23728d9f-7205-4509-968d-b90d296f4b4f.contri=0x0000000023470000
trusted.glusterfs.quota.dirty=0x3000
trusted.glusterfs.quota.limit-set=0x0000000040000000ffffffffffffffff
trusted.glusterfs.quota.size=0x0000000023470000

# file: rhs/bricks/d4r13/newdir/d4
trusted.gfid=0x564860906ef14367b642a5f810793510
trusted.glusterfs.dht=0x0000000100000000000000002aaaaaa9
trusted.glusterfs.quota.23728d9f-7205-4509-968d-b90d296f4b4f.contri=0x0000000020d00000
trusted.glusterfs.quota.dirty=0x3000
trusted.glusterfs.quota.limit-set=0x0000000040000000ffffffffffffffff
trusted.glusterfs.quota.size=0x0000000020d00000

# file: rhs/bricks/d6r13/newdir/d4
trusted.gfid=0x564860906ef14367b642a5f810793510
trusted.glusterfs.dht=0x0000000100000000555555547ffffffd
trusted.glusterfs.quota.23728d9f-7205-4509-968d-b90d296f4b4f.contri=0x000000001daf0000
trusted.glusterfs.quota.dirty=0x3000
trusted.glusterfs.quota.limit-set=0x0000000040000000ffffffffffffffff
trusted.glusterfs.quota.size=0x000000001daf0000


xattrs from nfs4,
[root@nfs4 ~]# getfattr -m . -d -e hex /rhs/bricks/d*r23/newdir/d4
getfattr: Removing leading '/' from absolute path names
# file: rhs/bricks/d2r23/newdir/d4
trusted.gfid=0x564860906ef14367b642a5f810793510
trusted.glusterfs.dht=0x0000000100000000aaaaaaa8d5555551
trusted.glusterfs.quota.23728d9f-7205-4509-968d-b90d296f4b4f.contri=0x0000000023470000
trusted.glusterfs.quota.dirty=0x3000
trusted.glusterfs.quota.limit-set=0x0000000040000000ffffffffffffffff
trusted.glusterfs.quota.size=0x0000000023470000

# file: rhs/bricks/d4r23/newdir/d4
trusted.gfid=0x564860906ef14367b642a5f810793510
trusted.glusterfs.dht=0x0000000100000000000000002aaaaaa9
trusted.glusterfs.quota.23728d9f-7205-4509-968d-b90d296f4b4f.contri=0x0000000020d00000
trusted.glusterfs.quota.dirty=0x3000
trusted.glusterfs.quota.limit-set=0x0000000040000000ffffffffffffffff
trusted.glusterfs.quota.size=0x0000000020d00000

# file: rhs/bricks/d6r23/newdir/d4
trusted.gfid=0x564860906ef14367b642a5f810793510
trusted.glusterfs.dht=0x0000000100000000555555547ffffffd
trusted.glusterfs.quota.23728d9f-7205-4509-968d-b90d296f4b4f.contri=0x000000001daf0000
trusted.glusterfs.quota.dirty=0x3000
trusted.glusterfs.quota.limit-set=0x0000000040000000ffffffffffffffff
trusted.glusterfs.quota.size=0x000000001daf0000
Comment 6 Saurabh 2013-10-22 07:27:19 EDT
Tried out a bit more.

I am able to cross the quota limit by 400%:
[root@quota1 ~]# gluster volume quota dist-rep2 list
                  Path                   Hard-limit Soft-limit   Used  Available
--------------------------------------------------------------------------------
/                                          1.0GB       80%       4.3GB  0Bytes


This time the files were created only in the root of the volume,
using a script similar to the one in comment#0.

But if I run the following "dd" command in a loop instead, I do not cross the quota limit:
time dd if=/dev/urandom of=f.n bs=102400 count=1024

as can be seen,

[root@quota1 ~]# gluster volume quota dist-rep2 list
                  Path                   Hard-limit Soft-limit   Used  Available
--------------------------------------------------------------------------------
/                                          1.0GB       80%       1.0GB  0Bytes


So just by changing the dd parameters the quota accounting breaks, and we cannot keep testing every combination of dd parameters against our quota implementation.
Comment 7 Saurabh 2013-10-24 07:43:48 EDT
The issue can still be seen in glusterfs-3.4.0.36rhs:

[root@quota1 ~]# gluster volume quota dist-rep3 list
                  Path                   Hard-limit Soft-limit   Used  Available
--------------------------------------------------------------------------------
/                                         40.0GB       80%      11.0GB  29.0GB
/qa1                                      10.0GB       80%       8.0GB   2.0GB
/qa1/dir1-data                             1.0GB       80%       1.0GB  0Bytes
/qa1/dir2-data                             1.0GB       80%       1.5GB  0Bytes
/qa1/dir3-data                             1.0GB       80%       1.5GB  0Bytes
Comment 8 Vivek Agarwal 2013-10-28 05:27:51 EDT
Can you do the same for a size 100GB at the least?
Comment 9 Saurabh 2013-11-06 06:05:59 EST
(In reply to Vivek Agarwal from comment #8)
> Can you do the same for a size 100GB at the least?

Yes, I have done that for directories.
I have created up to 500GB and "Disk quota exceeded" has occurred.
But check
BZ https://bugzilla.redhat.com/show_bug.cgi?id=1024355#c3

That BZ is about the used quota space not being added up properly.
As per this comment, I am already reporting that we are crossing the limit for the "root" of the volume.

That confuses me and makes me wonder why we are not aggregating properly, allowing the quota limit for root to be crossed.
Comment 10 Saurabh 2013-11-06 06:53:20 EST
Please ignore #c9, as BZ 1024355 has turned out to be NOTABUG.

In reply to #c8:
So far I have tried setting limits of up to 512GB on a directory and creating data inside it.
The data consisted of files created sequentially, and "Disk quota exceeded" appeared appropriately.
Comment 11 Raghavendra G 2013-11-06 07:45:33 EST
The issue is inconsistently reproducible. However, with NFS write caching turned off, we never saw the issue even once. So we suspect that it is caused by parallel writes. The following patch makes the accounting tighter by accounting for in-progress (parallel) writes too.
https://code.engineering.redhat.com/gerrit/#/c/15126/
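The suspected mechanism can be illustrated with a toy model (an illustration only, not the actual quota translator logic; the 900 MiB starting usage and 8-write window are made-up numbers): every write admitted in the same window checks the accounted usage before the earlier in-flight writes have been added to it, so all of them pass the check.

```shell
limit=$((1024 * 1024 * 1024))      # 1 GiB hard limit
write=$((128 * 1024 * 1024))       # 128 MiB per in-flight write
accounted=$((900 * 1024 * 1024))   # usage already accounted, under the limit

admitted=0
for w in 1 2 3 4 5 6 7 8; do                 # 8 parallel writes
  if [ "$accounted" -lt "$limit" ]; then     # each sees the same stale usage
    admitted=$((admitted + 1))
  fi
done
final=$((accounted + admitted * write))      # accounting catches up afterwards
echo "usage: $((final / 1024 / 1024)) MiB against a 1024 MiB limit"
```

All eight writes are admitted, ending at 1924 MiB; accounting for in-progress writes, as the patch above does, closes this window.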
Comment 12 Vivek Agarwal 2013-11-14 06:27:46 EST
Moving the known issues to Doc team, to be documented in release notes for U1
Comment 13 Vivek Agarwal 2013-11-14 06:29:35 EST
Moving the known issues to Doc team, to be documented in release notes for U1
Comment 14 Vivek Agarwal 2013-11-14 06:30:07 EST
Moving the known issues to Doc team, to be documented in release notes for U1
Comment 16 Pavithra 2013-11-25 02:21:42 EST
I've documented this as a known issue in the Big Bend Update 1 Release Notes. Here is the link:

http://documentation-devel.engineering.redhat.com/docs/en-US/Red_Hat_Storage/2.1/html/2.1_Update_1_Release_Notes/chap-Documentation-2.1_Update_1_Release_Notes-Known_Issues.html
Comment 18 Manikandan 2015-10-08 03:35:49 EDT
I tried reproducing it in 3.7.4

gluster v quota vol limit-usage /newdir/d4 512MB

1) gluster v quota vol list /newdir/d4             
                  Path                   Hard-limit  Soft-limit      Used  Available  Soft-limit exceeded? Hard-limit exceeded?
-------------------------------------------------------------------------------------------------------------------------------
/newdir/d4                               512.0MB     80%(409.6MB)   0Bytes 512.0MB              No                   No

2) Issued in /newdir/d4 -> for i in `seq 1 200`; do dd if=/dev/zero of=f.$i bs=64K count=$i; echo $i; done

3) Meanwhile tried renaming the directory while the IO is happening,

mv d4 d5

4) gluster v quota vol list /newdir/d5
                  Path                   Hard-limit  Soft-limit      Used  Available  Soft-limit exceeded? Hard-limit exceeded?
-------------------------------------------------------------------------------------------------------------------------------
/newdir/d5                               512.0MB     80%(409.6MB)  514.9MB  0Bytes             Yes                  Yes

5) While the I/O is in progress, it also shows "Disk quota exceeded":

128
dd: failed to open ‘f.129’: Disk quota exceeded
129
dd: failed to open ‘f.130’: Disk quota exceeded
130
dd: failed to open ‘f.131’: Disk quota exceeded
131
dd: failed to open ‘f.132’: Disk quota exceeded
132
dd: failed to open ‘f.133’: Disk quota exceeded
133
dd: failed to open ‘f.134’: Disk quota exceeded
134
dd: failed to open ‘f.135’: Disk quota exceeded
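The failure point matches the loop's arithmetic. Finding the first file whose write pushes the cumulative total of the bs=64K, count=$i loop past the 512 MB limit (shell arithmetic only):

```shell
bs=$((64 * 1024))                 # dd block size from step 2
limit=$((512 * 1024 * 1024))      # hard limit set on /newdir/d4
total=0
i=0
while [ "$total" -le "$limit" ]; do
  i=$((i + 1))
  total=$((total + bs * i))       # file f.$i is bs * i bytes
done
echo "limit first crossed while writing f.${i}"
```

f.128 is the crossing point, which is why the opens from f.129 onward fail with "Disk quota exceeded".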
