Bug 1324059

Summary: quota: check inode limits only when new file/dir is created and not with write FOP
Product: [Red Hat Storage] Red Hat Gluster Storage
Reporter: Vijaikumar Mallikarjuna <vmallika>
Component: quota
Assignee: Raghavendra G <rgowdapp>
Status: CLOSED EOL
QA Contact: Rahul Hinduja <rhinduja>
Severity: unspecified
Priority: unspecified
Version: rhgs-3.1
CC: amukherj, bugs, hgowtham, rcyriac, rhinduja, rhs-bugs, smohan
Target Milestone: ---
Target Release: ---
Keywords: ZStream
Hardware: Unspecified
OS: Unspecified
Doc Type: Bug Fix
Clone Of: 1324058
Last Closed: 2018-10-23 07:34:59 UTC
Type: Bug
Bug Depends On: 1323486, 1324058

Description Vijaikumar Mallikarjuna 2016-04-05 12:35:40 UTC
+++ This bug was initially created as a clone of Bug #1324058 +++

+++ This bug was initially created as a clone of Bug #1323486 +++

The test case below fails with "Disk quota exceeded" even though there is space available.

1) Create a volume
2) # gluster volume quota vol1 limit-objects /test_dir 10
3) Exceed the inode limit: create 9 files in a loop (test_dir is already accounted as 1 inode used, so we should be able to create another 9 files)

   for i in {1..9}; do
      touch /mnt/test_dir/f$i
   done
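
At this point the object count for /test_dir should be at its limit. One way to confirm this (a hedged example, assuming the volume vol1 and the limit set up as above) is to list the object usage:

   # gluster volume quota vol1 list-objects /test_dir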

Now the inode limit is full, but no usage (size) limit has been set, so any write operation on the existing files should succeed. Instead it fails with "Disk quota exceeded":

dd if=/dev/zero of=/mnt/test_dir/f1 bs=256k count=4 oflag=sync
write failed: Disk quota exceeded
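
For reference, a minimal end-to-end reproducer sketch along the lines of the steps above (host name, brick path and mount point are placeholders, not taken from the original report; quota has to be enabled before limit-objects can be set):

   gluster volume create vol1 host1:/bricks/b1      # placeholder brick
   gluster volume start vol1
   gluster volume quota vol1 enable
   gluster volume quota vol1 limit-objects /test_dir 10
   mount -t glusterfs host1:/vol1 /mnt
   mkdir /mnt/test_dir
   for i in {1..9}; do touch /mnt/test_dir/f$i; done
   # No usage limit is set, so this write to an existing file should succeed,
   # but before the fix it fails with "Disk quota exceeded"
   dd if=/dev/zero of=/mnt/test_dir/f1 bs=256k count=4 oflag=sync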

--- Additional comment from Vijay Bellur on 2016-04-05 08:31:45 EDT ---

REVIEW: http://review.gluster.org/13911 (quota: check inode limits only when new file/dir is created) posted (#1) for review on master by Vijaikumar Mallikarjuna (vmallika)

--- Additional comment from Vijay Bellur on 2016-04-05 08:34:14 EDT ---

REVIEW: http://review.gluster.org/13912 (quota: check inode limits only when new file/dir is created) posted (#1) for review on release-3.7 by Vijaikumar Mallikarjuna (vmallika)

Comment 2 Vijaikumar Mallikarjuna 2016-04-05 12:37:10 UTC
Patch submitted upstream:
mainline: http://review.gluster.org/#/c/13911/
release-3.7: http://review.gluster.org/#/c/13912/

Comment 6 Atin Mukherjee 2016-09-17 14:58:09 UTC
Upstream mainline : http://review.gluster.org/13911
Upstream 3.8: Available as part of branching from mainline

And the fix is available in rhgs-3.2.0 as part of rebase to GlusterFS 3.8.4.

Comment 10 Anil Shah 2016-10-19 12:31:54 UTC
1) create volume
2) # gluster volume quota vol1 limit-objects /test_dir 10
3) Exceed the inode limit: create 9 files in a loop

On client:
=======================================

[root@rhsqa7 test_dir]# for i in {1..9}; do touch /mnt/fuse/test_dir/f$i;   done

[root@rhsqa7 test_dir]# for i in {1..9}; do dd if=/dev/zero of=/mnt/fuse/test_dir/f$i bs=256k count=4 ;done
4+0 records in
4+0 records out
1048576 bytes (1.0 MB) copied, 0.0149057 s, 70.3 MB/s
4+0 records in
4+0 records out
1048576 bytes (1.0 MB) copied, 0.0130044 s, 80.6 MB/s
4+0 records in
4+0 records out
1048576 bytes (1.0 MB) copied, 0.0127165 s, 82.5 MB/s
4+0 records in
4+0 records out
1048576 bytes (1.0 MB) copied, 0.0220667 s, 47.5 MB/s
4+0 records in
4+0 records out
1048576 bytes (1.0 MB) copied, 0.0129507 s, 81.0 MB/s
4+0 records in
4+0 records out
1048576 bytes (1.0 MB) copied, 0.0216383 s, 48.5 MB/s
4+0 records in
4+0 records out
1048576 bytes (1.0 MB) copied, 0.0216434 s, 48.4 MB/s
4+0 records in
4+0 records out
1048576 bytes (1.0 MB) copied, 0.0217204 s, 48.3 MB/s
4+0 records in
4+0 records out
1048576 bytes (1.0 MB) copied, 0.0213431 s, 49.1 MB/s
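
A natural follow-up check, not part of the captured output above but implied by the fix (only new file/dir creation should hit the inode limit), is that creating a 10th file still fails:

[root@rhsqa7 test_dir]# touch /mnt/fuse/test_dir/f10
(expected to fail with "Disk quota exceeded", since test_dir plus f1..f9 already consume the object limit of 10)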

Bug verified on build glusterfs-3.8.4-2.el7rhgs.x86_64

Comment 18 hari gowtham 2018-10-23 07:34:59 UTC
Hi,

Even though the bug has been verified, I'm closing this, as quota is not being actively supported.

Regards,
Hari.