Bug 437181 - apparent nfs ext3 quota limit
Summary: apparent nfs ext3 quota limit
Keywords:
Status: CLOSED DUPLICATE of bug 594609
Alias: None
Product: Red Hat Enterprise Linux 5
Classification: Red Hat
Component: kernel
Version: 5.1
Hardware: i686
OS: Linux
Priority: low
Severity: medium
Target Milestone: rc
Assignee: fs-maint
QA Contact: Red Hat Kernel QE team
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2008-03-12 19:04 UTC by Garrett
Modified: 2013-02-25 17:01 UTC
CC List: 4 users

Fixed In Version: kernel-2.6.18-253.el5
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2013-02-25 17:01:14 UTC
Target Upstream Version:
Embargoed:


Attachments:

Description Garrett 2008-03-12 19:04:47 UTC
Description of problem: 
I'm trying to set up group quota limits on a 12TB SATA RAID array I've formatted
using lvm2 and ext3. Everything works fine except for the largest group, which
is already using 4.5TB of space.
   When I try to set the quota for this group using edquota, I can save the
number, but when I look at the quota I just set, it shows a much smaller
number. For instance, here's what I try to set (real example):

  Filesystem                   blocks       soft           
  /dev/mapper/jetstor0-data1 4182889320   5000000000

 I then save it successfully. But when I go back to edquota for this group, I
see this:

  Filesystem                   blocks       soft       
  /dev/mapper/jetstor0-data1 4182997324  705032704 

  The numbers seem to vary, but they are always smaller.
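
  The readback value above is consistent with the requested soft limit wrapping
at 32 bits: 5000000000 mod 2^32 = 705032704, exactly the number edquota shows
back. This is only a hypothesis at this point in the report (comments 6 and 7
below pin it down); a quick shell check of the arithmetic:

  $ echo $(( 5000000000 % 2**32 ))
  705032704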

Version-Release number of selected component (if applicable):
RHEL 5.1

How reproducible:
All other group and user quotas I set on this system work fine.
   The quota problem appears to start showing up above 3TB. The highest group
quota I've been able to use is 3.5TB. Above that level I start getting back
smaller, seemingly random numbers.
   For the moment I've set an inode limit instead, which is a less effective
quota method in this case.
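
  For scale: quota block limits are counted in 1 KiB blocks, so a 32-bit field
tops out one block short of 4 TiB, which would explain why limits up to about
3.5TB behave while larger ones come back mangled. (The 1 KiB unit is the usual
quota convention and an assumption here; comments 5-7 below confirm the 2^32
cutoff.)

  $ echo $(( 2**32 * 1024 ))
  4398046511104

  i.e. 2^32 blocks of 1 KiB is exactly 4 TiB.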

Steps to Reproduce:
1. edquota -g <group-name>
2. Set the block soft limit above roughly 3.5TB (e.g. 5000000000 blocks) and save.
3. Run edquota -g <group-name> again and compare the soft limit it now shows.
  
Actual results:
See above

Expected results:
See above

Additional info:
I'm using RAID 6

Comment 1 Garrett 2008-03-12 19:06:21 UTC
I'm using 32-bit RHEL 5.1

Comment 2 Ondrej Vasik 2008-03-13 11:01:21 UTC
Thanks for the report.
Since this is a Red Hat Enterprise Linux 5 issue, please contact product support -
they will have more time to analyze the problem, give it priority, and track it
internally. Please tell them that there is already an existing bugzilla for this
issue.

It could take some time before I can try this here myself (I would need access
to a 4TB+ machine), so I have a few quick questions:
1) Is it possible to check the issue with the latest Fedora quota? (RHEL-5 ships
quota 3.12, F-8 has 3.14, and Rawhide has 3.15.)
2) Is the varying soft limit really being applied, i.e. is it used when sending
warnings about an exceeded limit, or is it just the displayed number that is
wrong?

Comment 3 Garrett 2008-03-13 17:31:36 UTC
#1) Sorry, it is a production machine so I can't change it over to a Fedora system.

#2) I can't say for certain. It is certainly sending a warning about the
exceeded limit, but because it is a production machine I can't risk waiting to
see whether the wrong quotas actually take effect.

Comment 4 Ondrej Vasik 2008-03-14 08:43:40 UTC
Ok, I understand... 

So please contact official support (
https://www.redhat.com/support/process/production/ ) - they will have more time
and resources to find out more about this issue. Please point them at this
existing bugzilla. Thanks in advance.

Comment 5 Petr Pisar 2010-05-07 09:34:05 UTC
I tried to create a testbed on sparse files. I managed to get a 12TB ext3 file
system on top of a 2GB sparse file. However, after creating 4 sparse 1TB files
inside the file system, I realized that quota counts actually used blocks, i.e.
it ignores holes, so without a real 12TB device I'm not able to generate a
4.5TB workload. Maybe some kind of compressing block device could manage it.

While setting the soft quota to big numbers (on 64-bit Fedora 12) I found that
edquota refuses to set a number greater than 2^32 - 1 blocks, which at 1 KiB per
block equals just under 4 TiB:

edquota: Cannot set quota for group 500 from kernel on /dev/loop0: Numerical result out of range
edquota: Cannot write quota for 500 on /dev/loop0: Numerical result out of range

I will try to check it in RHEL-5. It seems the Fedora 12 kernel does check for
32-bit overflow.
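
For anyone who wants to recreate the setup, a minimal sketch of the loop-device
testbed described above (the device name, image path and sizes are placeholders,
and as noted, sparse files inside the filesystem will not simulate real block
usage):

# dd if=/dev/zero of=/var/tmp/12tb.img bs=1024 count=0 seek=$((12*1024*1024*1024))
# losetup /dev/loop0 /var/tmp/12tb.img
# mkfs.ext3 -m0 /dev/loop0
# mkdir -p /mnt/12tb
# mount -o grpquota /dev/loop0 /mnt/12tb
# quotacheck -cgm /mnt/12tb
# quotaon -g /mnt/12tb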

Comment 6 Petr Pisar 2010-05-07 12:15:16 UTC
Ok. I can reproduce the 2^32 limit for the soft group quota in RHEL-5.5 on
x86_64. The difference from Fedora 12 is that the kernel does not refuse such a
big number; instead the value overflows somewhere:

$ quota -g
Disk quotas for group petr (gid 500):
     Filesystem  blocks   quota   limit   grace   files   quota   limit   grace
     /dev/loop0      12  4000000000       0               1       0       0      

# edquota -g petr /mnt/12tb/
Disk quotas for group petr (gid 500):
  Filesystem                   blocks       soft       hard     inodes     soft     hard
  /dev/loop0                       12 5000000000          0          1        0

$ quota -g
Disk quotas for group petr (gid 500):
     Filesystem  blocks   quota   limit   grace   files   quota   limit   grace
     /dev/loop0      12  705032704       0               1       0       0

Comment 7 Petr Pisar 2010-05-07 13:50:17 UTC
RHEL-5.5:

This is fine:

# strace -equotactl setquota -g petr $((2**32-1)) 0 0 0 /mnt/12tb/
quotactl(Q_GETQUOTA|GRPQUOTA, "/dev/loop0", 500, {bhardlimit=0, bsoftlimit=4294967295, curspace=12288, ihardlimit=0, isoftlimit=0, curinodes=1, ...}) = 0
quotactl(Q_SETQUOTA|GRPQUOTA, "/dev/loop0", 500, {bhardlimit=0, bsoftlimit=4294967295, curspace=12288, ihardlimit=0, isoftlimit=0, curinodes=1, ...}) = 0

And this one fails:

# strace -equotactl setquota -g petr $((2**32)) 0 0 0 /mnt/12tb/
quotactl(Q_GETQUOTA|GRPQUOTA, "/dev/loop0", 500, {bhardlimit=0, bsoftlimit=4294967295, curspace=12288, ihardlimit=0, isoftlimit=0, curinodes=1, ...}) = 0
quotactl(Q_SETQUOTA|GRPQUOTA, "/dev/loop0", 500, {bhardlimit=0, bsoftlimit=4294967296, curspace=12288, ihardlimit=0, isoftlimit=0, curinodes=1, ...}) = 0

# repquota -g /mnt/12tb/
*** Report for group quotas on device /dev/loop0
Block grace time: 7days; Inode grace time: 7days
                        Block limits                File limits
Group           used    soft    hard  grace    used  soft  hard  grace
----------------------------------------------------------------------
quotactl(Q_SYNC|GRPQUOTA, "/dev/loop0", 0, NULL) = 0
root      --  159168       0       0              4     0     0       
petr      --      12       0       0              1     0     0

Here you can see the kernel reports 0:

# strace -equotactl setquota -g petr $((2**32)) 0 0 0 /mnt/12tb/
quotactl(Q_GETQUOTA|GRPQUOTA, "/dev/loop0", 500, {bhardlimit=0, bsoftlimit=0, curspace=12288, ihardlimit=0, isoftlimit=0, curinodes=1, ...}) = 0

This bug should be reassigned to the kernel team.

# uname -a
Linux rhel-5_5 2.6.18-194.el5 #1 SMP Tue Mar 16 21:52:39 EDT 2010 x86_64 x86_64 x86_64 GNU/Linux
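
A quick round-trip check makes the truncation boundary easy to see, and the same
loop can later be used to confirm a fixed kernel (the group name and mount point
follow the testbed from comment 5 and are assumptions):

# for lim in $((2**32 - 1)) $((2**32)) 5000000000; do
    setquota -g petr $lim 0 0 0 /mnt/12tb
    printf 'set=%s got=%s\n' $lim \
      "$(repquota -g /mnt/12tb | awk '$1 == "petr" {print $4}')"
  done

On the unfixed 2.6.18-194.el5 kernel the last two read back as 0 and 705032704
(matching the outputs above); on a fixed kernel all three values should match
what was set.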

Comment 8 Eric Sandeen 2013-02-25 17:01:14 UTC
Fixed in the latest RHEL 5; see the Fixed In Version field and bug #594609.

-Eric

*** This bug has been marked as a duplicate of bug 594609 ***

