Bug 437181
Summary: | apparent NFS ext3 quota limit | ||
---|---|---|---|
Product: | Red Hat Enterprise Linux 5 | Reporter: | Garrett <gjohnsit> |
Component: | kernel | Assignee: | fs-maint |
Status: | CLOSED DUPLICATE | QA Contact: | Red Hat Kernel QE team <kernel-qe> |
Severity: | medium | Docs Contact: | |
Priority: | low | ||
Version: | 5.1 | CC: | cwalker, ovasik, pasteur, ppisar |
Target Milestone: | rc | ||
Target Release: | --- | ||
Hardware: | i686 | ||
OS: | Linux | ||
Whiteboard: | |||
Fixed In Version: | kernel-2.6.18-253.el5 | Doc Type: | Bug Fix |
Doc Text: | Story Points: | --- | |
Clone Of: | Environment: | ||
Last Closed: | 2013-02-25 17:01:14 UTC | Type: | --- |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | Category: | --- | |
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: |
Description
Garrett
2008-03-12 19:04:47 UTC
I'm using 32-bit RHEL 5.1

---

Thanks for the report. While this is a Red Hat Enterprise Linux 5 issue, please contact product support; they may have more time to analyze the problem, give priority to the issue, and track it internally. Please tell them that a bugzilla already exists for this issue. It could take me some time to try it here (to get access to a 4TB+ machine), so I have a few quick questions:

1) Is it possible to check the issue with the latest Fedora quota? (RHEL-5 has 3.12, F-8 has 3.14, and Rawhide has 3.15.)

2) Is the varying soft limit really being applied? That is, is it used for sending warnings about an exceeded limit, or is it just a wrongly displayed number?

---

#1) Sorry, it is a production machine, so I can't change it over to a Fedora system.

#2) I can't say for certain. It is certainly sending a warning about the exceeded limit, but because it is a production machine I can't wait to see whether the wrong quotas take effect.

---

OK, I understand. So please contact official support ( https://www.redhat.com/support/process/production/ ); they may have more time and resources to investigate this issue. Please inform them about the existing bugzilla. Thanks in advance.

---

I tried to create a testbed on sparse files. I managed to get a 12TB ext3 file system on top of a 2GB sparse file. However, after creating 4 sparse 1TB files inside the file system, I found that quota counts used blocks, i.e. files without holes. Therefore I'm not able (without a real 12TB device) to get a 4.5 TB workload. Maybe some kind of compressing block device could manage it.

While setting the soft quota to big numbers (on 64-bit Fedora 12), I found that edquota refuses to set a number greater than 2^32 - 1. That equals 4 TiB:

    edquota: Cannot set quota for group 500 from kernel on /dev/loop0: Numerical result out of range
    edquota: Cannot write quota for 500 on /dev/loop0: Numerical result out of range

I will try to check it in RHEL-5.
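The sparse-file observation above (quota counts allocated blocks, not logical file size) can be reproduced without any loop device. A minimal sketch, using a throwaway temporary file for illustration:

```python
import os
import tempfile

# Create a 1 GiB sparse file: the logical size is set, but no data
# blocks are allocated, so disk-usage accounting barely moves.
with tempfile.NamedTemporaryFile(delete=False) as f:
    path = f.name
os.truncate(path, 1 << 30)  # grow to 1 GiB without writing data

st = os.stat(path)
print("logical size:", st.st_size)             # 1073741824
print("allocated bytes:", st.st_blocks * 512)  # near 0: holes occupy no blocks

os.remove(path)
```

`st_blocks` is reported in 512-byte units; it reflects the blocks quota actually charges, which is why four sparse 1TB files could not produce a 4.5 TB quota workload.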
It seems the Fedora 12 kernel does check for 32-bit overflow.

---

OK. I can reproduce the 2^32 limit for the soft group quota in RHEL-5.5 on x86_64. The difference from Fedora 12 is that the kernel does not refuse such a big number, and the number overflows somewhere:

    $ quota -g
    Disk quotas for group petr (gid 500):
         Filesystem  blocks       quota  limit  grace  files  quota  limit  grace
         /dev/loop0      12  4000000000      0             1      0      0

    # edquota -g petr /mnt/12tb/
    Disk quotas for group petr (gid 500):
      Filesystem  blocks        soft  hard  inodes  soft  hard
      /dev/loop0      12  5000000000     0       1     0

    $ quota -g
    Disk quotas for group petr (gid 500):
         Filesystem  blocks      quota  limit  grace  files  quota  limit  grace
         /dev/loop0      12  705032704      0             1      0      0

RHEL-5.5: This is fine:

    # strace -equotactl setquota -g petr $((2**32-1)) 0 0 0 /mnt/12tb/
    quotactl(Q_GETQUOTA|GRPQUOTA, "/dev/loop0", 500, {bhardlimit=0, bsoftlimit=4294967295, curspace=12288, ihardlimit=0, isoftlimit=0, curinodes=1, ...}) = 0
    quotactl(Q_SETQUOTA|GRPQUOTA, "/dev/loop0", 500, {bhardlimit=0, bsoftlimit=4294967295, curspace=12288, ihardlimit=0, isoftlimit=0, curinodes=1, ...}) = 0

And this one fails:

    # strace -equotactl setquota -g petr $((2**32)) 0 0 0 /mnt/12tb/
    quotactl(Q_GETQUOTA|GRPQUOTA, "/dev/loop0", 500, {bhardlimit=0, bsoftlimit=4294967295, curspace=12288, ihardlimit=0, isoftlimit=0, curinodes=1, ...}) = 0
    quotactl(Q_SETQUOTA|GRPQUOTA, "/dev/loop0", 500, {bhardlimit=0, bsoftlimit=4294967296, curspace=12288, ihardlimit=0, isoftlimit=0, curinodes=1, ...}) = 0

    # repquota -g /mnt/12tb/
    *** Report for group quotas on device /dev/loop0
    Block grace time: 7days; Inode grace time: 7days
                            Block limits                File limits
    Group           used    soft    hard  grace    used  soft  hard  grace
    ----------------------------------------------------------------------
    quotactl(Q_SYNC|GRPQUOTA, "/dev/loop0", 0, NULL) = 0
    root      --  159168       0       0              4     0     0
    petr      --      12       0       0              1     0     0

Here you can see the kernel reports 0:

    # strace -equotactl setquota -g petr $((2**32)) 0 0 0 /mnt/12tb/
    quotactl(Q_GETQUOTA|GRPQUOTA, "/dev/loop0", 500, {bhardlimit=0, bsoftlimit=0, curspace=12288, ihardlimit=0, isoftlimit=0, curinodes=1, ...}) = 0

This bug should be reassigned to the kernel team.

    # uname -a
    Linux rhel-5_5 2.6.18-194.el5 #1 SMP Tue Mar 16 21:52:39 EDT 2010 x86_64 x86_64 x86_64 GNU/Linux

---

Fixed in latest RHEL5, see fixed-in-version and bug #594609

-Eric

---

*** This bug has been marked as a duplicate of bug 594609 ***
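The 705032704 figure in the transcript above is exactly the low 32 bits of the requested limit, and the 4 TiB ceiling follows from the quota block size. A quick arithmetic check, assuming 1 KiB quota blocks and 32-bit on-disk limit fields (which matches the behavior shown):

```python
# quota(1) reports limits in 1 KiB blocks; the pre-vfsv1 quota formats
# store block limits in 32-bit fields, so values wrap modulo 2**32.
requested = 5_000_000_000      # blocks asked for via edquota
stored = requested % 2**32     # what a 32-bit field actually keeps
print(stored)                  # 705032704, matching the quota -g output

# The largest representable limit, 2**32 - 1 blocks of 1 KiB, is 4 TiB:
max_bytes = (2**32 - 1) * 1024
print(max_bytes / 2**40)       # ~4.0 TiB, hence the EOVERFLOW at 2**32
```

This is why Fedora's kernel returns "Numerical result out of range" for limits of 2^32 blocks and above, while the older RHEL-5 kernel silently truncates them.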