Description of problem:
Volume type is 2x2 distributed-replicate, NFS mounted, with some data created on it. Quota was then enabled and a limit of 1GB was set. After deleting the data on the NFS mount point with "rm -rf", the quota listing for the volume reports an impossible "Used" value.

Version-Release number of selected component (if applicable):
glusterfs-3.4.0.33rhs-1.el6rhs.x86_64

How reproducible:
Executed the steps below and observed the result.

Steps to Reproduce:
1. NFS mount a volume and create some data on it.
2. Enable quota and set a limit of 1GB.
3. On the NFS mount point, delete the data using "rm -rf".
4. Execute the quota list command.

Actual results:
[root@nfs1 bricks]# gluster volume quota newvol22 list
                  Path                   Hard-limit Soft-limit   Used  Available
--------------------------------------------------------------------------------
/                                          1.0GB       80%  16384.0PB      1.0GB

Expected results:
When no data is present, "Used" should not display a value in PB.

Additional info:
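For context, 16384.0PB is exactly 2^64 bytes expressed in petabytes (1PB = 2^50 bytes), i.e. the unsigned 64-bit wrap-around point. This suggests the accounted usage went slightly negative after the deletes and is being read back as an unsigned 64-bit byte count. A minimal sketch of the arithmetic (illustrative only, not gluster code):

```shell
# 1 PB = 2^50 bytes, so a usage counter that wraps to ~2^64 bytes
# renders as exactly 16384 PB -- the bogus value in the listing above.
awk 'BEGIN { printf "%.1fPB\n", 2^64 / 2^50 }'   # prints 16384.0PB
```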
Created attachment 800455 [details]
sosreport
The bug is not reproducible on 3.4.0-rhs-33.

[root@vm1 nfs]# gluster volume quota dist-repl disable
Disabling quota will delete all the quota configuration. Do you want to continue? (y/n) y
volume quota : success
[root@vm1 nfs]# cp -rf /usr .
^C
[root@vm1 nfs]# gluster volume quota dist-repl enable
volume quota : success
[root@vm1 nfs]# gluster volume quota dist-repl limit-usage / 1GB
volume quota : success
[root@vm1 nfs]# gluster volume quota dist-repl list
                  Path                   Hard-limit Soft-limit   Used  Available
--------------------------------------------------------------------------------
/                                          1.0GB       80%    545.9MB    478.1MB
[root@vm1 nfs]# rm -rf *
[root@vm1 nfs]# gluster volume quota dist-repl list
                  Path                   Hard-limit Soft-limit   Used  Available
--------------------------------------------------------------------------------
/                                          1.0GB       80%     0Bytes      1.0GB

Can you please confirm whether the bug is still reproducible on your setup?
To reproduce this bug, perform the following steps:
1. Set up quota.
2. Set the soft-limit to 80%.
3. Do some intense I/O on the mount point, adding/deleting data; for example, run 10-20 instances of dbench.
4. "quota list" alternately prints absurd values:

[root@boo ~]# gluster volume quota pure_gold list
                  Path                   Hard-limit Soft-limit   Used  Available
--------------------------------------------------------------------------------
/                                         30.0GB       80%     77.9GB     0Bytes
[root@boo ~]# gluster volume quota pure_gold list
                  Path                   Hard-limit Soft-limit   Used  Available
--------------------------------------------------------------------------------
/                                         30.0GB       80%  16384.0PB     31.2GB
[root@boo ~]# gluster volume quota pure_gold list
                  Path                   Hard-limit Soft-limit   Used  Available
--------------------------------------------------------------------------------
/                                         30.0GB       80%     77.9GB     0Bytes
[root@boo ~]# gluster volume quota pure_gold list
                  Path                   Hard-limit Soft-limit   Used  Available
--------------------------------------------------------------------------------
/                                         30.0GB       80%  16384.0PB     31.2GB
[root@boo ~]#
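The alternating listings above are internally consistent with an unsigned wrap-around of a small negative usage: with a 30.0GB hard limit and 31.2GB reported available, the accounted usage must be about -1.2GB, and that same value read back as unsigned 64-bit renders as 16384.0PB. A hedged sketch of the inference (the -1.2GB deficit is derived from the listing, not taken from gluster internals):

```shell
awk 'BEGIN {
  GB = 2^30; PB = 2^50
  limit = 30 * GB
  used  = -1.2 * GB                                   # inferred negative usage
  printf "Used:      %.1fPB\n", (2^64 + used) / PB    # unsigned 64-bit view
  printf "Available: %.1fGB\n", (limit - used) / GB   # limit minus negative usage
}'
```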
dbench command line:
dbench -t 300 -c /mnt/fuse//11408/dbench/client.txt -s -S 10
I tried reproducing the test case with the steps given; however, I am not able to reproduce the issue.

[root@vm2 mnt]# /opt/qa/tools/dbench -t 300 -c /opt/qa/tools/client.txt -s -S 10
 Operation      Count    AvgLat    MaxLat
 ----------------------------------------
 NTCreateX      26473    37.432  1258.044
 Close          19093     1.743  1165.775
 Rename          1080    98.814   221.064
 Unlink          5587    19.179   238.631
 Qpathinfo      23772    19.884  1205.118
 Qfileinfo       3964     0.568    17.505
 Qfsinfo         4422     7.237   523.165
 Sfileinfo       2010     5.429    26.634
 Find            9104   112.855  1269.385
 WriteX         12582    11.840  1278.146
 ReadX          40123     1.391    44.671
 LockX             80     4.217    13.789
 UnlockX           80     4.001    10.956
 Flush           1752     6.362   181.152

Throughput 2.71531 MB/sec (sync open) (sync dirs) 10 clients 10 procs max_latency=1278.152 ms

[root@vm2 mnt]# gluster volume quota dist list
                  Path                   Hard-limit Soft-limit   Used  Available
--------------------------------------------------------------------------------
/                                         10.0GB       80%     0Bytes     10.0GB

Can you please confirm whether the bug is still reproducible on your setup?
I saw this issue on glusterfs-server-3.4.0.35rhs-1.el6rhs.x86_64.

[root@bvt-rhs1 ~]# gluster v quota dis-rep-1 list
                  Path                   Hard-limit Soft-limit   Used  Available
--------------------------------------------------------------------------------
/dis-rep-1                               100.0MB       80%  16384.0PB    100.0MB
/fuse                                      1.0GB       80%     49.6MB    974.4MB
/fuse/subdir1                            500.0MB       80%     0Bytes    500.0MB

Steps to reproduce:
1. Create a directory on the gluster mount point.
2. Set a quota on it (e.g. 1GB).
3. Run file I/O until you get a "quota exceeded" error.
Steps 4 and 5 should be done in parallel:
4. Delete the contents of the directory.
5. Set the quota on the directory to a lower value (e.g. 100MB).

Sosreports are available at http://rhsqe-repo.lab.eng.blr.redhat.com/sosreports/1010248/ since I am not able to attach them to bugzilla; one of the reports is larger than 20MB.
Per bug triage 10/17.
As a workaround I tried disabling and re-enabling quota on the volume, but even after the disable/enable cycle, setting a quota limit on the directory still shows the "Used" field in PB. See the exact flow of commands:

[root@bvt-rhs1 ~]# gluster v quota dis-rep-1 list
                  Path                   Hard-limit Soft-limit   Used  Available
--------------------------------------------------------------------------------
/dis-rep-1                               100.0MB       80%  16384.0PB    100.0MB
/fuse                                      1.0GB       80%     49.6MB    974.4MB
/fuse/subdir1                            500.0MB       80%     0Bytes    500.0MB
[root@bvt-rhs1 ~]# gluster v quota dis-rep-1 disable
Disabling quota will delete all the quota configuration. Do you want to continue? (y/n) y
volume quota : success
[root@bvt-rhs1 ~]# gluster v quota dis-rep-1 list
quota command failed : Quota is not enabled on volume dis-rep-1
[root@bvt-rhs1 ~]# gluster v quota dis-rep-1 enable
volume quota : success
[root@bvt-rhs1 ~]# gluster v quota dis-rep-1 list
quota: No quota configured on volume dis-rep-1
[root@bvt-rhs1 ~]# gluster v quota dis-rep-1 limit-usage /dis-rep-1 100MB
volume quota : success
[root@bvt-rhs1 ~]# gluster v quota dis-rep-1 list
                  Path                   Hard-limit Soft-limit   Used  Available
--------------------------------------------------------------------------------
/dis-rep-1                               100.0MB       80%  16384.0PB    100.0MB
The code which cleans up quota xattrs has not gone into the build you are using; hence disabling and re-enabling quota does not make the issue go away. The patch itself can be found at:
https://code.engineering.redhat.com/gerrit/#/c/14463/

As for the reproducibility of the issue, I tried reproducing by running dbench, but was unable to hit it. The issue may well be a race condition which cannot be reproduced consistently. Is the bug consistently reproducible on your setup? A test case which can consistently reproduce this issue would be of great help.

regards,
Raghavendra.
Sorry, missed out Lala's comment on reproducibility. Will get back after trying out the steps given.
[root@server1 ~]# gluster vol quota shanks-quota list
                  Path                   Hard-limit Soft-limit   Used  Available
--------------------------------------------------------------------------------
/                                        100.0GB       80%  16384.0PB    100.0GB
/shanks/Music                             10.0GB       80%     0Bytes     10.0GB
[root@server1 ~]#

Seeing this with glusterfs-server-3.4.0.38rhs-1.el6rhs.x86_64
And again:

[root@server1 ~]# gluster vol quota shanks-quota list
                  Path                   Hard-limit Soft-limit   Used  Available
--------------------------------------------------------------------------------
/                                        200.0GB       80%  16384.0PB    203.0GB
/shanks/Downloads                         50.0GB       80%  16384.0PB     53.0GB
[root@server1 ~]#

Version: glusterfs-server-3.4.0.39rhs-1.el6rhs.x86_64
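Both rows above fit the same pattern: "Available" exceeds the hard limit by 3.0GB in each case, implying the same inferred -3.0GB accounted usage on / and on /shanks/Downloads (a deficit on the subdirectory also propagates to /), and a -3GB signed value read back as unsigned 64-bit renders as 16384.0PB. A sketch of that inference (the -3GB figure is derived from the listing, not from gluster internals):

```shell
awk 'BEGIN {
  GB = 2^30; PB = 2^50
  # implied usage = hard-limit - available, per row:
  printf "/                 used: %.1fGB\n", 200 - 203   # -3.0GB
  printf "/shanks/Downloads used: %.1fGB\n",  50 -  53   # -3.0GB
  # -3GB read back as an unsigned 64-bit byte count:
  printf "displayed as:          %.1fPB\n", (2^64 - 3*GB) / PB
}'
```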
3.4.0.38 is more susceptible to this behaviour, since a regression was introduced in rename handling. The patch which fixes this particular regression can be found at:
https://code.engineering.redhat.com/gerrit/#/c/15125/
I am still seeing this with version glusterfs-server-3.4.0.39rhs-1.el6rhs.x86_64
Steps I followed to hit this:
1. Existing data in a directory (/home/shanks/Downloads in this case).
2. Enable quota.
3. Set a limit on / (200G in this case).
4. Set a limit on /shanks/Downloads (50G in this case).
5. rm -fr the data in /home/shanks/Downloads at the client.
6. Run quota list at the server.

Version: glusterfs-server-3.4.0.39rhs-1.el6rhs.x86_64
(In reply to Gowrishankar Rajaiyan from comment #15)
> I am still seeing this with version
> glusterfs-server-3.4.0.39rhs-1.el6rhs.x86_64

Shanks, the fix has not gone into glusterfs-server-3.4.0.39rhs yet.
Verified on glusterfs-3.4.0.44rhs-1.el6rhs.x86_64.rpm.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2013-1769.html