Description of problem:
Tried to mv data from one directory (dir1) to another directory (dir2) that already has some data in it. dir1 and dir2 both have a 2GB quota limit set, and both have reached this limit.

Version-Release number of selected component (if applicable):
glusterfs-3.4.0.34rhs-1.el6rhs.x86_64

How reproducible:
always

Steps to Reproduce:
1. Create a volume and start it.
2. Enable quota on the volume.
3. Mount the volume using NFS.
4. Create two directories inside the mount point.
5. gluster volume quota <vol-name> limit-usage /dir{1,2} 2GB
6. Create data inside dir1 until the limit is reached.
7. gluster volume quota <vol-name> list
8. Rename all the files inside dir1.
9. gluster volume quota <vol-name> list
10. Copy the data from dir1 to dir2.
11. gluster volume quota <vol-name> list
12. mv the data from dir1 to dir2.
13. gluster volume quota <vol-name> list

Actual results:
Everything works up to step 11; the output of step 11 is:

[root@quota1 ~]# gluster volume quota dist-rep list
Path              Hard-limit  Soft-limit  Used     Available
--------------------------------------------------------------------------------
/dir1             2.0GB       80%         2.0GB    0Bytes
/dir2             2.0GB       80%         2.0GB    64.0KB

Steps 12 and 13 show the failure. On the client I executed:

[root@rhsauto030 nfs-test]# mv -f dir1/* dir2/

After this, on the RHS node I executed:

[root@quota1 ~]# gluster volume quota dist-rep list
Path              Hard-limit  Soft-limit  Used      Available
--------------------------------------------------------------------------------
/dir1             2.0GB       80%         1.8GB     200.0MB
/dir2             2.0GB       80%         648.0MB   1.4GB

whereas nothing remains in /dir1, as per ls on the mount point:

[root@rhsauto030 nfs-test]# ls dir1/
[root@rhsauto030 nfs-test]#

Though after a long delay, quota list responds with this:

[root@quota1 ~]# gluster volume quota dist-rep list
Path              Hard-limit  Soft-limit  Used     Available
--------------------------------------------------------------------------------
/dir1             2.0GB       80%         1.8GB    200.0MB
/dir2             2.0GB       80%         2.0GB    0Bytes

There is still a miscalculation: 1.8GB is reported as used even though there is nothing inside /dir1.

Expected results:
/dir1 should report zero usage now. Also, since both dir1 and dir2 had already reached their quota limits, moving data from dir1 to dir2 should fail with "EDQUOT".

Additional info:
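For reference, a minimal shell sketch of the reproduction steps. Volume name, server/brick paths, mount point, and file sizes here are assumptions for illustration, not the exact ones used in this report:

# Assumed: servers server1/server2, volume "dist-rep", NFS mount at /mnt/nfs-test.
gluster volume create dist-rep replica 2 \
    server1:/bricks/b1 server2:/bricks/b1 \
    server1:/bricks/b2 server2:/bricks/b2
gluster volume start dist-rep
gluster volume quota dist-rep enable

mount -t nfs server1:/dist-rep /mnt/nfs-test
cd /mnt/nfs-test
mkdir dir1 dir2
gluster volume quota dist-rep limit-usage /dir1 2GB
gluster volume quota dist-rep limit-usage /dir2 2GB

# Step 6: fill dir1 until writes start failing at the quota limit.
i=1
while dd if=/dev/zero of=dir1/$i bs=1M count=100 2>/dev/null; do i=$((i+1)); done
gluster volume quota dist-rep list

# Steps 8-13: rename, copy, then move, checking accounting after each step.
for f in dir1/*; do mv "$f" "$f-rename"; done
gluster volume quota dist-rep list
cp dir1/* dir2/
gluster volume quota dist-rep list
mv -f dir1/* dir2/
gluster volume quota dist-rep list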
Saurabh, can you please provide the exact names of the files before and after the rename? It's a distribute volume and the names are important, since there can be different rename scenarios in dht depending on the names. It would be better if you copy/paste the exact commands you used.

regards,
Raghavendra.
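For context on why the names matter: dht hashes the file name to choose which brick a file lives on, so a rename can land the new name on a different hashed subvolume than the old one (leaving a linkto pointer behind on the new hashed subvolume), and quota accounting has to handle each of those cases. One way to observe which brick a given name maps to is the pathinfo virtual xattr — a sketch, assuming a FUSE mount at /mnt/fuse-test (this xattr is not exposed over NFS):

# Show the brick(s) backing the file before and after the rename.
getfattr -n trusted.glusterfs.pathinfo /mnt/fuse-test/dir1/1
mv /mnt/fuse-test/dir1/1 /mnt/fuse-test/dir1/1-rename
getfattr -n trusted.glusterfs.pathinfo /mnt/fuse-test/dir1/1-rename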
Well, most of the relevant information is provided in the description section, and it's 20 days since this test was executed, so I can't collect the history of the commands from the system. But I usually keep the names like $i, where i represents a numeral, and the rename would have been like: mv $i $i-rename.
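So the workload would have looked roughly like this — a sketch reconstructing the pattern described above; the file count and sizes are assumptions, not the exact values used:

# Create numerically named files, then rename each one to $i-rename.
for i in $(seq 1 20); do dd if=/dev/zero of=dir1/$i bs=1M count=100; done
for i in $(seq 1 20); do mv dir1/$i dir1/$i-rename; done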
Created attachment 820311 [details]
Test script to check quota accounting in different rename scenarios with distribute
https://code.engineering.redhat.com/gerrit/#/c/15123/
https://code.engineering.redhat.com/gerrit/#/c/14843/
https://code.engineering.redhat.com/gerrit/#/c/14991/

These three fixes collectively should fix the accounting issues during the different rename scenarios with distribute. The fixes were tested with the script attached in my previous comment.
I tried out a similar test to the one in the steps-to-reproduce section. Filled the directories with data:

[root@quota1 gluster]# gluster volume quota dist-rep list /dir5
Path              Hard-limit  Soft-limit  Used     Available
--------------------------------------------------------------------------------
/dir5             50.0GB      80%         50.0GB   0Bytes
[root@quota1 gluster]# gluster volume quota dist-rep list /dir4
Path              Hard-limit  Soft-limit  Used     Available
--------------------------------------------------------------------------------
/dir4             50.0GB      80%         50.0GB   0Bytes

Mount point:

[qa1@rhsauto005 nfs-test]$ mount | grep 10.70.42.186:/dist-rep
10.70.42.186:/dist-rep on /mnt/nfs-test type nfs (rw,addr=10.70.42.186)

Then tried to mv data from one directory to the other. It fails with "Permission denied", but I expect it to fail with "Disk quota exceeded":

[qa1@rhsauto005 nfs-test]$ ls -ldi dir4
11242701225515877322 drwxr-xr-x. 2 qa1 qa1 49152 Nov  8 11:20 dir4
[qa1@rhsauto005 nfs-test]$ ls -ldi dir5
12418057083054750413 drwxr-xr-x. 2 qa1 qa1 49152 Nov  8 11:22 dir5
[qa1@rhsauto005 nfs-test]$ ls -li dir4/1
11082972577280890575 -rw-rw-r--. 1 qa1 qa1 131072 Nov  8 11:04 dir4/1
[qa1@rhsauto005 nfs-test]$ mv dir4/1 dir5/
mv: cannot move `dir4/1' to `dir5/1': Permission denied

Please clarify whether EPERM is the intended error here; from my point of view it is incorrect.
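To confirm exactly which errno the rename gets back from the server (rather than inferring it from mv's message), the syscall can be traced on the client — a sketch, assuming mv issues rename/renameat under the hood:

[qa1@rhsauto005 nfs-test]$ strace -e trace=rename,renameat mv dir4/1 dir5/
# rename(...) = -1 EPERM would confirm that a permission/ACL check is
# rejecting the move before quota enforcement gets a chance to return EDQUOT.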
As per discussion with the developer, and based on his debugging, the comment 7 failure may be caused by an ACL issue. We are presently testing the same scenario on another cluster.
Latest update: on another cluster I tried the same scenario and did not see EPERM; instead, as expected, we got "EDQUOT":

[qa1@rhslong03 nfs-test-dist-rep1]$ mv dir4/1 dir5/
mv: cannot move `dir4/1' to `dir5/1': Disk quota exceeded

After the mv operation (the directories in consideration are dir4 and dir5):

[root@quota5 ~]# gluster volume quota dist-rep1 list
Path              Hard-limit  Soft-limit  Used      Available
--------------------------------------------------------------------------------
/                 1.0TB       80%         377.4GB   646.6GB
/qa1              200.0GB     80%         224.1GB   0Bytes
/qa2-rename       300.0GB     75%         0Bytes    300.0GB
/qa3/Downloads    50.0GB      80%         50.0GB    0Bytes
/dir4             50.0GB      80%         50.0GB    0Bytes
/dir5             50.0GB      80%         50.0GB    0Bytes
Moving to VERIFIED based on comment 9.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. http://rhn.redhat.com/errata/RHBA-2013-1769.html
I got this error on gluster 3.5.2 on Ubuntu 12.04 after extensive use of a directory:

# gluster volume quota sas03 list
Path              Hard-limit  Soft-limit  Used      Available
--------------------------------------------------------------------------------
/EgoTempSata03    322.0GB     80%         262.4GB   59.6GB

Actual usage:

# du -sh EgoTempSata03/
81G     EgoTempSata03/
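If someone else needs to debug this kind of drift: the value that quota list reports is kept as an xattr on the directory on each brick, so the stale figure can be compared directly against on-disk usage — a sketch, with the brick path /bricks/b1 as an assumption:

# On the client: actual on-disk usage.
du -sh EgoTempSata03/

# On each server, per brick: the size quota has accounted for the directory
# (trusted.glusterfs.quota.size, hex-encoded).
getfattr -d -m 'trusted.glusterfs.quota' -e hex /bricks/b1/EgoTempSata03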