Description of problem:
The size field of the quota feature is not updated properly after moving files from one directory to another. Currently this problem is seen when the volume is mounted over NFS.

Test execution results:

From the server:
-------------
[root@172 ~]# gluster volume quota vol3 enable
Enabling quota has been successful
[root@172 ~]# gluster volume quota vol3 limit-usage /dir 10GB
limit set on /dir
[root@172 ~]# gluster volume info vol3

Volume Name: vol3
Type: Distribute
Volume ID: 5c41ffcc-2dec-4cac-89cf-3920308cd427
Status: Started
Number of Bricks: 4
Transport-type: tcp
Bricks:
Brick1: 172.17.251.90:/export/vol3-dr
Brick2: 172.17.251.91:/export/vol3-drr
Brick3: 172.17.251.92:/export/vol3-ddr
Brick4: 172.17.251.93:/export/vol3-ddrr
Options Reconfigured:
features.limit-usage: /dir:10GB
features.quota: on

From the client:
-------------
172.17.251.90:/vol3 on /mnt/nfs-vol3 type nfs (rw,vers=3,addr=172.17.251.90)

[root@QA-31 ~]# cd /mnt/nfs-vol3/
[root@QA-31 nfs-vol3]# ls
dir
[root@QA-31 nfs-vol3]# cd dir/
[root@QA-31 dir]# for i in {1..5}; do dd if=/dev/zero of=f.$i bs=1024 count=1048576; done
1048576+0 records in
1048576+0 records out
1073741824 bytes (1.1 GB) copied, 20.6175 s, 52.1 MB/s
1048576+0 records in
1048576+0 records out
1073741824 bytes (1.1 GB) copied, 21.4808 s, 50.0 MB/s
1048576+0 records in
1048576+0 records out
1073741824 bytes (1.1 GB) copied, 14.3631 s, 74.8 MB/s
1048576+0 records in
1048576+0 records out
1073741824 bytes (1.1 GB) copied, 20.8098 s, 51.6 MB/s
1048576+0 records in
1048576+0 records out
1073741824 bytes (1.1 GB) copied, 16.0285 s, 67.0 MB/s
[root@QA-31 dir]# pwd
/mnt/nfs-vol3/dir
[root@QA-31 dir]# ls
f.1  f.2  f.3  f.4  f.5
[root@QA-31 dir]# cd ..
[root@QA-31 nfs-vol3]# ls
dir

From the server:
------------
[root@172 ~]# gluster volume quota vol3 list
path              limit_set         size
----------------------------------------------------------------------------------
/dir              10GB              5.0GB

From the client:
------------
[root@QA-31 nfs-vol3]# mkdir dir1
[root@QA-31 nfs-vol3]# mv dir/f.5 dir1/
[root@QA-31 nfs-vol3]# ls dir1/
f.5

From the server:
--------------
[root@172 ~]# gluster volume quota vol3 list
path              limit_set         size
----------------------------------------------------------------------------------
/dir              10GB              5.0GB
[root@172 ~]# gluster volume quota vol3 limit-usage /dir1 5GB
limit set on /dir1
[root@172 ~]# gluster volume quota vol3 list
path              limit_set         size
----------------------------------------------------------------------------------
/dir              10GB              5.0GB
/dir1             5GB               0Bytes

To verify the above behaviour, I executed similar steps again:

[root@QA-31 nfs-vol3]# mv dir/f.4 dir1/
[root@QA-31 nfs-vol3]# ls dir
f.1  f.2  f.3
[root@QA-31 nfs-vol3]# ls dir1/
f.4  f.5

[root@172 ~]# gluster volume quota vol3 list
path              limit_set         size
----------------------------------------------------------------------------------
/dir              10GB              5.0GB
/dir1             5GB               0Bytes
[root@172 ~]# gluster volume quota vol3 list
path              limit_set         size
----------------------------------------------------------------------------------
/dir              10GB              5.0GB
/dir1             5GB               1.0GB

Even after the second move, /dir still lists 5.0GB (expected 3.0GB, with two 1GB files moved out), while /dir1 eventually lists only 1.0GB (expected 2.0GB).

Version-Release number of selected component (if applicable):

From the server:
-------------
glusterfs-3.3.0.4rhs-34.el6rhs.x86_64
glusterfs-rdma-3.3.0.4rhs-34.el6rhs.x86_64
org.apache.hadoop.fs.glusterfs-glusterfs-0.20.2_0.2-1.noarch
glusterfs-devel-3.3.0.4rhs-34.el6rhs.x86_64
glusterfs-geo-replication-3.3.0.4rhs-34.el6rhs.x86_64
glusterfs-server-3.3.0.4rhs-34.el6rhs.x86_64
glusterfs-fuse-3.3.0.4rhs-34.el6rhs.x86_64

Linux 172.17.251.90 2.6.32-220.28.1.el6.x86_64 #1 SMP Wed Oct 3 12:26:28 EDT 2012 x86_64 x86_64 x86_64 GNU/Linux

[root@172 ~]# cat /etc/issue
Red Hat Storage release 2.0 for On-Premise
Kernel \r on an \m

From the client:
-----------
[root@QA-31 nfs-vol3]# cat /etc/issue
Red Hat Enterprise Linux Server release 6.2 (Santiago)
Kernel \r on an \m

[root@QA-31 nfs-vol3]# uname -a
Linux QA-31 2.6.32-220.26.1.el6.x86_64 #1 SMP Sat Aug 25 03:47:58 EDT 2012 x86_64 x86_64 x86_64 GNU/Linux

How reproducible:
Always

Steps to Reproduce:
1. Create a dist-rep (2x2) volume.
2. Start the volume.
3. Enable quota on the volume.
4. Mount the volume over NFS.
5. Create a directory, say "dir".
6. Set a quota limit, say 10GB, for /dir.
7. Create data inside dir: files of 1GB each.
8. Create a directory "dir1".
9. Move a file from dir to dir1.
10. Set a quota limit, say 5GB, on dir1.

(A consolidated repro sketch appears at the end of this comment.)

Actual results:
After step 10, the size field for /dir still lists 5.0GB (the moved 1GB file is still counted against the source), and the size field for /dir1 lists 0Bytes (the moved file is not counted against the destination).

Expected results:
The size fields should be updated after the move: /dir should list 4.0GB and /dir1 should list 1.0GB.

Additional info:
Snippet of the NFS logs:

[2012-10-29 06:23:47.469812] I [client-handshake.c:1636:select_server_supported_programs] 0-vol3-client-3: Using Program GlusterFS 3.3.0.4rhs, Num (1298437), Version (330)
[2012-10-29 06:23:47.480721] I [client-handshake.c:1433:client_setvolume_cbk] 0-vol3-client-3: Connected to 172.17.251.93:24012, attached to remote volume '/export/vol3-ddrr'.
[2012-10-29 06:23:47.480755] I [client-handshake.c:1445:client_setvolume_cbk] 0-vol3-client-3: Server and Client lk-version numbers are not same, reopening the fds
[2012-10-29 06:23:47.481269] I [client-handshake.c:453:client_set_lk_version_cbk] 0-vol3-client-3: Server lk version = 1
[2012-10-29 06:24:04.153592] W [quota.c:2177:quota_fstat_cbk] 0-vol3-quota: quota context not set in inode (gfid:1cb3d481-b08a-46ac-9011-bc990c5e48d9)
[2012-10-29 06:24:04.155900] W [quota.c:2177:quota_fstat_cbk] 0-vol3-quota: quota context not set in inode (gfid:1cb3d481-b08a-46ac-9011-bc990c5e48d9)
[2012-10-29 06:24:04.159734] W [quota.c:2177:quota_fstat_cbk] 0-vol3-quota: quota context not set in inode (gfid:1cb3d481-b08a-46ac-9011-bc990c5e48d9)
[2012-10-29 06:26:05.060571] W [quota.c:2177:quota_fstat_cbk] 0-vol3-quota: quota context not set in inode (gfid:be4c7978-6a14-4c95-9eb5-b9ebc742feff)
[2012-10-29 06:26:05.064131] W [quota.c:2177:quota_fstat_cbk] 0-vol3-quota: quota context not set in inode (gfid:be4c7978-6a14-4c95-9eb5-b9ebc742feff)
[2012-10-29 06:26:05.067136] W [quota.c:2177:quota_fstat_cbk] 0-vol3-quota: quota context not set in inode (gfid:be4c7978-6a14-4c95-9eb5-b9ebc742feff)
[2012-10-29 06:26:08.932348] W [quota.c:2177:quota_fstat_cbk] 0-vol3-quota: quota context not set in inode (gfid:1cb3d481-b08a-46ac-9011-bc990c5e48d9)
[2012-10-29 06:26:08.935337] W [quota.c:2177:quota_fstat_cbk] 0-vol3-quota: quota context not set in inode (gfid:1cb3d481-b08a-46ac-9011-bc990c5e48d9)
[2012-10-29 06:26:08.939619] W [quota.c:2177:quota_fstat_cbk] 0-vol3-quota: quota context not set in inode (gfid:1cb3d481-b08a-46ac-9011-bc990c5e48d9)
[2012-10-29 06:26:52.131155] W [quota.c:2177:quota_fstat_cbk] 0-vol3-quota: quota context not set in inode (gfid:1cb3d481-b08a-46ac-9011-bc990c5e48d9)
[2012-10-29 06:26:52.133973] W [quota.c:2177:quota_fstat_cbk] 0-vol3-quota: quota context not set in inode (gfid:1cb3d481-b08a-46ac-9011-bc990c5e48d9)
[2012-10-29 06:26:52.137595] W [quota.c:2177:quota_fstat_cbk] 0-vol3-quota: quota context not set in inode (gfid:1cb3d481-b08a-46ac-9011-bc990c5e48d9)
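The repeated "quota context not set in inode" warnings above come from quota_fstat_cbk() in quota.c and indicate that the quota translator has no context for those inodes, which appears to line up with the moved files not being re-accounted. As a quick check on another setup, the NFS log can be scanned for these warnings; a small sketch, assuming the default gluster NFS server log path (adjust if your log directory differs):

# Count quota-context warnings in the gluster NFS server log; a count that
# grows right after cross-directory moves points at this accounting bug.
grep -c 'quota context not set in inode' /var/log/glusterfs/nfs.log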
The sosreport collected from 172.17.251.90 can be accessed at this URL:
http://rhsqe-repo.lab.eng.blr.redhat.com/sosreports/871015/
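A consolidated repro sketch for the steps above, assuming the volume already exists, is started, and is mounted over NFS on the client; the volume name and mount point follow this report and should be adjusted for other setups:

#!/bin/bash
VOL=vol3             # volume name from this report
MNT=/mnt/nfs-vol3    # NFS mount point on the client

# -- on the NFS client: create the source directory --
mkdir -p $MNT/dir

# -- on a server node: enable quota and limit the source directory --
gluster volume quota $VOL enable
gluster volume quota $VOL limit-usage /dir 10GB

# -- on the NFS client: create five 1GB files, then move one across dirs --
for i in {1..5}; do
    dd if=/dev/zero of=$MNT/dir/f.$i bs=1024 count=1048576
done
mkdir -p $MNT/dir1
mv $MNT/dir/f.5 $MNT/dir1/

# -- on a server node: limit the destination and list the usage --
gluster volume quota $VOL limit-usage /dir1 5GB
gluster volume quota $VOL list
# Buggy result: /dir still shows 5.0GB and /dir1 shows 0Bytes.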
Will look into this.
This behaviour is observed only on an NFS mount. However, doing an ls in the directory fixes the issue:

[root@vm2 dir2]# mv ../dir/f.5 .
[root@vm2 dir2]# gluster volume quota dist list
                  Path                   Hard-limit Soft-limit     Used  Available
--------------------------------------------------------------------------------
/                                          10.0GB       80%      5.0GB     5.0GB
/dir                                       10.0GB       80%      5.0GB     5.0GB
/dir2                                      10.0GB       80%     0Bytes    10.0GB
[root@vm2 dir2]# ls
f.5
[root@vm2 dir2]# gluster volume quota dist list
                  Path                   Hard-limit Soft-limit     Used  Available
--------------------------------------------------------------------------------
/                                          10.0GB       80%      6.0GB     4.0GB
/dir                                       10.0GB       80%      5.0GB     5.0GB
/dir2                                      10.0GB       80%      1.0GB     9.0GB
Ignore the previous comment. The output is from a different test case.
Patch for this bug: https://code.engineering.redhat.com/gerrit/#/c/14844/

<snip>
cluster/dht: instruct marker whenever it shouldn't do accounting

This is needed for two reasons:
* since dht-linkfiles are internal, they shouldn't be accounted.
* hardlink handling in marker is broken. link/unlink of hardlinks
  present in same directory can break marker accounting. Hence, if
  src and dst are in same directory in case of rename, dht - if it
  breaks rename into link/unlink operations - should instruct marker
  to not to do accounting.

Change-Id: Id14127d526c472ebee7bec1cfcdcb79ed2e2be72
BUG: 871015
Signed-off-by: Raghavendra G <rgowdapp>
Reviewed-on: https://code.engineering.redhat.com/gerrit/14844
Reviewed-by: Krishnan Parthasarathi <kparthas>
Tested-by: Krishnan Parthasarathi <kparthas>
</snip>

This patch has introduced a glusterfs regression, tracked in bug 1025471. QE wants to verify this bug only after bug 1025471 is resolved. Hence moving this to ASSIGNED; please change the state with the next successful build.
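Once a build with this patch and the fix for bug 1025471 is available, re-verification could repeat the cross-directory move and confirm that both size fields change; a rough sketch, with the volume name and mount point as placeholders:

VOL=vol3             # placeholder volume name
MNT=/mnt/nfs-vol3    # placeholder NFS mount point on the client

gluster volume quota $VOL list    # on a server node: note the usage for /dir and /dir1
mv $MNT/dir/f.1 $MNT/dir1/        # on the client: move a 1GB file between directories
gluster volume quota $VOL list    # expected: /dir down by 1GB, /dir1 up by 1GB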
Tried to test the scenario on glusterfs-3.4.0.38rhs-1:

[root@quota1 ~]# gluster volume quota dist-rep5 list
                  Path                   Hard-limit Soft-limit     Used  Available
--------------------------------------------------------------------------------
/dir                                        5.0GB       80%      4.1GB   919.9MB
/                                          50.0GB       80%     24.1GB    25.9GB
/dir5                                       7.0GB       80%      5.0GB     2.0GB
/dir3                                      15.0GB       80%      5.0GB    10.0GB

Moving the contents of /dir to /dir3/dirn, back again, and then into a fresh directory /dir3/dirn1:

[root@rhsauto005 nfs-test]# mv dir/* dir3/dirn/
[root@rhsauto005 nfs-test]# mv dir3/dirn/* dir/
[root@rhsauto005 nfs-test]# mkdir dir3/dirn1
[root@rhsauto005 nfs-test]# mv dir/* dir3/dirn1/
[root@rhsauto005 nfs-test]# df -h /mnt/nfs-test/dir3
Filesystem               Size  Used Avail Use% Mounted on
10.70.42.186:/dist-rep5   15G  9.2G  5.9G  61% /mnt/nfs-test

[root@quota1 ~]# gluster volume quota dist-rep5 list
                  Path                   Hard-limit Soft-limit     Used  Available
--------------------------------------------------------------------------------
/dir                                        5.0GB       80%     0Bytes     5.0GB
/                                          50.0GB       80%     24.1GB    25.9GB
/dir5                                       7.0GB       80%      5.0GB     2.0GB
/dir3                                      15.0GB       80%      9.1GB     5.9GB

The size fields are now updated correctly after the moves: /dir dropped to 0Bytes and /dir3 grew from 5.0GB to 9.1GB, matching the df output.
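As an additional cross-check of the accounting, the Used column can be compared against du on the mount; a small sketch, with the volume name and paths taken from the transcript above:

# on the NFS client: actual usage under dir3 as seen through the mount
du -sh /mnt/nfs-test/dir3

# on a server node: marker-based accounting for /dir3; the two should agree
gluster volume quota dist-rep5 list | grep '/dir3'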
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2013-1769.html