Description of problem:
After setting limit-usage, disabling quota, and then re-enabling it, the client-side "df" is expected to show the actual volume size, since no quota configuration exists after re-enabling. However, after re-enabling quota, "df" on the client still reflects the limit-usage that was set before quota was disabled.

Version-Release number of selected component (if applicable):
glusterfs-server-3.4.0.33rhs-1.el6rhs.x86_64

How reproducible:
Always

Steps to Reproduce:
0. Ensure that the client has already mounted the volume.
1. Enable quota on the volume specified in step 0.
2. Enable the features.quota-deem-statfs volume option.
3. Set limit-usage to a certain value (35GB in this case).
   Result: "df -h" shows the size set in limit-usage as the total size of the mount, as expected.
4. Disable quota.
   Result: "df -h" now shows the actual size from before any quota was set, as expected.
5. Enable quota.
6. Execute "df -h" on the client.

Actual results:
"df -h" shows the limit-usage value (35GB), which was set before disabling quota, as the size of the mount. However, quota list shows "quota: No quota configured on volume vmstore".

Expected results:
Since quota was disabled and then enabled, and the list command does not show any quota configured, the client should not inherit the limit-usage that was set before quota was disabled.

Additional info:
Unmounting and re-mounting the volume doesn't help either.
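For quick reference, a condensed sketch of the reproduction steps above (the volume name "vmstore" matches the transcript below; the client mount point "/mnt/vmstore" is an illustrative placeholder):

# On the server (quota assumed off initially; volume already mounted on the client):
gluster volume quota vmstore enable
gluster volume set vmstore features.quota-deem-statfs on
gluster volume quota vmstore limit-usage / 35GB

# On the client:
df -h /mnt/vmstore    # Size column shows 35G, as expected

# On the server:
gluster volume quota vmstore disable    # answer 'y' at the prompt

# On the client:
df -h /mnt/vmstore    # Size column shows the actual volume size, as expected

# On the server:
gluster volume quota vmstore enable
gluster volume quota vmstore list       # "quota: No quota configured on volume vmstore"

# On the client:
df -h /mnt/vmstore    # Bug: Size column shows 35G again

The actual transcript from the reported run follows.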
RHS:
[root@ninja vmstore]# gluster volume quota vmstore limit-usage / 40GB
volume quota : success
[root@ninja vmstore]#
[root@ninja vmstore]# gluster vol info vmstore

Volume Name: vmstore
Type: Replicate
Volume ID: c96de15d-024e-416d-a1c5-ff5fef44b25b
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 10.70.34.68:/rhs1/vmstore
Brick2: 10.70.34.56:/rhs1/vmstore
Options Reconfigured:
features.quota-deem-statfs: on
features.quota: on
storage.owner-gid: 107
storage.owner-uid: 107
network.remote-dio: enable
cluster.eager-lock: enable
performance.stat-prefetch: off
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
[root@ninja vmstore]#

Client:
[root@rhs-client37 ~]# df -h
Filesystem                            Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup-LogVol01         3.6T  2.1G  3.4T   1% /
tmpfs                                 7.8G     0  7.8G   0% /dev/shm
/dev/sda1                             485M   40M  421M   9% /boot
ninja.lab.eng.blr.redhat.com:vmstore   40G   12G   29G  30% /var/lib/libvirt/images
[root@rhs-client37 ~]#

RHS:
[root@ninja vmstore]# gluster volume quota vmstore disable
Disabling quota will delete all the quota configuration. Do you want to continue? (y/n) y
volume quota : success
[root@ninja vmstore]#

Client:
[root@rhs-client37 ~]# df -h
Filesystem                            Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup-LogVol01         3.6T  2.1G  3.4T   1% /
tmpfs                                 7.8G     0  7.8G   0% /dev/shm
/dev/sda1                             485M   40M  421M   9% /boot
ninja.lab.eng.blr.redhat.com:vmstore  195G   14G  181G   7% /var/lib/libvirt/images
[root@rhs-client37 ~]#

RHS:
[root@ninja vmstore]# gluster volume quota vmstore enable
volume quota : success
[root@ninja vmstore]# gluster volume quota vmstore list
quota: No quota configured on volume vmstore
[root@ninja vmstore]#

Client:
[root@rhs-client37 ~]# df -h
Filesystem                            Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup-LogVol01         3.6T  2.1G  3.4T   1% /
tmpfs                                 7.8G     0  7.8G   0% /dev/shm
/dev/sda1                             485M   40M  421M   9% /boot
ninja.lab.eng.blr.redhat.com:vmstore   40G   12G   29G  30% /var/lib/libvirt/images
[root@rhs-client37 ~]#
Shanks,

The quota limit configuration is stored in extended attributes of the directory on which the quota is set. We do not remove these extended attributes on quota-disable. The behaviour of "df -h" is determined solely by the presence of the quota extended attributes and the "deem-statfs" value set on the volume, which is why it displays statistics as though quota were still enabled.

In the current implementation, targeted for Big Bend, we have decided not to clean up the quota limit extended attributes on quota-disable, since doing so requires crawling the entire volume. I think we need to revisit targeting a fix for this in Update 1.
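For illustration, the leftover attribute can be inspected directly on a brick (a sketch using the brick path from the transcript above; the xattr name matches the getfattr output shown later in this bug):

# On a server hosting a brick, after quota-disable:
getfattr -m . -d -e hex /rhs1/vmstore
# If the cleanup has not run, the output still contains a line like:
#   trusted.glusterfs.quota.limit-set=0x<limit in hex>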
Per bug triage on 10/17: a workaround is needed if this is not fixed.
*** Bug 1002961 has been marked as a duplicate of this bug. ***
[root@server1 ~]# gluster vol info shanks-quota

Volume Name: shanks-quota
Type: Distributed-Replicate
Volume ID: 95c6d6fc-2977-461b-b68e-347ad9e85b5c
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 10.70.43.3:/rhs/busy-files/quota1
Brick2: 10.70.43.199:/rhs/busy-files/quota1
Brick3: 10.70.43.156:/rhs/busy-files/quota1
Brick4: 10.70.43.1:/rhs/busy-files/quota1
Options Reconfigured:
features.quota-deem-statfs: on
features.quota: on

[root@server1 ~]# gluster vol quota shanks-quota list
                  Path                   Hard-limit Soft-limit   Used  Available
--------------------------------------------------------------------------------
/                                          70.0GB       80%     0Bytes    70.0GB
[root@server1 ~]#

Client:
[root@localhost ~]# df -h
Filesystem                    Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root   42G  3.4G   36G   9% /
tmpfs                         3.8G   72K  3.8G   1% /dev/shm
/dev/vda1                     485M   83M  378M  18% /boot
10.70.43.3:/shanks-quota       70G     0   70G   0% /shanks-quota-fuse
[root@localhost ~]#

Server:
[root@server1 ~]# gluster vol quota shanks-quota disable
Disabling quota will delete all the quota configuration. Do you want to continue? (y/n) y
volume quota : success
[root@server1 ~]#

Client:
[root@localhost ~]# df -h
Filesystem                    Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root   42G  3.4G   36G   9% /
tmpfs                         3.8G   72K  3.8G   1% /dev/shm
/dev/vda1                     485M   83M  378M  18% /boot
10.70.43.3:/shanks-quota      200G   30G  171G  15% /shanks-quota-fuse
[root@localhost ~]#

Server:
[root@server1 ~]# gluster vol quota shanks-quota enable
volume quota : success
[root@server1 ~]#

Client:
[root@localhost ~]# df -h
Filesystem                    Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root   42G  3.4G   36G   9% /
tmpfs                         3.8G   72K  3.8G   1% /dev/shm
/dev/vda1                     485M   83M  378M  18% /boot
10.70.43.3:/shanks-quota      100G   26G   75G  26% /shanks-quota-fuse   <<<<<<<<
[root@localhost ~]#
After disabling and re-enabling quota, the client shows the brick size from server1, which in this case is 100GB, instead of the volume size, which is 200GB (100GB per brick in the distributed-replicate layout).
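For reference, the expected arithmetic (assuming each brick is ~100GB, as implied by the transcript above):

  4 bricks x 100GB, replica count 2
  => 2 distribute subvolumes x 100GB each
  => 200GB usable volume size

After the disable/enable cycle, the client instead reports 100GB, i.e., the size of a single replica pair.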
Quota on directory:
[root@localhost shanks-quota2-fuse]# getfattr -m . -d -e hex test1
# file: test1
trusted.glusterfs.quota.limit-set=0x0000000c80000000ffffffffffffffff
[root@localhost shanks-quota2-fuse]#

After removing quota limit-usage from test1:
[root@localhost shanks-quota2-fuse]# getfattr -m . -d -e hex test1
[root@localhost shanks-quota2-fuse]#

Quota on volume:
[root@localhost shanks-quota2-fuse]# getfattr -m . -d -e hex /shanks-quota2-fuse
getfattr: Removing leading '/' from absolute path names
# file: shanks-quota2-fuse
trusted.glusterfs.quota.limit-set=0x0000003200000000ffffffffffffffff
trusted.glusterfs.volume-id=0x5bb529022c5043e2898e8d51d15e9cf6
[root@localhost shanks-quota2-fuse]#

After disabling quota:
[root@localhost shanks-quota2-fuse]# getfattr -m . -d -e hex /shanks-quota2-fuse
getfattr: Removing leading '/' from absolute path names
# file: shanks-quota2-fuse
trusted.glusterfs.volume-id=0x5bb529022c5043e2898e8d51d15e9cf6
[root@localhost shanks-quota2-fuse]#

Note: the admin needs to run 'ps ax | grep "quota-remove-xattr.sh"' to ensure that the xattr removal activity is complete. This information should be included in the administration guide.
(In reply to Gowrishankar Rajaiyan from comment #8)
...
> Note: the admin needs to run 'ps ax | grep "quota-remove-xattr.sh"' to
> ensure that the xattr removal activity is complete. This information
> should be included in the administration guide.

Now covered in doc bug 1025792.
One can determine whether the quota xattr cleanup is complete by checking whether the script that performs the xattr removal is still running. The script is located at /usr/libexec/glusterfs/quota/quota-remove-xattr.sh, i.e.:

# ps ax | grep "quota-remove-xattr.sh"
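As a convenience, a minimal sketch of waiting for the cleanup to finish before re-enabling quota (the polling loop and interval are illustrative, not part of the product):

# Poll until the cleanup script is no longer running.
while pgrep -f "quota-remove-xattr.sh" > /dev/null; do
    sleep 5
done
echo "quota xattr cleanup complete"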
Once quota-deem-statfs is set to off and quota is then disabled on a volume, "df -h" displays the information as intended, i.e., the cumulative size of the bricks. Even after quota is enabled again, "df -h" still displays the cumulative size of the bricks, as can be seen below:

[root@quota1 ~]# gluster volume info dist-rep

Volume Name: dist-rep
Type: Distributed-Replicate
Volume ID: 7560c789-c47e-4e18-9860-95956f4a7b1d
Status: Started
Number of Bricks: 6 x 2 = 12
Transport-type: tcp
Bricks:
Brick1: 10.70.42.186:/rhs/brick1/d1r1
Brick2: 10.70.43.181:/rhs/brick1/d1r2
Brick3: 10.70.43.18:/rhs/brick1/d2r1
Brick4: 10.70.43.22:/rhs/brick1/d2r2
Brick5: 10.70.42.186:/rhs/brick1/d3r1
Brick6: 10.70.43.181:/rhs/brick1/d3r2
Brick7: 10.70.43.18:/rhs/brick1/d4r1
Brick8: 10.70.43.22:/rhs/brick1/d4r2
Brick9: 10.70.42.186:/rhs/brick1/d5r1
Brick10: 10.70.43.181:/rhs/brick1/d5r2
Brick11: 10.70.43.18:/rhs/brick1/d6r1
Brick12: 10.70.43.22:/rhs/brick1/d6r2
Options Reconfigured:
features.quota-deem-statfs: off
features.quota: off

[root@quota1 ~]# gluster volume quota dist-rep enable
volume quota : success

[root@rhsauto005 ~]# df -h
Filesystem              Size  Used Avail Use% Mounted on
10.70.42.186:/dist-rep  2.2T  1.4T  791G  64% /mnt/nfs-test
10.70.42.186:/dist-rep  2.2T  1.4T  791G  64% /mnt/glusterfs-test-dist

Hence moving this to Verified.

There is still a scenario where, if quota-deem-statfs is left on and quota is then disabled and re-enabled (with the re-enable done only after quota-remove-xattr.sh has finished), the "Size" field of "df -h" displays the value of one brick rather than the cumulative size of the bricks. For that we have filed BZ 1030432.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. http://rhn.redhat.com/errata/RHBA-2013-1769.html