Description of problem:
Once quota is enabled on the volume and a limit-usage value is set on / of the volume, the df -h command on the client shows a size larger than the actual volume size.

Version-Release number of selected component (if applicable):
[root@node1 ~]# rpm -qa | grep glusterfs
glusterfs-3.7.0-2.el6rhs.x86_64
glusterfs-cli-3.7.0-2.el6rhs.x86_64
glusterfs-libs-3.7.0-2.el6rhs.x86_64
glusterfs-client-xlators-3.7.0-2.el6rhs.x86_64
glusterfs-api-3.7.0-2.el6rhs.x86_64
glusterfs-server-3.7.0-2.el6rhs.x86_64
glusterfs-fuse-3.7.0-2.el6rhs.x86_64
glusterfs-geo-replication-3.7.0-2.el6rhs.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Create a 4+2 disperse volume.
2. Mount the volume on a client as a FUSE or NFS mount.
3. Enable quota on the volume, e.g. gluster v quota <volname> enable
4. Turn deem-statfs on, e.g. gluster v set <ecvol> features.quota-deem-statfs on
5. Set limit-usage on the root of the volume, e.g. gluster v quota ecvol limit-usage / 30GB

[root@node1 ~]# gluster v quota ecvol list
                  Path                   Hard-limit  Soft-limit    Used   Available  Soft-limit exceeded? Hard-limit exceeded?
---------------------------------------------------------------------------------------------------------------------------
/                                            30.0GB        80%   0Bytes      30.0GB                   No                   No

Actual results:
df -h on the client before setting quota limits:
[root@client ~]# df -h
Filesystem                     Size  Used Avail Use% Mounted on
/dev/mapper/vg_client-lv_root   18G   12G  5.3G  68% /
tmpfs                          3.9G     0  3.9G   0% /dev/shm
/dev/vda1                      477M   33M  419M   8% /boot
10.70.47.143:ecvol              40G  131M   40G   1% /mnt/ecvol

df -h on the client after setting quota limits:
[root@client ~]# df -h
Filesystem                     Size  Used Avail Use% Mounted on
/dev/mapper/vg_client-lv_root   18G   12G  5.3G  68% /
tmpfs                          3.9G     0  3.9G   0% /dev/shm
/dev/vda1                      477M   33M  419M   8% /boot
10.70.47.143:ecvol             120G     0  120G   0% /mnt/ecvol

Expected results:
df -h should show the limit-usage value that was set:
[root@client ~]# df -h
Filesystem                     Size  Used Avail Use% Mounted on
/dev/mapper/vg_client-lv_root   18G   12G  5.3G  68% /
tmpfs                          3.9G     0  3.9G   0% /dev/shm
/dev/vda1                      477M   33M  419M   8% /boot
10.70.47.143:ecvol              30G     0   30G   0% /mnt/ecvol   <----------- value set by limit-usage command

Additional info:
[root@node1 ~]# gluster v info
Volume Name: ecvol
Type: Disperse
Volume ID: 822a2f30-3bae-4f98-ba12-9deeaf0c94bd
Status: Started
Number of Bricks: 1 x (4 + 2) = 6
Transport-type: tcp
Bricks:
Brick1: 10.70.47.143:/rhs/brick1/ec1
Brick2: 10.70.47.145:/rhs/brick1/ec2
Brick3: 10.70.47.150:/rhs/brick1/ec3
Brick4: 10.70.47.151:/rhs/brick1/ec4
Brick5: 10.70.47.143:/rhs/brick4/ec5
Brick6: 10.70.47.145:/rhs/brick4/ec6
Options Reconfigured:
features.quota-deem-statfs: on
features.inode-quota: on
features.quota: on
features.uss: enable
performance.readdir-ahead: on
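The reported 120G is exactly four times the 30GB limit, which matches the number of data bricks in the 4+2 disperse set. A minimal Python sketch of that arithmetic follows; the aggregation model (each brick's statfs reflecting the whole-volume quota limit, then being scaled by the data-brick count) is an assumption made for illustration, not the actual xlator code:

```python
# Illustration of the df -h discrepancy on a 4+2 disperse volume with
# features.quota-deem-statfs on.
# ASSUMPTION: with deem-statfs enabled, each brick reports the volume-wide
# quota limit as its own size.
DATA_BRICKS = 4        # 4+2 disperse: 4 data bricks + 2 redundancy bricks
QUOTA_LIMIT_GB = 30    # gluster v quota ecvol limit-usage / 30GB

per_brick_size_gb = QUOTA_LIMIT_GB  # each brick's statfs already shows the limit

# Correct behaviour: the limit applies to the volume as a whole,
# so it should be reported once, unscaled.
expected_size_gb = QUOTA_LIMIT_GB

# Observed buggy behaviour: the per-brick figure is multiplied by the
# number of data bricks, as if each brick held only 1/4 of the data.
buggy_size_gb = per_brick_size_gb * DATA_BRICKS

print(expected_size_gb)  # 30  -> what df -h should show (30G)
print(buggy_size_gb)     # 120 -> what df -h actually shows (120G)
```

The same scaling explains the pre-quota output as well: four 10G data bricks correctly yield the 40G size shown before the limit is set.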
df -h on the client before setting quota limits:
[root@client glusterfs]# df -h
Filesystem                     Size  Used Avail Use% Mounted on
/dev/mapper/vg_client-lv_root   18G  6.1G   11G  38% /
tmpfs                          3.9G     0  3.9G   0% /dev/shm
/dev/vda1                      477M   33M  419M   8% /boot
10.70.33.214:ecvol             3.7T  134M  3.7T   1% /mnt/glusterfs

[root@darkknightrises ~]# gluster v quota ecvol limit-usage / 100GB

[root@client glusterfs]# df -h
Filesystem                     Size  Used Avail Use% Mounted on
/dev/mapper/vg_client-lv_root   18G  6.1G   11G  38% /
tmpfs                          3.9G     0  3.9G   0% /dev/shm
/dev/vda1                      477M   33M  419M   8% /boot
10.70.33.214:ecvol             100G     0  100G   0% /mnt/glusterfs

[root@darkknightrises ~]# gluster v quota ecvol limit-usage / 200TB
volume quota : success

[root@client glusterfs]# df -h
Filesystem                     Size  Used Avail Use% Mounted on
/dev/mapper/vg_client-lv_root   18G  6.1G   11G  38% /
tmpfs                          3.9G     0  3.9G   0% /dev/shm
/dev/vda1                      477M   33M  419M   8% /boot
10.70.33.214:ecvol             200T     0  200T   0% /mnt/glusterfs

Bug verified on build glusterfs-3.7.1-7.el6rhs
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://rhn.redhat.com/errata/RHSA-2015-1495.html