Bug 1288186 - Double counting of quota
Status: CLOSED WONTFIX
Product: GlusterFS
Classification: Community
Component: quota
Version: 3.5.5
Hardware: x86_64 Linux
Priority: medium  Severity: high
Assigned To: Manikandan
: Triaged
Depends On:
Blocks:
Reported: 2015-12-03 14:02 EST by Neil Van Lysel
Modified: 2016-09-20 00:28 EDT
CC List: 4 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2016-05-09 02:42:04 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Neil Van Lysel 2015-12-03 14:02:46 EST
Description of problem:
Disk usage of the volume is being double counted.

Version-Release number of selected component (if applicable):
glusterfs-3.5.5-2.el6

How reproducible:


Steps to Reproduce:
1. Create 8x2 distributed-replicate volume
2. Enable quota on volume
3. Set quota limit of /test to 1TB
4. Mount volume on client and write data to it using dd
5. Check disk usage using du from client
6. List /test quota usage from gluster server
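
The steps above can be sketched as shell commands. The server-side gluster commands assume a running cluster laid out as in this report, so they are shown as comments; only the client-side write is directly runnable, and the path /tmp/test stands in for the mounted volume:

```shell
# Server side (assumes the 16-brick "home" volume from this report;
# illustration only):
#   gluster volume quota home enable
#   gluster volume quota home limit-usage /test 1TB
# Client side: write a file with dd, then compare du with the server's
# `gluster volume quota home list` output.
mkdir -p /tmp/test
dd if=/dev/zero of=/tmp/test/file1 bs=1M count=8 2>/dev/null
du -sh /tmp/test
```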

Actual results:
Disk usage is being double counted.

Expected results:
Disk usage should be calculated correctly.

Additional info:
ON CLIENT:
[root@client-1 ~]# df -h /home
Filesystem                        Size  Used Avail Use% Mounted on
storage-1:home                    273T   15T  259T   6% /home
[root@client-1 ~]# cd /home/test
[root@client-1 test]# du -sh
791G    .

ON GLUSTER SERVER:
[root@storage-1 ~]# gluster volume quota home list
                  Path                   Hard-limit Soft-limit   Used  Available
--------------------------------------------------------------------------------
/test                                    1.0TB       90%       1.6TB  0Bytes
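
The "Used" figure above is almost exactly twice what du reports on the client. A quick arithmetic check (assuming binary units, which is how both du and gluster report sizes):

```shell
# du on the client reports 791 GiB for /test; the quota list reports
# "Used" = 1.6 TiB, i.e. about 1638 GiB (1.6 * 1024, rounded down).
quota_used_gib=1638
du_gib=791
# Integer ratio scaled by 100: ~207 means the quota figure is ~2.07x du,
# consistent with the usage being counted once per replica (replica 2).
ratio_x100=$(( quota_used_gib * 100 / du_gib ))
echo "quota/du ratio (x100): $ratio_x100"
```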

[root@storage-1 ~]# gluster volume info
Volume Name: home
Type: Distributed-Replicate
Volume ID: 2694f438-08f6-48fc-a072-324d4701f112
Status: Started
Number of Bricks: 8 x 2 = 16
Transport-type: tcp
Bricks:
Brick1: storage-7:/brick1/home
Brick2: storage-8:/brick1/home
Brick3: storage-9:/brick1/home
Brick4: storage-10:/brick1/home
Brick5: storage-1:/brick1/home
Brick6: storage-2:/brick1/home
Brick7: storage-3:/brick1/home
Brick8: storage-4:/brick1/home
Brick9: storage-5:/brick1/home
Brick10: storage-6:/brick1/home
Brick11: storage-11:/brick1/home
Brick12: storage-12:/brick1/home
Brick13: storage-13:/brick1/home
Brick14: storage-14:/brick1/home
Brick15: storage-15:/brick1/home
Brick16: storage-16:/brick1/home
Options Reconfigured:
performance.cache-size: 100MB
performance.write-behind-window-size: 100MB
nfs.disable: on
features.quota: on
features.default-soft-limit: 90%


GLUSTER SERVER PACKAGES:
[root@storage-1 ~]# rpm -qa |grep gluster
glusterfs-cli-3.5.5-2.el6.x86_64
glusterfs-server-3.5.5-2.el6.x86_64
glusterfs-libs-3.5.5-2.el6.x86_64
glusterfs-fuse-3.5.5-2.el6.x86_64
glusterfs-3.5.5-2.el6.x86_64
glusterfs-api-3.5.5-2.el6.x86_64


GLUSTER CLIENT PACKAGES:
[root@client-1 ~]# rpm -qa |grep gluster
glusterfs-api-3.5.5-2.el6.x86_64
glusterfs-libs-3.5.5-2.el6.x86_64
glusterfs-fuse-3.5.5-2.el6.x86_64
glusterfs-3.5.5-2.el6.x86_64
Comment 1 Manikandan 2015-12-08 07:10:41 EST
Hey Neil,

We have fixed quite a lot of quota accounting issues in 3.6 and 3.7. As we mentioned on the other bug (#1288195), we need to look into this issue and into the regressions a backported fix could cause. Once that is done, you can expect the fix in one of the upcoming minor releases of 3.5. Alternatively, we can fix the accounting manually in the backend; if you would like to do that, we will send you instructions on how it can be done.

Thank you :-)

--
Thanks & Regards,
Manikandan Selvaganesh.
Comment 2 Neil Van Lysel 2015-12-08 10:49:09 EST
Hey Manikandan,

Thanks for looking into backporting the fix to the 3.5 branch! :) 

Thanks much!
Neil
Comment 3 Manikandan 2015-12-17 00:52:43 EST
Hi Neil,

While looking at the issue, we suspect the problem lies in the dht/afr translators. If you want the df utility to report disk usage taking quota limits into consideration, run the following command:

    # gluster volume set VOLNAME quota-deem-statfs on

With this option enabled, the quota hard limit set on a directory is reported as the total disk space of that directory, and quota will report accordingly when you query disk usage.
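
As a concrete sketch (using the volume name "home" from this report; these commands require a running gluster cluster and are shown for illustration only):

```shell
# Enable quota-aware df reporting on the "home" volume:
gluster volume set home quota-deem-statfs on

# Verify the option took effect (it appears under "Options Reconfigured"):
gluster volume info home | grep quota-deem-statfs

# On a client, df for the quota-limited directory should now show
# Size equal to the 1 TB hard limit:
#   df -h /home/test
```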

You can find more details at [1].

[1] https://gluster.readthedocs.org/en/latest/Administrator%20Guide/Directory%20Quota/

Could you please enable the quota-deem-statfs feature and check the usage again?

--
Thanks & regards,
Manikandan Selvaganesh.
Comment 4 Neil Van Lysel 2015-12-27 12:38:36 EST
Hi Manikandan,

I enabled quotas on the volume again and set "quota-deem-statfs" to "on". The quota counts are no longer double counted; instead, they're simply incorrect:

[root@storage-1 ~]# gluster volume quota home list
                  Path                   Hard-limit Soft-limit   Used  Available
--------------------------------------------------------------------------------
/test                                     50.0TB       80%       4.6TB  45.4TB

[root@client-1 ~]$ df -h /home/test
Filesystem                        Size  Used Avail Use% Mounted on
storage-1:home                    50T   24T   27T  47% /home

[root@client-1 ~]$ du -sh /home/test
110G    /home/test


Also, I'm seeing the following messages over and over in the brick logs on the Gluster servers:
[2015-12-27 17:32:36.251053] E [marker-quota.c:1830:mq_fetch_child_size_and_contri] (-->/usr/lib64/glusterfs/3.5.5/xlator/features/changelog.so(changelog_setxattr_cbk+0xe3) [0x7fbc69cbc9a3] (-->/usr/lib64/glusterfs/3.5.5/xlator/features/access-control.so(posix_acl_setxattr_cbk+0xb9) [0x7fbc69aad1e9] (-->/usr/lib64/glusterfs/3.5.5/xlator/performance/io-threads.so(iot_setxattr_cbk+0xb9) [0x7fbc6967cc79]))) 0-: Assertion failed: !"uuid null"
[2015-12-27 17:32:36.251085] E [posix.c:136:posix_lookup] 0-home-posix: null gfid for path /user/relax_source/mixed_H30_T40_W15_EC10_51017-51018_V030/673.op001-s23.grd
[2015-12-27 17:32:36.251115] E [posix.c:153:posix_lookup] 0-home-posix: lstat on (null) failed: Invalid argument
[2015-12-27 17:32:36.251136] W [marker-quota.c:1652:mq_update_inode_contribution] 0-home-marker: failed to get size and contribution of path (/user/relax_source/mixed_H30_T40_W15_EC10_51017-51018_V030/673.op001-s23.grd)(Invalid argument)
[2015-12-27 17:32:36.251184] W [marker-quota.c:1416:mq_release_parent_lock] (-->/usr/lib64/glusterfs/3.5.5/xlator/performance/io-threads.so(iot_lookup_cbk+0xd9) [0x7fbc6967edf9] (-->/usr/lib64/libglusterfs.so.0(default_lookup_cbk+0xd9) [0x34ae829469] (-->/usr/lib64/glusterfs/3.5.5/xlator/features/marker.so(mq_update_inode_contribution+0x447) [0x7fbc6925dd07]))) 0-home-marker: An operation during quota updation of path (/user/relax_source/mixed_H30_T40_W15_EC10_51017-51018_V030/673.op001-s23.grd) failed (Invalid argument)

Neil
Comment 5 Manikandan 2016-05-09 02:42:04 EDT
Hi Neil,

Sorry, at this moment backporting the fix is not possible since, as mentioned earlier, a lot of changes have gone into 3.7. We are closing this bug as it is not feasible to fix in 3.5. Please consider upgrading, and let us know if you face any issues in the latest version.


--
Thanks & regards,
Manikandan Selvaganesh.
