Bug 1003549 - quota: deem-statfs on and df shows wrong value of limit set
Status: CLOSED ERRATA
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: glusterd
2.1
x86_64 Linux
high Severity high
Assigned To: vpshastry
Saurabh
: ZStream
Depends On: 1002885
Blocks:
 
Reported: 2013-09-02 06:49 EDT by Saurabh
Modified: 2016-01-19 01:12 EST (History)
11 users

See Also:
Fixed In Version: glusterfs-3.4.0.38rhs
Doc Type: Bug Fix
Doc Text:
Previously, when the deem-statfs option was set on a quota-enabled volume, 'df' reported incorrect size values. With this update, when the deem-statfs option is set on, 'df' reports the size as the hard limit configured on the directory.
Story Points: ---
Clone Of:
Environment:
Last Closed: 2013-11-27 10:36:11 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Saurabh 2013-09-02 06:49:27 EDT
Description of problem:
Even after quota-deem-statfs is set to "on", the df -h command shows the wrong value for the limit set earlier on the root of the volume.


Snippet of the strace output of df:

statfs("/home", {f_type="EXT2_SUPER_MAGIC", f_bsize=4096, f_blocks=55618469, f_bfree=55570540, f_bavail=52745273, f_files=14131200, f_ffree=14131184, f_fsid={-67356481, 1147110039}, f_namelen=255, f_frsize=4096}) = 0
write(1, "/dev/mapper/vg_rhsauto036-lv_hom"..., 34/dev/mapper/vg_rhsauto036-lv_home
) = 34
write(1, "                      213G  188M"..., 50                      213G  188M  202G   1% /home
) = 50
statfs("/proc/sys/fs/binfmt_misc", {f_type=0x42494e4d, f_bsize=4096, f_blocks=0, f_bfree=0, f_bavail=0, f_files=0, f_ffree=0, f_fsid={0, 0}, f_namelen=255, f_frsize=4096}) = 0
statfs("/var/lib/nfs/rpc_pipefs", {f_type=0x67596969, f_bsize=4096, f_blocks=0, f_bfree=0, f_bavail=0, f_files=0, f_ffree=0, f_fsid={0, 0}, f_namelen=255, f_frsize=4096}) = 0
statfs("/mnt/nfs-test", {f_type="NFS_SUPER_MAGIC", f_bsize=65536, f_blocks=983040, f_bfree=983040, f_bavail=983040, f_files=1335742464, f_ffree=1335741243, f_fsid={0, 0}, f_namelen=255, f_frsize=65536}) = 0
write(1, "10.70.37.213:/dist-rep\n", 2310.70.37.213:/dist-rep
) = 23
write(1, "                       60G     0"..., 58                       60G     0   60G   0% /mnt/nfs-test
) = 58
close(1)                                = 0
munmap(0x7fc98aeae000, 4096)            = 0
close(2)                                = 0
exit_group(0)                           = ?
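df derives the Size/Used/Avail columns from the statfs/statvfs fields shown in the trace above. As a minimal sketch (the function name is an assumption, not part of df), the same fields can be read for any mount point:

```python
import os

def df_fields(path):
    """Compute df-style size/used/avail (in bytes) from statvfs on path."""
    st = os.statvfs(path)
    size = st.f_blocks * st.f_frsize           # total size of the filesystem
    used = (st.f_blocks - st.f_bfree) * st.f_frsize  # blocks in use
    avail = st.f_bavail * st.f_frsize          # available to unprivileged users
    return size, used, avail

# Substitute the glusterfs mount point, e.g. /mnt/nfs-test, for "/".
size, used, avail = df_fields("/")
print(f"size={size} used={used} avail={avail}")
```

With quota-deem-statfs on, the glusterfs client is expected to substitute quota-derived values into these statfs fields, which is why df on the mount should reflect the configured limit.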


Version-Release number of selected component (if applicable):
glusterfs-3.4.0.30rhs-2.el6rhs.x86_64

How reproducible:
always

Steps to Reproduce:
1. set limit of 10GB on the root of the volume
2. set quota-deem-statfs on the volume
3. df -h
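Assuming a volume named dist-rep (as in the listing below) mounted at /tmp/dist-rep, the steps above correspond roughly to:

```shell
# 1. Enable quota and set a 10 GB hard limit on the root of the volume
gluster volume quota dist-rep enable
gluster volume quota dist-rep limit-usage / 10GB

# 2. Report the quota limit through statfs so df sees it
gluster volume set dist-rep features.quota-deem-statfs on

# 3. Check what df reports on a mount of the volume
df -h /tmp/dist-rep
```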

Actual results:

[root@nfs1 ~]# gluster volume quota dist-rep list
                  Path                   Hard-limit Soft-limit   Used  Available
--------------------------------------------------------------------------------
/dir1/dir2                                 1.0GB       80%       1.0GB  0Bytes
/                                         10.0GB       80%       2.0GB   8.0GB
[root@nfs1 ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/vg_nfs1-lv_root
                       50G  1.8G   45G   4% /
tmpfs                 4.0G     0  4.0G   0% /dev/shm
/dev/vda1             485M   32M  428M   7% /boot
/dev/mapper/vg_nfs1-lv1
                      425G  1.3G  424G   1% /rhs/bricks
localhost:dist-rep     60G     0   60G   0% /tmp/dist-rep


Expected results:
Size should be 10GB, and the Used and Avail fields should be based on that limit.
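For illustration only (the function name and the 4 KiB block size are assumptions, not GlusterFS code), the expected result implies this mapping: with deem-statfs on, the statfs block counts for the mount should be derived from the quota hard limit and usage rather than from the backend filesystem.

```python
def quota_statfs(hard_limit_bytes, used_bytes, f_bsize=4096):
    """Return (f_blocks, f_bfree, f_bavail) derived from the quota limit."""
    blocks = hard_limit_bytes // f_bsize
    avail = max(hard_limit_bytes - used_bytes, 0) // f_bsize
    return blocks, avail, avail

GB = 1024 ** 3
blocks, bfree, bavail = quota_statfs(10 * GB, 2 * GB)
print(blocks * 4096 // GB, (blocks - bfree) * 4096 // GB, bavail * 4096 // GB)
# → 10 2 8  (Size 10G, Used 2G, Avail 8G, matching the quota list above)
```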

Additional info:
Comment 2 Amar Tumballi 2013-09-04 06:33:52 EDT
https://code.engineering.redhat.com/gerrit/#/c/12423
Comment 5 vpshastry 2013-10-09 02:30:06 EDT
This is seen only after the add-brick on my machine; I see you have also added bricks before running the above test.

Reason: after the add-brick, the extended attribute 'trusted.glusterfs.quota.limit-set' (which indicates that a quota has been set on the directory) is not healed to the new brick.

Can you please confirm whether this behavior is observed ONLY after the add-brick?
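One way to check this on the servers (a sketch; the brick paths here are taken from the volume listing below and would need to be adjusted) is to compare the xattr on an original brick with the newly added one:

```shell
# On a server hosting an original brick: the limit xattr should be present
getfattr -e hex -n trusted.glusterfs.quota.limit-set /rhs/brick1/d1r15

# On the newly added brick: before the xattr is healed, the same query
# is expected to fail with "No such attribute"
getfattr -e hex -n trusted.glusterfs.quota.limit-set /rhs/brick1/d1r15-add
```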
Comment 8 Saurabh 2013-11-04 07:05:27 EST
As can be seen from the logs below, after the add-brick, df and quota list show the same result, given that quota-deem-statfs is on.

[root@quota1 ~]# gluster volume add-brick dist-rep5 10.70.42.186:/rhs/brick1/d1r15-add 10.70.43.181:/rhs/brick1/d1r25-add
volume add-brick: success
[root@quota1 ~]# gluster volume info dist-rep5
 
Volume Name: dist-rep5
Type: Distributed-Replicate
Volume ID: 48279380-8783-4cc6-812b-916db4a56fe7
Status: Started
Number of Bricks: 7 x 2 = 14
Transport-type: tcp
Bricks:
Brick1: 10.70.42.186:/rhs/brick1/d1r15
Brick2: 10.70.43.181:/rhs/brick1/d1r25
Brick3: 10.70.43.18:/rhs/brick1/d2r15
Brick4: 10.70.43.22:/rhs/brick1/d2r25
Brick5: 10.70.42.186:/rhs/brick1/d3r15
Brick6: 10.70.43.181:/rhs/brick1/d3r25
Brick7: 10.70.43.18:/rhs/brick1/d4r15
Brick8: 10.70.43.22:/rhs/brick1/d4r25
Brick9: 10.70.42.186:/rhs/brick1/d5r15
Brick10: 10.70.43.181:/rhs/brick1/d5r25
Brick11: 10.70.43.18:/rhs/brick1/d6r15
Brick12: 10.70.43.22:/rhs/brick1/d6r25
Brick13: 10.70.42.186:/rhs/brick1/d1r15-add
Brick14: 10.70.43.181:/rhs/brick1/d1r25-add
Options Reconfigured:
features.quota-deem-statfs: on
features.quota: on
[root@quota1 ~]# 
[root@quota1 ~]# 
[root@quota1 ~]# 
[root@quota1 ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/vg_quota1-lv_root
                       44G  2.5G   40G   6% /
tmpfs                 4.9G     0  4.9G   0% /dev/shm
/dev/vda1             485M   32M  428M   7% /boot
/dev/mapper/RHS_vgvdb-RHS_lv1
                      421G  411G  9.9G  98% /rhs/brick1
localhost:dist-rep5    25G   25G     0 100% /var/run/gluster/dist-rep5
[root@quota1 ~]# 
[root@quota1 ~]# 
[root@quota1 ~]# gluster volume quota dist-rep5 list
                  Path                   Hard-limit Soft-limit   Used  Available
--------------------------------------------------------------------------------
/dir                                       5.0GB       80%       5.0GB  0Bytes
/                                         25.0GB       80%      25.0GB  0Bytes


Verified on glusterfs-3.4.0.38rhs-1.
Comment 10 errata-xmlrpc 2013-11-27 10:36:11 EST
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2013-1769.html
