Bug 1003549
| Summary: | quota: deem-statfs on and df shows wrong value of limit set | | |
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | Saurabh <saujain> |
| Component: | glusterd | Assignee: | vpshastry <vshastry> |
| Status: | CLOSED ERRATA | QA Contact: | Saurabh <saujain> |
| Severity: | high | Docs Contact: | |
| Priority: | high | | |
| Version: | 2.1 | CC: | amarts, asriram, grajaiya, kparthas, mzywusko, nsathyan, rhs-bugs, saujain, vagarwal, vbellur, vshastry |
| Target Milestone: | --- | Keywords: | ZStream |
| Target Release: | --- | | |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | glusterfs-3.4.0.38rhs | Doc Type: | Bug Fix |
| Doc Text: | Previously, when the deem-statfs option was set on a quota-enabled volume, 'df' reported incorrect size values. With this update, when the deem-statfs option is enabled, 'df' reports the size as the hard limit configured on the directory. (A command sketch follows this table.) | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2013-11-27 15:36:11 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | 1002885 | | |
| Bug Blocks: | | | |
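The Doc Text above can be illustrated with a minimal shell sketch. None of these commands are quoted from the bug report; the volume name (dist-rep5), directory (/dir), hard limit (5GB), and mount path are assumptions chosen to match the logs in the description below.

```sh
# Enable quota on the volume and set a 5 GB hard limit on /dir
# (volume name, directory, and limit are assumptions matching the logs below).
gluster volume quota dist-rep5 enable
gluster volume quota dist-rep5 limit-usage /dir 5GB

# With quota-deem-statfs on, the statfs() values returned to clients are
# derived from the configured quota limits, so df on the mount reports the
# configured hard limit as the filesystem size.
gluster volume set dist-rep5 features.quota-deem-statfs on

# Check from a client mount (mount path assumed for illustration).
df -h /var/run/gluster/dist-rep5
```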
Description
Saurabh 2013-09-02 10:49:27 UTC

This is seen only after the add-brick on my machine; I see you have also added bricks before doing the above test.

Reason: after the add-brick, the extended attribute 'trusted.glusterfs.quota.limit-set' (which indicates that a quota limit has been set on the directory) is not healed to the new brick (a sketch of how this can be checked on the bricks appears at the end of this report).

Can you please confirm whether this behavior is observed ONLY after the add-brick?

As can be seen from the logs below, after the add-brick the df output and the quota list show the same result, given that quota-deem-statfs is on:

```
[root@quota1 ~]# gluster volume add-brick dist-rep5 10.70.42.186:/rhs/brick1/d1r15-add 10.70.43.181:/rhs/brick1/d1r25-add
volume add-brick: success

[root@quota1 ~]# gluster volume info dist-rep5

Volume Name: dist-rep5
Type: Distributed-Replicate
Volume ID: 48279380-8783-4cc6-812b-916db4a56fe7
Status: Started
Number of Bricks: 7 x 2 = 14
Transport-type: tcp
Bricks:
Brick1: 10.70.42.186:/rhs/brick1/d1r15
Brick2: 10.70.43.181:/rhs/brick1/d1r25
Brick3: 10.70.43.18:/rhs/brick1/d2r15
Brick4: 10.70.43.22:/rhs/brick1/d2r25
Brick5: 10.70.42.186:/rhs/brick1/d3r15
Brick6: 10.70.43.181:/rhs/brick1/d3r25
Brick7: 10.70.43.18:/rhs/brick1/d4r15
Brick8: 10.70.43.22:/rhs/brick1/d4r25
Brick9: 10.70.42.186:/rhs/brick1/d5r15
Brick10: 10.70.43.181:/rhs/brick1/d5r25
Brick11: 10.70.43.18:/rhs/brick1/d6r15
Brick12: 10.70.43.22:/rhs/brick1/d6r25
Brick13: 10.70.42.186:/rhs/brick1/d1r15-add
Brick14: 10.70.43.181:/rhs/brick1/d1r25-add
Options Reconfigured:
features.quota-deem-statfs: on
features.quota: on

[root@quota1 ~]# df -h
Filesystem                     Size  Used Avail Use% Mounted on
/dev/mapper/vg_quota1-lv_root   44G  2.5G   40G   6% /
tmpfs                          4.9G     0  4.9G   0% /dev/shm
/dev/vda1                      485M   32M  428M   7% /boot
/dev/mapper/RHS_vgvdb-RHS_lv1  421G  411G  9.9G  98% /rhs/brick1
localhost:dist-rep5             25G   25G     0 100% /var/run/gluster/dist-rep5

[root@quota1 ~]# gluster volume quota dist-rep5 list
                  Path                   Hard-limit  Soft-limit     Used   Available
--------------------------------------------------------------------------------
/dir                                          5.0GB         80%    5.0GB      0Bytes
/                                            25.0GB         80%   25.0GB      0Bytes
```

verified on glusterfs-3.4.0.38rhs-1

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2013-1769.html
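To check whether 'trusted.glusterfs.quota.limit-set' has actually been healed to the newly added bricks, one possible check is to compare the directory's extended attributes on an original brick and on a new brick. This is a sketch, not taken from the bug report: the brick paths are copied from the add-brick command above, it assumes the quota-limited directory /dir exists at the brick root, and the exact xattr output format can vary by glusterfs version.

```sh
# On the server hosting an original brick: the quota limit xattr should be
# present on the brick-side copy of the quota-limited directory.
getfattr -d -m . -e hex /rhs/brick1/d1r15/dir

# On the server hosting a newly added brick: if
# 'trusted.glusterfs.quota.limit-set' is missing here while present above,
# the limit has not been healed to the new brick after add-brick.
getfattr -d -m . -e hex /rhs/brick1/d1r15-add/dir
```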