| Summary: | Quota: Client inherits limit-usage value after disabling and enabling quota. | | |
|---|---|---|---|
| Product: | Red Hat Gluster Storage | Reporter: | Gowrishankar Rajaiyan <grajaiya> |
| Component: | glusterd | Assignee: | vpshastry <vshastry> |
| Status: | CLOSED ERRATA | QA Contact: | Saurabh <saujain> |
| Severity: | high | Docs Contact: | |
| Priority: | high | ||
| Version: | 2.1 | CC: | grajaiya, kaushal, kparthas, mzywusko, nsathyan, psriniva, sdharane, shaines, shmohan, vagarwal, vbellur, vshastry |
| Target Milestone: | --- | Keywords: | ZStream |
| Target Release: | --- | ||
| Hardware: | Unspecified | ||
| OS: | Unspecified | ||
| Whiteboard: | | | |
| Fixed In Version: | glusterfs-3.4.0.38rhs-1 | Doc Type: | Bug Fix |
| Doc Text: | Quota stores extended attributes (xattrs) for its accounting and enforcement logic. These xattrs should be cleared when quota is disabled. Because they are not cleared, the stale data causes quota limit violations when quota is subsequently re-enabled. | Story Points: | --- |
| Clone Of: | Environment: | ||
| Last Closed: | 2013-11-27 15:41:17 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | Category: | --- | |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Bug Depends On: | |||
| Bug Blocks: | 1000936, 1002961 | ||
Description
Gowrishankar Rajaiyan
2013-10-07 10:27:43 UTC
RHS:
[root@ninja vmstore]# gluster volume quota vmstore limit-usage / 40GB
volume quota : success
[root@ninja vmstore]#
[root@ninja vmstore]# gluster vol info vmstore

Volume Name: vmstore
Type: Replicate
Volume ID: c96de15d-024e-416d-a1c5-ff5fef44b25b
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 10.70.34.68:/rhs1/vmstore
Brick2: 10.70.34.56:/rhs1/vmstore
Options Reconfigured:
features.quota-deem-statfs: on
features.quota: on
storage.owner-gid: 107
storage.owner-uid: 107
network.remote-dio: enable
cluster.eager-lock: enable
performance.stat-prefetch: off
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
[root@ninja vmstore]#

Client:
[root@rhs-client37 ~]# df -h
Filesystem                            Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup-LogVol01         3.6T  2.1G  3.4T   1% /
tmpfs                                 7.8G     0  7.8G   0% /dev/shm
/dev/sda1                             485M   40M  421M   9% /boot
ninja.lab.eng.blr.redhat.com:vmstore   40G   12G   29G  30% /var/lib/libvirt/images
[root@rhs-client37 ~]#

RHS:
[root@ninja vmstore]# gluster volume quota vmstore disable
Disabling quota will delete all the quota configuration. Do you want to continue? (y/n) y
volume quota : success
[root@ninja vmstore]#

Client:
[root@rhs-client37 ~]# df -h
Filesystem                            Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup-LogVol01         3.6T  2.1G  3.4T   1% /
tmpfs                                 7.8G     0  7.8G   0% /dev/shm
/dev/sda1                             485M   40M  421M   9% /boot
ninja.lab.eng.blr.redhat.com:vmstore  195G   14G  181G   7% /var/lib/libvirt/images
[root@rhs-client37 ~]#

RHS:
[root@ninja vmstore]# gluster volume quota vmstore enable
volume quota : success
[root@ninja vmstore]# gluster volume quota vmstore list
quota: No quota configured on volume vmstore
[root@ninja vmstore]#

Client:
[root@rhs-client37 ~]# df -h
Filesystem                            Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup-LogVol01         3.6T  2.1G  3.4T   1% /
tmpfs                                 7.8G     0  7.8G   0% /dev/shm
/dev/sda1                             485M   40M  421M   9% /boot
ninja.lab.eng.blr.redhat.com:vmstore   40G   12G   29G  30% /var/lib/libvirt/images
[root@rhs-client37 ~]#

Shanks,

The quota limit configuration is stored in extended attributes of the directory on which quota is set. We do not remove the quota configuration extended attributes on quota-disable, which is why you see "df -h" (whose behaviour is determined solely by the presence of quota extended attributes and the "deem-statfs" value set on the volume) display statistics as though quota were still enabled.

In the current implementation, targeted for Big Bend, we have decided not to clean up the quota limit extended attributes on quota-disable, since that would require crawling the entire volume. I think we need to revisit targeting a fix for this in Update 1.

Per bug triage 10/17: need a workaround if not fixed.

*** Bug 1002961 has been marked as a duplicate of this bug. ***

[root@server1 ~]# gluster vol info shanks-quota
Volume Name: shanks-quota
Type: Distributed-Replicate
Volume ID: 95c6d6fc-2977-461b-b68e-347ad9e85b5c
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 10.70.43.3:/rhs/busy-files/quota1
Brick2: 10.70.43.199:/rhs/busy-files/quota1
Brick3: 10.70.43.156:/rhs/busy-files/quota1
Brick4: 10.70.43.1:/rhs/busy-files/quota1
Options Reconfigured:
features.quota-deem-statfs: on
features.quota: on
[root@server1 ~]# gluster vol quota shanks-quota list
Path Hard-limit Soft-limit Used Available
--------------------------------------------------------------------------------
/ 70.0GB 80% 0Bytes 70.0GB
[root@server1 ~]#
Client:
[root@localhost ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root
42G 3.4G 36G 9% /
tmpfs 3.8G 72K 3.8G 1% /dev/shm
/dev/vda1 485M 83M 378M 18% /boot
10.70.43.3:/shanks-quota
70G 0 70G 0% /shanks-quota-fuse
[root@localhost ~]#
Server:
[root@server1 ~]# gluster vol quota shanks-quota disable
Disabling quota will delete all the quota configuration. Do you want to continue? (y/n) y
volume quota : success
[root@server1 ~]#
Client:
[root@localhost ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root
42G 3.4G 36G 9% /
tmpfs 3.8G 72K 3.8G 1% /dev/shm
/dev/vda1 485M 83M 378M 18% /boot
10.70.43.3:/shanks-quota
200G 30G 171G 15% /shanks-quota-fuse
[root@localhost ~]#
Server:
[root@server1 ~]# gluster vol quota shanks-quota enable
volume quota : success
[root@server1 ~]#
Client:
[root@localhost ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root
42G 3.4G 36G 9% /
tmpfs 3.8G 72K 3.8G 1% /dev/shm
/dev/vda1 485M 83M 378M 18% /boot
10.70.43.3:/shanks-quota
100G 26G 75G 26% /shanks-quota-fuse <<<<<<<<
[root@localhost ~]#
After disabling and enabling quota, the client shows the brick size from server1, which in this case is 100GB, instead of the volume size, which is 200GB (100GB per brick in distributed-replicate).

Quota on directory:
[root@localhost shanks-quota2-fuse]# getfattr -m . -d -e hex test1
# file: test1
trusted.glusterfs.quota.limit-set=0x0000000c80000000ffffffffffffffff
[root@localhost shanks-quota2-fuse]#

After removing quota limit-usage from test2:
[root@localhost shanks-quota2-fuse]# getfattr -m . -d -e hex test1
[root@localhost shanks-quota2-fuse]#

Quota on volume:
[root@localhost shanks-quota2-fuse]# getfattr -m . -d -e hex /shanks-quota2-fuse
getfattr: Removing leading '/' from absolute path names
# file: shanks-quota2-fuse
trusted.glusterfs.quota.limit-set=0x0000003200000000ffffffffffffffff
trusted.glusterfs.volume-id=0x5bb529022c5043e2898e8d51d15e9cf6
[root@localhost shanks-quota2-fuse]#

After disabling quota:
[root@localhost shanks-quota2-fuse]# getfattr -m . -d -e hex /shanks-quota2-fuse
getfattr: Removing leading '/' from absolute path names
# file: shanks-quota2-fuse
trusted.glusterfs.volume-id=0x5bb529022c5043e2898e8d51d15e9cf6
[root@localhost shanks-quota2-fuse]#

Note: It has been informed that the admin needs to run
[root@server1 ~]# ps ax | grep "quota-remove-xattr.sh"
to ensure that the xattr removal activity is complete. This information should be included in the administration guide.

(In reply to Gowrishankar Rajaiyan from comment #8)
...
> Note: It has been informed that admin needs to perform "[root@server1 ~]# ps
> ax | grep "quota-remove-xattr.sh" to ensure that the removal activity of
> xattr is complete. This information should be included as part of
> administration guide.

Now covered in doc bug 1025792.

One can determine whether the quota xattr cleanup is complete by checking if the script that performs the xattr removal is still running.
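As an aside, the trusted.glusterfs.quota.limit-set values in the getfattr dumps above can be decoded by hand. A minimal sketch; the decode_hard_limit helper is hypothetical, and the byte layout described in the comments is inferred from these dumps rather than taken from glusterfs documentation:

```shell
#!/bin/sh
# Sketch: decode the hard limit from a trusted.glusterfs.quota.limit-set
# value as dumped by "getfattr -e hex". Assumed layout (inferred from the
# dumps above): first 64-bit big-endian word = hard limit in bytes,
# second word = soft limit (all ff here, i.e. unset).
decode_hard_limit() {
    hex=${1#0x}                          # strip the 0x prefix
    hard_hex=$(printf '%.16s' "$hex")    # keep the first 64-bit word
    bytes=$((0x$hard_hex))               # hex -> decimal byte count
    echo "$((bytes / 1024 / 1024 / 1024)) GiB"
}

decode_hard_limit 0x0000000c80000000ffffffffffffffff   # value on test1 -> 50 GiB
decode_hard_limit 0x0000003200000000ffffffffffffffff   # value on the volume root -> 200 GiB
```

Decoded this way, the dumps are consistent with a 50GB directory limit and a 200GB volume limit on shanks-quota2.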
The location of the script is /usr/libexec/glusterfs/quota/quota-remove-xattr.sh, i.e.,
# ps ax | grep "quota-remove-xattr.sh"

Once quota-deem-statfs is set to off and quota is disabled after that on a volume,
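The completion check above can be scripted rather than run by hand. A minimal sketch, assuming only that the cleanup runs as a process whose command line contains quota-remove-xattr.sh; the wait_for_quota_cleanup helper is hypothetical, not a glusterfs tool:

```shell
#!/bin/sh
# Hypothetical helper: block until the quota-remove-xattr.sh script
# started by "gluster volume quota <vol> disable" has finished.
wait_for_quota_cleanup() {
    # the [q] bracket trick keeps the grep from matching its own command line
    while ps ax | grep -q "[q]uota-remove-xattr.sh"; do
        sleep 5
    done
    echo "quota xattr cleanup complete"
}

wait_for_quota_cleanup
```

Once it returns, a brick root can be spot-checked with getfattr -m . -d -e hex, as in the transcripts above, to confirm the quota xattrs are gone.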
df -h displays the information as intended, i.e. the cumulative size of the bricks.
Even after quota is enabled again, df -h displays the cumulative size of the bricks.
As can be seen:
[root@quota1 ~]# gluster volume info dist-rep
Volume Name: dist-rep
Type: Distributed-Replicate
Volume ID: 7560c789-c47e-4e18-9860-95956f4a7b1d
Status: Started
Number of Bricks: 6 x 2 = 12
Transport-type: tcp
Bricks:
Brick1: 10.70.42.186:/rhs/brick1/d1r1
Brick2: 10.70.43.181:/rhs/brick1/d1r2
Brick3: 10.70.43.18:/rhs/brick1/d2r1
Brick4: 10.70.43.22:/rhs/brick1/d2r2
Brick5: 10.70.42.186:/rhs/brick1/d3r1
Brick6: 10.70.43.181:/rhs/brick1/d3r2
Brick7: 10.70.43.18:/rhs/brick1/d4r1
Brick8: 10.70.43.22:/rhs/brick1/d4r2
Brick9: 10.70.42.186:/rhs/brick1/d5r1
Brick10: 10.70.43.181:/rhs/brick1/d5r2
Brick11: 10.70.43.18:/rhs/brick1/d6r1
Brick12: 10.70.43.22:/rhs/brick1/d6r2
Options Reconfigured:
features.quota-deem-statfs: off
features.quota: off
[root@quota1 ~]#
[root@quota1 ~]#
[root@quota1 ~]# gluster volume quota dist-rep enable
volume quota : success
[root@rhsauto005 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
10.70.42.186:/dist-rep
2.2T 1.4T 791G 64% /mnt/nfs-test
10.70.42.186:/dist-rep
2.2T 1.4T 791G 64% /mnt/glusterfs-test-dist
Hence moving it to Verified.
There is a scenario where, if quota-deem-statfs is left on and quota is subsequently disabled and re-enabled (with quota enabled only after remove-xattr.sh has finished), the df -h "size" field displays the value of one brick rather than the cumulative size of the bricks. For that we have filed BZ 1030432.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2013-1769.html