Bug 1304585
| Summary: | quota: disabling and enabling quota in a quick interval removes quota's limit usage settings on multiple directories | | |
| --- | --- | --- | --- |
| Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | krishnaram Karthick <kramdoss> |
| Component: | quota | Assignee: | Manikandan <mselvaga> |
| Status: | CLOSED ERRATA | QA Contact: | Anil Shah <ashah> |
| Severity: | high | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | rhgs-3.1 | CC: | asrivast, byarlaga, mselvaga, rhinduja, rhs-bugs, sankarshan, smohan, storage-qa-internal |
| Target Milestone: | --- | Keywords: | ZStream |
| Target Release: | RHGS 3.1.3 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Fixed In Version: | glusterfs-3.7.9-2 | Doc Type: | Bug Fix |
| Doc Text: | When quota is disabled on a volume, a cleanup process is initiated to remove the extended attributes that quota maintains. If quota is re-enabled while this cleanup is still in progress, the cleanup process can also remove the extended attributes belonging to the newly enabled quota, which breaks quota accounting. (A minimal sketch of this race follows the table.) | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2016-06-23 05:06:44 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Bug Blocks: | 1268895, 1299184, 1310091 | | |
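The Doc Text above describes a race between quota's disable-time cleanup and a quick re-enable. The following is a minimal sketch of that window, assuming a hypothetical test volume named testvol with a single brick at /bricks/b1 (both names are illustrative); the trusted.glusterfs.quota.* keys are the extended attributes quota maintains on brick directories:

```sh
# Enable quota and set a limit, as in the reproduction steps.
gluster volume quota testvol enable
gluster volume quota testvol limit-usage / 10GB

# Disable quota; --mode=script answers the confirmation prompt.
# This starts a background crawl that strips quota xattrs from the bricks.
gluster --mode=script volume quota testvol disable

# Re-enable immediately, while the cleanup crawl may still be running.
gluster volume quota testvol enable
gluster volume quota testvol limit-usage / 10GB

# Inspect the brick root: if the race hit, the freshly written
# trusted.glusterfs.quota.* xattrs are missing or partially removed.
getfattr -d -m 'trusted.glusterfs.quota' -e hex /bricks/b1
```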
Description
krishnaram Karthick 2016-02-04 03:21:49 UTC
Pasting the gluster v info:

    Volume Name: krk-vol
    Type: Tier
    Volume ID: 192655ce-4ef6-4ada-8e0c-6f137e2721e1
    Status: Started
    Number of Bricks: 36
    Transport-type: tcp
    Hot Tier :
    Hot Tier Type : Distributed-Replicate
    Number of Bricks: 6 x 2 = 12
    Brick1: 10.70.37.101:/rhs/brick6/krkvol
    Brick2: 10.70.35.163:/rhs/brick6/krkvol
    Brick3: 10.70.35.173:/rhs/brick6/krkvol
    Brick4: 10.70.35.232:/rhs/brick6/krkvol
    Brick5: 10.70.35.176:/rhs/brick6/krkvol
    Brick6: 10.70.35.231:/rhs/brick6/krkvol
    Brick7: 10.70.35.44:/rhs/brick6/krkvol
    Brick8: 10.70.37.195:/rhs/brick6/krkvol
    Brick9: 10.70.37.202:/rhs/brick6/krkvol
    Brick10: 10.70.37.120:/rhs/brick6/krkvol
    Brick11: 10.70.37.60:/rhs/brick6/krkvol
    Brick12: 10.70.37.69:/rhs/brick6/krkvol
    Cold Tier:
    Cold Tier Type : Distributed-Disperse
    Number of Bricks: 2 x (8 + 4) = 24
    Brick13: 10.70.35.176:/rhs/brick5/krkvol
    Brick14: 10.70.35.232:/rhs/brick5/krkvol
    Brick15: 10.70.35.173:/rhs/brick5/krkvol
    Brick16: 10.70.35.163:/rhs/brick5/krkvol
    Brick17: 10.70.37.101:/rhs/brick5/krkvol
    Brick18: 10.70.37.69:/rhs/brick5/krkvol
    Brick19: 10.70.37.60:/rhs/brick5/krkvol
    Brick20: 10.70.37.120:/rhs/brick5/krkvol
    Brick21: 10.70.37.202:/rhs/brick4/krkvol
    Brick22: 10.70.37.195:/rhs/brick4/krkvol
    Brick23: 10.70.35.155:/rhs/brick4/krkvol
    Brick24: 10.70.35.222:/rhs/brick4/krkvol
    Brick25: 10.70.35.108:/rhs/brick4/krkvol
    Brick26: 10.70.35.44:/rhs/brick4/krkvol
    Brick27: 10.70.35.89:/rhs/brick4/krkvol
    Brick28: 10.70.35.231:/rhs/brick4/krkvol
    Brick29: 10.70.35.176:/rhs/brick4/krkvol
    Brick30: 10.70.35.232:/rhs/brick4/krkvol
    Brick31: 10.70.35.173:/rhs/brick4/krkvol
    Brick32: 10.70.35.163:/rhs/brick4/krkvol
    Brick33: 10.70.37.101:/rhs/brick4/krkvol
    Brick34: 10.70.37.69:/rhs/brick4/krkvol
    Brick35: 10.70.37.60:/rhs/brick4/krkvol
    Brick36: 10.70.37.120:/rhs/brick4/krkvol
    Options Reconfigured:
    features.quota-deem-statfs: on
    diagnostics.brick-log-level: INFO
    cluster.tier-demote-frequency: 300
    cluster.watermark-hi: 80
    cluster.watermark-low: 50
    performance.readdir-ahead: on
    features.quota: on
    features.inode-quota: on
    performance.io-cache: off
    performance.read-ahead: off
    performance.open-behind: off
    performance.write-behind: off
    cluster.min-free-disk: 20
    features.record-counters: on
    cluster.write-freq-threshold: 1
    cluster.read-freq-threshold: 1
    cluster.tier-max-files: 10000
    diagnostics.client-log-level: INFO
    features.ctr-enabled: on
    cluster.tier-mode: cache
    features.uss: on

Updated steps to reproduce:

1. Create a volume, enable quota, and set a limit-usage on the root directory.
2. Attach a tier and have files promoted; write new files to the hot tier.
3. Create multiple sub-directories and set limit-usage on each.
4. Write data to these sub-directories.
5. Disable quota.
6. Enable quota.
7. Repeat steps 3 and 4 (a scripted sketch of steps 3-7 follows the comments below).
8. Detach the tier and wait for the detach to complete.

As per comment #4, it looks like quota was enabled before the quota cleanup was complete. If the quota cleanup is still in progress and quota is enabled again, the cleanup process can remove the newly accounted quota xattrs and mess up the accounting. This issue has been fixed upstream: http://review.gluster.org/#/c/13065/

This bug was accidentally moved from POST to MODIFIED via an error in automation; please see mmccune with any questions.

The fix (http://review.gluster.org/#/c/13065/) is available in 3.1.3 as part of the rebase.
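The reproduction steps above can be scripted. Below is a rough sketch of steps 3 through 7, assuming the testvol volume from the verification log below is mounted at /mnt/testvol (the mount point and file sizes are illustrative):

```sh
#!/bin/bash
# Directory chain matching the quota paths used in the verification log.
BASE=/test/test1/test2/test3/test4/test5/test6

# Steps 3-4: create sub-directories, set a 2GB limit on each, write data.
for d in test7 client2 client3 client4 client5; do
    mkdir -p /mnt/testvol$BASE/$d
    gluster volume quota testvol limit-usage $BASE/$d 2GB
    dd if=/dev/zero of=/mnt/testvol$BASE/$d/f1 bs=1M count=2048
done

# Steps 5-6: disable quota and re-enable it in a quick interval
# (the window in which the cleanup race can fire).
gluster --mode=script volume quota testvol disable
gluster volume quota testvol enable

# Step 7: set the limits and write data again.
for d in test7 client2 client3 client4 client5; do
    gluster volume quota testvol limit-usage $BASE/$d 2GB
    dd if=/dev/zero of=/mnt/testvol$BASE/$d/f2 bs=1M count=1024
done
```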
    [root@dhcp46-4 bricks]# gluster v quota testvol list
                      Path                     Hard-limit  Soft-limit     Used  Available  Soft-limit exceeded? Hard-limit exceeded?
    -------------------------------------------------------------------------------------------------------------------------------
    /                                                  20.0GB  80%(16.0GB)  10.0GB  10.0GB   No   No
    /test/test1/test2/test3/test4/test5/test6/test7    2.0GB   80%(1.6GB)   2.0GB   0Bytes  Yes  Yes
    /test/test1/test2/test3/test4/test5/test6/client2  2.0GB   80%(1.6GB)   2.0GB   0Bytes  Yes  Yes
    /test/test1/test2/test3/test4/test5/test6/client3  2.0GB   80%(1.6GB)   2.0GB   0Bytes  Yes  Yes
    /test/test1/test2/test3/test4/test5/test6/client4  2.0GB   80%(1.6GB)   2.0GB   0Bytes  Yes  Yes
    /test/test1/test2/test3/test4/test5/test6/client5  2.0GB   80%(1.6GB)   2.0GB   0Bytes  Yes  Yes
    [root@dhcp46-4 bricks]# gluster v quota testvol disable
    Disabling quota will delete all the quota configuration. Do you want to continue? (y/n) y
    volume quota : success
    [root@dhcp46-4 bricks]# gluster v quota testvol enable
    volume quota : success
    [root@dhcp46-4 bricks]# gluster v quota testvol limit-usage /test/test1/test2/test3/test4/test5/test6/test7 2GB
    volume quota : success
    [root@dhcp46-4 bricks]# gluster v quota testvol limit-usage /test/test1/test2/test3/test4/test5/test6/client2 2GB
    volume quota : success
    [root@dhcp46-4 bricks]# gluster v quota testvol limit-usage /test/test1/test2/test3/test4/test5/test6/client3 2GB
    volume quota : success
    [root@dhcp46-4 bricks]# gluster v quota testvol limit-usage /test/test1/test2/test3/test4/test5/test6/client4 2GB
    volume quota : success
    [root@dhcp46-4 bricks]# gluster v quota testvol limit-usage /test/test1/test2/test3/test4/test5/test6/client5 2GB
    volume quota : success
    [root@dhcp46-4 bricks]# gluster v quota testvol limit-usage / 20GB
    volume quota : success
    [root@dhcp46-4 bricks]# gluster v detach-tier testvol start
    volume detach-tier start: success
    ID: 28f9e04b-d2a6-4c6a-8985-8370ad4b89f2
    [root@dhcp46-4 bricks]# gluster v detach-tier testvol commit
    Removing tier can result in data loss. Do you want to Continue? (y/n) y
    volume detach-tier commit: success
    Check the detached bricks to ensure all files are migrated. If files with data are found on the brick path, copy them via a gluster mount point before re-purposing the removed brick.
    [root@dhcp46-4 bricks]# gluster v quota testvol list
                      Path                     Hard-limit  Soft-limit     Used  Available  Soft-limit exceeded? Hard-limit exceeded?
    -------------------------------------------------------------------------------------------------------------------------------
    /test/test1/test2/test3/test4/test5/test6/test7    2.0GB   80%(1.6GB)   0Bytes  2.0GB   No  No
    /test/test1/test2/test3/test4/test5/test6/client2  2.0GB   80%(1.6GB)   0Bytes  2.0GB   No  No
    /test/test1/test2/test3/test4/test5/test6/client3  2.0GB   80%(1.6GB)   0Bytes  2.0GB   No  No
    /test/test1/test2/test3/test4/test5/test6/client4  2.0GB   80%(1.6GB)   0Bytes  2.0GB   No  No
    /test/test1/test2/test3/test4/test5/test6/client5  2.0GB   80%(1.6GB)   0Bytes  2.0GB   No  No
    /                                                  20.0GB  80%(16.0GB)  43.0KB  20.0GB  No  No

Bug verified on build glusterfs-3.7.9-2.el7rhgs.x86_64.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2016:1240