Bug 1304585 - quota: disabling and enabling quota in a quick interval removes quota's limit usage settings on multiple directories
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: quota
Version: rhgs-3.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: RHGS 3.1.3
Assignee: Manikandan
QA Contact: Anil Shah
URL:
Whiteboard:
Depends On:
Blocks: 1268895 1299184 1310091
 
Reported: 2016-02-04 03:21 UTC by krishnaram Karthick
Modified: 2016-09-17 15:26 UTC (History)
8 users

Fixed In Version: glusterfs-3.7.9-2
Doc Type: Bug Fix
Doc Text:
When quota is disabled on a volume, a cleanup process is initiated to clean up the extended attributes used by quota. If this cleanup process is still in progress when quota is re-enabled, extended attributes for the newly enabled quota can be removed by the cleanup process. This has negative effects on quota accounting.
Clone Of:
Environment:
Last Closed: 2016-06-23 05:06:44 UTC
Embargoed:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2016:1240 0 normal SHIPPED_LIVE Red Hat Gluster Storage 3.1 Update 3 2016-06-23 08:51:28 UTC

Description krishnaram Karthick 2016-02-04 03:21:49 UTC
Description of problem:
On a 16-node system with quota enabled and quota limit-usage set on multiple sub-directories of a tiered volume, running detach-tier removes the limit-usage settings on multiple sub-directories.

<<<<<<<<<<<< Before detach tier >>>>>>>>>>>

gluster v quota krk-vol list
                  Path                   Hard-limit  Soft-limit      Used  Available  Soft-limit exceeded? Hard-limit exceeded?
-------------------------------------------------------------------------------------------------------------------------------
/                                        100.0PB     80%(80.0PB)    1.6TB 100.0PB              No                   No
/client-1                                 10.0GB     80%(8.0GB)    2.3GB   7.7GB              No                   No
/client-3                                 10.0GB     80%(8.0GB)    2.6GB   7.4GB              No                   No
/client-4                                 10.0GB     80%(8.0GB)    2.4GB   7.6GB              No                   No
/client-5                                 10.0GB     80%(8.0GB)    9.8GB 240.0MB             Yes                   No
/client-6                                 10.0GB     80%(8.0GB)    9.8GB 240.0MB             Yes                   No
/client-7                                 10.0GB     80%(8.0GB)    9.8GB 240.0MB             Yes                   No
/client-8                                 10.0GB     80%(8.0GB)   17.8GB  0Bytes             Yes                  Yes
/multiple-dire/level2/level3/level4/level5/level6/client-1  10.0GB     80%(8.0GB) 1000.0MB   9.0GB              No                   No
/multiple-dire/level2/level3/level4/level5/level6/client-2  10.0GB     80%(8.0GB) 1000.0MB   9.0GB              No                   No
/multiple-dire/level2/level3/level4/level5/level6/client-3  10.0GB     80%(8.0GB) 1000.0MB   9.0GB              No                   No
/multiple-dire/level2/level3/level4/level5/level6/client-4  10.0GB     80%(8.0GB) 1000.0MB   9.0GB              No                   No
/multiple-dire/level2/level3/level4/level5/level6/client-5  10.0GB     80%(8.0GB)  892.0MB   9.1GB              No                   No
/multiple-dire/level2/level3/level4/level5/level6/client-6  10.0GB     80%(8.0GB)    1.5GB   8.5GB              No                   No
/multiple-dire/level2/level3/level4/level5/level6/client-7  10.0GB     80%(8.0GB)   86.0MB   9.9GB              No                   No
/multiple-dire/level2/level3/level4/level5/level6/client-8  10.0GB     80%(8.0GB)   73.0MB   9.9GB              No                   No


<<<<<<<<<<<<<After running detach-tier>>>>>>>>>>>

gluster v quota krk-vol list
                  Path                   Hard-limit  Soft-limit      Used  Available  Soft-limit exceeded? Hard-limit exceeded?
-------------------------------------------------------------------------------------------------------------------------------
/                                        100.0PB     80%(80.0PB)    9.4TB 100.0PB              No                   No
/client-1                                 10.0GB     80%(8.0GB)   10.0GB  0Bytes             Yes                  Yes
/client-3                                 10.0GB     80%(8.0GB)   10.0GB  0Bytes             Yes                  Yes
/client-4                                 10.0GB     80%(8.0GB)    9.8GB 240.0MB             Yes                   No

Version-Release number of selected component (if applicable):
glusterfs-server-3.7.5-18.el7rhgs.x86_64

How reproducible:
Yet to be determined

Steps to Reproduce:
1. Create a volume, enable quota, and set limit-usage on the root directory
2. Attach a tier and have files promoted and new files written to the hot tier
3. Create multiple sub-directories and set limit-usage on them
4. Write data to these sub-directories
5. Detach the tier and wait for it to complete

Actual results:
quota settings on most of the sub-directories are removed

Expected results:
quota settings should remain intact

Additional info:
sosreports shall be attached shortly

Comment 3 krishnaram Karthick 2016-02-04 04:12:50 UTC
Pasting the gluster v info output:

Volume Name: krk-vol
Type: Tier
Volume ID: 192655ce-4ef6-4ada-8e0c-6f137e2721e1
Status: Started
Number of Bricks: 36
Transport-type: tcp
Hot Tier :
Hot Tier Type : Distributed-Replicate
Number of Bricks: 6 x 2 = 12
Brick1: 10.70.37.101:/rhs/brick6/krkvol
Brick2: 10.70.35.163:/rhs/brick6/krkvol
Brick3: 10.70.35.173:/rhs/brick6/krkvol
Brick4: 10.70.35.232:/rhs/brick6/krkvol
Brick5: 10.70.35.176:/rhs/brick6/krkvol
Brick6: 10.70.35.231:/rhs/brick6/krkvol
Brick7: 10.70.35.44:/rhs/brick6/krkvol
Brick8: 10.70.37.195:/rhs/brick6/krkvol
Brick9: 10.70.37.202:/rhs/brick6/krkvol
Brick10: 10.70.37.120:/rhs/brick6/krkvol
Brick11: 10.70.37.60:/rhs/brick6/krkvol
Brick12: 10.70.37.69:/rhs/brick6/krkvol

Cold Tier:
Cold Tier Type : Distributed-Disperse
Number of Bricks: 2 x (8 + 4) = 24
Brick13: 10.70.35.176:/rhs/brick5/krkvol
Brick14: 10.70.35.232:/rhs/brick5/krkvol
Brick15: 10.70.35.173:/rhs/brick5/krkvol
Brick16: 10.70.35.163:/rhs/brick5/krkvol
Brick17: 10.70.37.101:/rhs/brick5/krkvol
Brick18: 10.70.37.69:/rhs/brick5/krkvol
Brick19: 10.70.37.60:/rhs/brick5/krkvol
Brick20: 10.70.37.120:/rhs/brick5/krkvol
Brick21: 10.70.37.202:/rhs/brick4/krkvol
Brick22: 10.70.37.195:/rhs/brick4/krkvol
Brick23: 10.70.35.155:/rhs/brick4/krkvol
Brick24: 10.70.35.222:/rhs/brick4/krkvol
Brick25: 10.70.35.108:/rhs/brick4/krkvol
Brick26: 10.70.35.44:/rhs/brick4/krkvol
Brick27: 10.70.35.89:/rhs/brick4/krkvol
Brick28: 10.70.35.231:/rhs/brick4/krkvol
Brick29: 10.70.35.176:/rhs/brick4/krkvol
Brick30: 10.70.35.232:/rhs/brick4/krkvol
Brick31: 10.70.35.173:/rhs/brick4/krkvol
Brick32: 10.70.35.163:/rhs/brick4/krkvol
Brick33: 10.70.37.101:/rhs/brick4/krkvol
Brick34: 10.70.37.69:/rhs/brick4/krkvol
Brick35: 10.70.37.60:/rhs/brick4/krkvol
Brick36: 10.70.37.120:/rhs/brick4/krkvol

Options Reconfigured:
features.quota-deem-statfs: on
diagnostics.brick-log-level: INFO
cluster.tier-demote-frequency: 300
cluster.watermark-hi: 80
cluster.watermark-low: 50
performance.readdir-ahead: on
features.quota: on
features.inode-quota: on
performance.io-cache: off
performance.read-ahead: off
performance.open-behind: off
performance.write-behind: off
cluster.min-free-disk: 20
features.record-counters: on
cluster.write-freq-threshold: 1
cluster.read-freq-threshold: 1
cluster.tier-max-files: 10000
diagnostics.client-log-level: INFO
features.ctr-enabled: on
cluster.tier-mode: cache
features.uss: on

Comment 4 krishnaram Karthick 2016-02-04 10:41:49 UTC
Updated steps to reproduce:

1. Create a volume, enable quota, and set limit-usage on the root directory
2. Attach a tier and have files promoted and new files written to the hot tier
3. Create multiple sub-directories and set limit-usage on them
4. Write data to these sub-directories
5. Disable quota
6. Enable quota
7. Repeat steps 3 and 4
8. Detach the tier and wait for it to complete

Comment 5 Vijaikumar Mallikarjuna 2016-02-04 11:21:40 UTC
As per comment #4, it looks like quota was re-enabled before the quota cleanup completed.
If the quota cleanup is still in progress when quota is enabled again, the cleanup process can remove the newly written quota xattrs and corrupt the accounting.

This issue has been fixed in upstream: http://review.gluster.org/#/c/13065/
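The race and the fix can be sketched as follows (an illustrative Python simulation with hypothetical xattr names and helper functions; the actual fix is in the glusterfs C code at the review link above, which tags quota xattrs with a version so a still-running cleanup only strips xattrs from the old quota generation):

```python
# Simulation of the quota disable/enable race and a version-tagged fix.
# Names like "trusted.glusterfs.quota.size.N" are illustrative, not the
# exact on-disk xattr keys.

xattrs = {}          # path -> {xattr_name: value}
quota_version = 1    # bumped each time quota is enabled

def set_quota_xattr(path, usage):
    # Accounting written under the currently active quota version.
    key = "trusted.glusterfs.quota.size.%d" % quota_version
    xattrs.setdefault(path, {})[key] = usage

def cleanup(version_to_remove):
    # Background crawl scheduled by 'quota disable': with the fix it
    # strips only xattrs belonging to the old version, so accounting
    # written by a quickly re-enabled quota survives.
    suffix = ".%d" % version_to_remove
    for attrs in xattrs.values():
        for name in [n for n in attrs if n.endswith(suffix)]:
            del attrs[name]

# quota enabled (version 1), accounting written
set_quota_xattr("/client-1", 100)

# quota disabled: cleanup of version 1 is scheduled but delayed;
# quota re-enabled quickly: version bumps, new accounting written
quota_version = 2
set_quota_xattr("/client-1", 42)

# the delayed cleanup now runs, targeting only the old version
cleanup(version_to_remove=1)

# version-2 accounting is intact despite the overlapping cleanup
assert xattrs["/client-1"] == {"trusted.glusterfs.quota.size.2": 42}
```

Without the version tag, the delayed cleanup would have matched and deleted the freshly written xattrs as well, which is the accounting loss seen in this bug.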

Comment 9 Mike McCune 2016-03-28 23:44:20 UTC
This bug was accidentally moved from POST to MODIFIED via an error in automation; please contact mmccune with any questions.

Comment 10 Vijaikumar Mallikarjuna 2016-04-07 03:11:15 UTC
Fix "http://review.gluster.org/#/c/13065/" is available in 3.1.3 as part of the rebase.

Comment 12 Anil Shah 2016-04-27 10:02:56 UTC
[root@dhcp46-4 bricks]# gluster v quota testvol list
                  Path                   Hard-limit  Soft-limit      Used  Available  Soft-limit exceeded? Hard-limit exceeded?
-------------------------------------------------------------------------------------------------------------------------------
/                                         20.0GB     80%(16.0GB)   10.0GB  10.0GB              No                   No
/test/test1/test2/test3/test4/test5/test6/test7   2.0GB     80%(1.6GB)    2.0GB  0Bytes             Yes                  Yes
/test/test1/test2/test3/test4/test5/test6/client2   2.0GB     80%(1.6GB)    2.0GB  0Bytes             Yes                  Yes
/test/test1/test2/test3/test4/test5/test6/client3   2.0GB     80%(1.6GB)    2.0GB  0Bytes             Yes                  Yes
/test/test1/test2/test3/test4/test5/test6/client4   2.0GB     80%(1.6GB)    2.0GB  0Bytes             Yes                  Yes
/test/test1/test2/test3/test4/test5/test6/client5   2.0GB     80%(1.6GB)    2.0GB  0Bytes             Yes                  Yes
[root@dhcp46-4 bricks]# gluster v quota testvol disable
Disabling quota will delete all the quota configuration. Do you want to continue? (y/n) y
volume quota : success
[root@dhcp46-4 bricks]# gluster v quota testvol enable
volume quota : success
[root@dhcp46-4 bricks]# gluster v quota testvol limit-usage /test/test1/test2/test3/test4/test5/test6/test7  2GB
volume quota : success
[root@dhcp46-4 bricks]# gluster v quota testvol limit-usage /test/test1/test2/test3/test4/test5/test6/client2 2GB
volume quota : success
[root@dhcp46-4 bricks]# gluster v quota testvol limit-usage /test/test1/test2/test3/test4/test5/test6/client3  2GB
volume quota : success
[root@dhcp46-4 bricks]# gluster v quota testvol limit-usage /test/test1/test2/test3/test4/test5/test6/client4  2GB
volume quota : success
[root@dhcp46-4 bricks]# gluster v quota testvol limit-usage /test/test1/test2/test3/test4/test5/test6/client5  2GB
volume quota : success
[root@dhcp46-4 bricks]# gluster v quota testvol limit-usage /  20GB
volume quota : success
[root@dhcp46-4 bricks]# gluster v detach-tier testvol  start
volume detach-tier start: success
ID: 28f9e04b-d2a6-4c6a-8985-8370ad4b89f2
[root@dhcp46-4 bricks]# gluster v detach-tier testvol  commit
Removing tier can result in data loss. Do you want to Continue? (y/n) y
volume detach-tier commit: success
Check the detached bricks to ensure all files are migrated.
If files with data are found on the brick path, copy them via a gluster mount point before re-purposing the removed brick. 
[root@dhcp46-4 bricks]# gluster v quota testvol list
                  Path                   Hard-limit  Soft-limit      Used  Available  Soft-limit exceeded? Hard-limit exceeded?
-------------------------------------------------------------------------------------------------------------------------------
/test/test1/test2/test3/test4/test5/test6/test7   2.0GB     80%(1.6GB)   0Bytes   2.0GB              No                   No
/test/test1/test2/test3/test4/test5/test6/client2   2.0GB     80%(1.6GB)   0Bytes   2.0GB              No                   No
/test/test1/test2/test3/test4/test5/test6/client3   2.0GB     80%(1.6GB)   0Bytes   2.0GB              No                   No
/test/test1/test2/test3/test4/test5/test6/client4   2.0GB     80%(1.6GB)   0Bytes   2.0GB              No                   No
/test/test1/test2/test3/test4/test5/test6/client5   2.0GB     80%(1.6GB)   0Bytes   2.0GB              No                   No
/                                         20.0GB     80%(16.0GB)   43.0KB  20.0GB              No                   No


Bug verified on build glusterfs-3.7.9-2.el7rhgs.x86_64

Comment 15 errata-xmlrpc 2016-06-23 05:06:44 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2016:1240

