Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 821725

Summary: quota: brick process kill allows quota limit cross
Product: [Community] GlusterFS
Component: quota
Version: pre-release
Hardware: x86_64
OS: Linux
Status: CLOSED WONTFIX
Severity: medium
Priority: medium
Reporter: Saurabh <saujain>
Assignee: vpshastry <vshastry>
CC: amarts, gluster-bugs, mzywusko, nsathyan, vbellur
Keywords: FutureFeature, Triaged
Doc Type: Known Issue
Doc Text:
Cause: When a quota limit is set on a distributed volume and a brick goes down while I/O is in progress, the distribute translator cannot see the contribution from the offline brick, so the effective quota limit can be exceeded.
Consequence: The quota limit gets exceeded.
Workaround (if any): Use replication if 100% consistency is required when a node goes down.
Result: With replication, a single brick failure is tolerated and the quota limit is maintained as is.
Bug Blocks: 848253
Last Closed: 2013-02-04 11:12:27 UTC
Type: Bug

Description Saurabh 2012-05-15 12:45:49 UTC
Description of problem:
volume type: distribute-replicate(2x2)
number of nodes: 2
[root@RHS-71 ~]# gluster volume status dist-rep-quota
Status of volume: dist-rep-quota
Gluster process						Port	Online	Pid
------------------------------------------------------------------------------
Brick 172.17.251.71:/export/dr-q			24017	Y	4078
Brick 172.17.251.72:/export/drr-q			24010	Y	3183
Brick 172.17.251.71:/export/ddr-q			24018	Y	3252
Brick 172.17.251.72:/export/ddrr-q			24011	Y	3189
NFS Server on localhost					38467	Y	3965
Self-heal Daemon on localhost				N/A	Y	3942
NFS Server on 172.17.251.74				38467	Y	32306
Self-heal Daemon on 172.17.251.74			N/A	Y	32292
NFS Server on 172.17.251.73				38467	Y	6940
Self-heal Daemon on 172.17.251.73			N/A	Y	6926
NFS Server on 172.17.251.72				38467	Y	3475
Self-heal Daemon on 172.17.251.72			N/A	Y	3461

the problem is that the quota limit gets crossed when one of the bricks is brought down

Version-Release number of selected component (if applicable):
3.3.0qa40


How reproducible:
always

Steps to Reproduce:
1. set a quota limit of 2GB on the root of the volume

2. from an nfs mount, add data inside a directory, up to 1GB in size (many files)

3. kill one of the brick processes using "kill <pid>"; in this case the brick brought down was "172.17.251.71:/export/dr-q"

4. from the nfs mount, keep adding data (see the shell sketch after these steps)
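
A minimal shell sketch of these steps, assuming the volume name "dist-rep-quota" from the status output above and a hypothetical mount point /mnt/quota-test (the quota subcommands here are the 3.3-era "gluster volume quota" CLI):

# step 1: enable quota and set a 2GB limit on the volume root
gluster volume quota dist-rep-quota enable
gluster volume quota dist-rep-quota limit-usage / 2GB

# step 2: mount over NFS (gluster NFS is v3 only) and write ~1GB as many files
mount -t nfs -o vers=3,nolock 172.17.251.71:/dist-rep-quota /mnt/quota-test
mkdir -p /mnt/quota-test/dir1
for i in $(seq 1 100); do
    dd if=/dev/zero of=/mnt/quota-test/dir1/file$i bs=1M count=10
done

# step 3: kill one brick process; 4078 is the PID of
# 172.17.251.71:/export/dr-q in the status output above
kill 4078

# step 4: keep writing; writes keep succeeding past the 2GB limit
for i in $(seq 101 300); do
    dd if=/dev/zero of=/mnt/quota-test/dir1/file$i bs=1M count=10
done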

  
Actual results:
the data is allowed to be added, crossing the limit

Expected results:
the quota limit should still be honored

Additional info:
even after bringing the brick process back, data addition continued to succeed until self-heal was triggered using "find . | xargs stat" over the nfs mount.
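
A sketch of that recovery sequence, under the same assumptions as above ("gluster volume start <vol> force" restarts the offline brick process):

# bring the killed brick process back up
gluster volume start dist-rep-quota force

# data addition still crosses the limit until self-heal is triggered
# by crawling the mount, as described above
cd /mnt/quota-test
find . | xargs stat > /dev/null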

Comment 1 Amar Tumballi 2013-02-04 11:12:27 UTC
We need to have an extra flag set on a directory when the quota limit is reached on that directory.

It is extremely hard to keep information about the lost brick for quota computation in a non-replicated setup. But we need an enhancement that sets a quota-limit-reached flag in an xattr when usage is at ~95% of the limit value. That way, we will not miss the quota limit by a large margin.
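
A rough illustration of the idea, assuming a hypothetical xattr key "trusted.glusterfs.quota.limit-reached" (not an existing key; the real quota accounting xattrs live under the trusted.glusterfs.quota.* namespace on the bricks):

# inspect the existing quota xattrs on a brick directory
getfattr -d -m 'trusted.glusterfs.quota' -e hex /export/dr-q/dir1

# the enhancement: once usage hits ~95% of the limit, the quota
# translator would set a flag like this on the directory, so the
# surviving bricks keep enforcing the limit even after a brick is lost
setfattr -n trusted.glusterfs.quota.limit-reached -v 0x01 /export/dr-q/dir1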

This issue should just be marked as a Known Issue and handled by setting the right expectation with admins, rather than by a technical solution, which would make performance crawl and would never satisfy 100% of the user base.

If anyone thinks this feature is a must for using gluster quota (after reading the known issues section), please re-open the bug.