Bug 821725 - quota: brick process kill allows quota limit cross
Product: GlusterFS
Classification: Community
Component: quota
Hardware: x86_64 Linux
Priority: medium Severity: medium
Assigned To: vpshastry
Keywords: FutureFeature, Triaged
Depends On:
Blocks: 848253
Reported: 2012-05-15 08:45 EDT by Saurabh
Modified: 2016-01-19 01:10 EST (History)
5 users

See Also:
Fixed In Version:
Doc Type: Known Issue
Doc Text:
Cause: When a quota limit is set on a distributed volume and a brick goes down while I/O is in progress, the effective quota limit can be exceeded, because the distribute translator does not see the contribution of the offline brick.
Consequence: The quota limit gets exceeded.
Workaround (if any): Use replication if 100% consistency is needed when a node goes down.
Result: With replication in place, a single brick failure is tolerated and the quota limit is maintained as is.
Story Points: ---
Clone Of:
: 848253
Last Closed: 2013-02-04 06:12:27 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---

Attachments: None
Description Saurabh 2012-05-15 08:45:49 EDT
Description of problem:
volume type: distribute-replicate(2x2)
number of nodes: 2
[root@RHS-71 ~]# gluster volume status dist-rep-quota
Status of volume: dist-rep-quota
Gluster process						Port	Online	Pid
Brick			24017	Y	4078
Brick			24010	Y	3183
Brick			24018	Y	3252
Brick			24011	Y	3189
NFS Server on localhost					38467	Y	3965
Self-heal Daemon on localhost				N/A	Y	3942
NFS Server on				38467	Y	32306
Self-heal Daemon on			N/A	Y	32292
NFS Server on				38467	Y	6940
Self-heal Daemon on			N/A	Y	6926
NFS Server on				38467	Y	3475
Self-heal Daemon on			N/A	Y	3461

the problem is that the quota limit gets crossed when one of the bricks is brought down

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1. put a limit of 2GB on the root of the volume

2. from the nfs mount, add data inside a directory up to 1GB in size (many files)

3. now kill one of the brick processes using "kill <pid>"; in this case the brick brought down was for ""

4. from the nfs mount, keep adding data

Actual results:
the data is allowed to be added and the limit is crossed

Expected results:
the quota limit should still be honored

Additional info:
even after bringing the brick process back, data continued to be added successfully until self-heal was triggered using "find . | xargs stat" over the nfs mount.
Comment 1 Amar Tumballi 2013-02-04 06:12:27 EST
We need an extra flag set on a directory when the quota limit is reached on that directory.

In a non-replicated setup it is extremely hard to retain the lost brick's information for quota computation. But we could add an enhancement that sets a quota-limit flag in an xattr once usage reaches ~95% of the limit value. That way, we would not miss the quota limit by a large margin.

This issue should just be marked as a Known Issue and handled by setting the right expectations with admins, rather than by a technical solution that would make performance crawl and still never satisfy 100% of the user base.

If anyone thinks this feature is a must for using gluster quota (after reading the known-issues section), please re-open the bug.
