Bug 1019806 - Quota + Rebalance : "Disk quota exceeded" warning message seen on file creation even after quota is disabled
Status: CLOSED ERRATA
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: glusterfs
Version: 2.1
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Assigned To: krishnan parthasarathi
QA Contact: senaik
Keywords: ZStream
Reported: 2013-10-16 08:48 EDT by senaik
Modified: 2015-11-03 18:05 EST
CC: 10 users

Fixed In Version: glusterfs-3.4.0.40rhs
Doc Type: Bug Fix
Last Closed: 2013-11-27 10:42:17 EST
Type: Bug


Attachments: None
Description senaik 2013-10-16 08:48:09 EDT
Description of problem:
====================== 
"Disk quota exceeded" warning message seen on file creation even after quota is disabled

Version-Release number of selected component (if applicable):
============================================================
glusterfs 3.4.0.35rhs


How reproducible:
=================
Encountered only once so far.


Steps to Reproduce:
===================
1. Create a distributed volume with 3 bricks and start it.
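The exact create command is not in the report; a hypothetical reconstruction, with the brick hosts and paths taken from the `gluster v i VOL1` output further down, would be:

```shell
# Hypothetical reconstruction of this step (the report omits the exact
# commands); brick hosts/paths are copied from the volume info below.
gluster volume create VOL1 \
    10.70.34.86:/rhs/brick1/ab1 \
    10.70.34.85:/rhs/brick1/ab2 \
    10.70.34.88:/rhs/brick1/ab3
gluster volume start VOL1
```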

2. Enable quota on the volume:
[root@boost brick1]# gluster volume quota VOL1 enable 
volume quota : success

3. Create some files 
for i in {1..95} ; do dd if=/dev/urandom of=z"$i" bs=10M count=1; done

4. Peer probe another node:
[root@boost brick1]# gluster peer probe 10.70.34.89
peer probe: success. 

5. Add a brick from the newly probed node:
 gluster v add-brick VOL1 10.70.34.89:/rhs/brick1/ab4
volume add-brick: success

gluster v i VOL1
 
Volume Name: VOL1
Type: Distribute
Volume ID: 6fb5ec85-0d81-4c26-90f3-84cc48bd653f
Status: Started
Number of Bricks: 4
Transport-type: tcp
Bricks:
Brick1: 10.70.34.86:/rhs/brick1/ab1
Brick2: 10.70.34.85:/rhs/brick1/ab2
Brick3: 10.70.34.88:/rhs/brick1/ab3
Brick4: 10.70.34.89:/rhs/brick1/ab4
Options Reconfigured:
features.quota: on

6. Limit usage on /:
[root@boost brick1]# gluster volume quota VOL1 limit-usage / 1GB
volume quota : success

7. Start rebalance and check the rebalance status.
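The rebalance commands are not shown in the report; the usual invocation (assumed, not verbatim) is:

```shell
# Rebalance must be started explicitly after add-brick so that existing
# files are migrated onto the new brick.
gluster volume rebalance VOL1 start
# Re-run until every node reports "completed":
gluster volume rebalance VOL1 status
```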

8. Check the quota list:
 gluster v quota VOL1 list
                  Path                   Hard-limit Soft-limit   Used  Available
--------------------------------------------------------------------------------
/                                          1.0GB       80%     950.0MB  74.0MB

9. Create more files so that the quota limit is exceeded.

After reaching the limit, the following messages appeared:

10485760 bytes (10 MB) copied, 1.08936 s, 9.6 MB/s
1+0 records in
1+0 records out
10485760 bytes (10 MB) copied, 1.14267 s, 9.2 MB/s
dd: writing `z103': Disk quota exceeded
dd: closing output file `z103': Disk quota exceeded
dd: opening `z104': Disk quota exceeded
dd: writing `z105': Disk quota exceeded
dd: closing output file `z105': Disk quota exceeded
dd: opening `z106': Disk quota exceeded

gluster v quota VOL1 list
                  Path                   Hard-limit Soft-limit   Used  Available
--------------------------------------------------------------------------------
/                                          1.0GB       80%       1.0GB  0Bytes


10. Stopped and started the volume, then tried creating files again; still got "Disk quota exceeded".
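Spelled out, the stop/start cycle (commands assumed, not verbatim from the report) is:

```shell
# Restarting the volume restarts the brick processes; the stale quota
# enforcement was still observed afterwards.
gluster volume stop VOL1    # prompts for y/n confirmation
gluster volume start VOL1
```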

11. Disable quota:

 gluster v quota VOL1 disable
Disabling quota will delete all the quota configuration. Do you want to continue? (y/n) y
volume quota : success

[root@boost brick1]# gluster v quota VOL1 list
quota command failed : Quota is not enabled on volume VOL1


[root@boost brick1]# gluster v i VOL1
Volume Name: VOL1
Type: Distribute
Volume ID: 6fb5ec85-0d81-4c26-90f3-84cc48bd653f
Status: Started
Number of Bricks: 4
Transport-type: tcp
Bricks:
Brick1: 10.70.34.86:/rhs/brick1/ab1
Brick2: 10.70.34.85:/rhs/brick1/ab2
Brick3: 10.70.34.88:/rhs/brick1/ab3
Brick4: 10.70.34.89:/rhs/brick1/ab4
Options Reconfigured:
features.quota: off

12. Tried to create files from the client. Some files were created, but "Disk quota exceeded" was still reported for others:


[root@dhcp-0-180 VOL1]# for i in {201..250} ; do dd if=/dev/urandom of=z"$i" bs=10M count=1; done
1+0 records in
1+0 records out
10485760 bytes (10 MB) copied, 1.07011 s, 9.8 MB/s
1+0 records in
1+0 records out
10485760 bytes (10 MB) copied, 1.0678 s, 9.8 MB/s
dd: opening `z203': Disk quota exceeded
1+0 records in
1+0 records out
.
.
.
10485760 bytes (10 MB) copied, 1.07488 s, 9.8 MB/s
1+0 records in
1+0 records out
10485760 bytes (10 MB) copied, 1.07218 s, 9.8 MB/s
1+0 records in
1+0 records out
10485760 bytes (10 MB) copied, 1.06797 s, 9.8 MB/s
dd: opening `z218': Disk quota exceeded
dd: opening `z219': Disk quota exceeded
dd: opening `z220': Disk quota exceeded
1+0 records in
1+0 records out

[root@dhcp-0-180 VOL1]# du -sh
1.2G	

Actual results:
===============
Even after quota is disabled, attempts to create files intermittently fail with the "Disk quota exceeded" warning.

Expected results:
=================
After quota is disabled, file creation should succeed without any "Disk quota exceeded" warnings.


Additional info:
================
getfattr -d -m . -e hex /rhs/brick1/ab1/
getfattr: Removing leading '/' from absolute path names
# file: rhs/brick1/ab1/
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.dht=0x0000000100000000bffffffdffffffff
trusted.glusterfs.quota.dirty=0x3000
trusted.glusterfs.quota.limit-set=0x0000000040000000ffffffffffffffff
trusted.glusterfs.quota.size=0x000000000d200000
trusted.glusterfs.volume-id=0x6fb5ec850d814c2690f384cc48bd653f


getfattr -d -m . -e hex /rhs/brick1/ab2/
getfattr: Removing leading '/' from absolute path names
# file: rhs/brick1/ab2/
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.dht=0x0000000100000000000000003ffffffe
trusted.glusterfs.quota.dirty=0x3000
trusted.glusterfs.quota.limit-set=0x0000000040000000ffffffffffffffff
trusted.glusterfs.quota.size=0x0000000013aa0000
trusted.glusterfs.volume-id=0x6fb5ec850d814c2690f384cc48bd653f


getfattr -d -m . -e hex /rhs/brick1/ab3/
getfattr: Removing leading '/' from absolute path names
# file: rhs/brick1/ab3/
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.dht=0x00000001000000007ffffffebffffffc
trusted.glusterfs.quota.dirty=0x3000
trusted.glusterfs.quota.limit-set=0x0000000040000000ffffffffffffffff
trusted.glusterfs.quota.size=0x0000000012200000
trusted.glusterfs.volume-id=0x6fb5ec850d814c2690f384cc48bd653f


getfattr -d -m . -e hex /rhs/brick1/ab4/
getfattr: Removing leading '/' from absolute path names
# file: rhs/brick1/ab4/
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.dht=0x00000001000000003fffffff7ffffffd
trusted.glusterfs.quota.dirty=0x3000
trusted.glusterfs.quota.limit-set=0x0000000040000000ffffffffffffffff
trusted.glusterfs.quota.size=0x000000000d200000
trusted.glusterfs.volume-id=0x6fb5ec850d814c2690f384cc48bd653f
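The dumps above show that the `trusted.glusterfs.quota.*` keys survive on every brick root even though quota was disabled. A hypothetical manual cleanup, assuming quota stays disabled and root access on each server, would strip them with `setfattr -x`:

```shell
# Hypothetical workaround, not from the report: remove leftover quota
# xattrs from a brick root. Run on each server against its own brick.
BRICK=/rhs/brick1/ab1   # adjust per node: ab2, ab3, ab4
for attr in trusted.glusterfs.quota.dirty \
            trusted.glusterfs.quota.limit-set \
            trusted.glusterfs.quota.size; do
    setfattr -x "$attr" "$BRICK"
done
```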
Comment 2 Gowrishankar Rajaiyan 2013-10-17 02:53:41 EDT
Per bug triage 10/17.

need workaround if not fixed.
Comment 3 Susant Kumar Palai 2013-11-11 08:50:36 EST
Followed the above steps and couldn't reproduce the issue any more.
Comment 4 Vivek Agarwal 2013-11-14 06:26:14 EST
Moving the known issues to Doc team, to be documented in release notes for U1
Comment 7 senaik 2013-11-15 06:31:13 EST
Version : 3.4.0.44rhs

Repeated the steps as mentioned in "Steps to Reproduce" and did not face the issue again.

Marking the bug as verified.
Comment 8 errata-xmlrpc 2013-11-27 10:42:17 EST
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2013-1769.html
