Bug 764429 (GLUSTER-2697) - Quota: add-brick makes the reported size go wrong, though it was correct earlier
Summary: Quota: add-brick makes the reported size go wrong, though it was correct earlier
Keywords:
Status: CLOSED WORKSFORME
Alias: GLUSTER-2697
Product: GlusterFS
Classification: Community
Component: quota
Version: mainline
Hardware: x86_64
OS: Linux
Priority: medium
Severity: medium
Target Milestone: ---
Assignee: vpshastry
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks: 848245
Reported: 2011-04-08 14:15 UTC by Saurabh
Modified: 2014-08-11 23:22 UTC
CC: 5 users

Fixed In Version: glusterfs-3.4.0qa6
Doc Type: Bug Fix
Doc Text:
Clone Of:
: 848245
Environment:
Last Closed: 2013-02-04 10:52:26 UTC
Regression: ---
Mount Type: ---
Documentation: DP
CRM:
Verified Versions:



Description Saurabh 2011-04-08 14:15:45 UTC
Adding bricks to a distribute volume changes the value shown in the size field,
even though the size displayed before the add-brick was correct.


[root@centos-qa-client-2 sbin]# ./gluster volume quota dist2 list
        path              limit_set          size
----------------------------------------------------------------------------------
/d1                     1048576              1044480
[root@centos-qa-client-2 sbin]# ./gluster volume add-brick
Usage: volume add-brick <VOLNAME> <NEW-BRICK> ...
[root@centos-qa-client-2 sbin]# ./gluster volume add-brick dist2 10.1.12.134:/mnt/dist-dist2 10.1.12.135:/mnt/dist-dist2
Add Brick successful
[root@centos-qa-client-2 sbin]# ./gluster volume quota dist2 list
        path              limit_set          size
----------------------------------------------------------------------------------
/d1                     1048576              1064960
[root@centos-qa-client-2 sbin]# ./gluster volume rebalance 
Usage: volume rebalance <VOLNAME> [fix-layout|migrate-data] {start|stop|status}
[root@centos-qa-client-2 sbin]# ./gluster volume rebalance dist2 start
starting rebalance on volume dist2 has been successful
[root@centos-qa-client-2 sbin]# ./gluster volume rebalance dist2 status
rebalance completed
[root@centos-qa-client-2 sbin]# ./gluster volume quota dist2 list
        path              limit_set          size
----------------------------------------------------------------------------------
/d1                     1048576              1064960
[root@centos-qa-client-2 sbin]# 


Note: no new file was created after the add-brick.


#############################
Extended attributes (xattrs):


[root@centos-qa-client-3 sbin]# getfattr -m . -d -e hex /mnt/dist2/d
d1/ d2/ 
[root@centos-qa-client-3 sbin]# getfattr -m . -d -e hex /mnt/dist2/d1
getfattr: Removing leading '/' from absolute path names
# file: mnt/dist2/d1
trusted.gfid=0x1096b6da51ba4fd3b658d12e1bd7fb01
trusted.glusterfs.dht=0x0000000100000000bffffffdffffffff
trusted.glusterfs.quota.00000000-0000-0000-0000-000000000001.contri=0x000000000005f000
trusted.glusterfs.quota.dirty=0x3000
trusted.glusterfs.quota.size=0x000000000005f000

[root@centos-qa-client-3 sbin]# getfattr -m . -d -e hex /mnt/dist-dist2/d
d1/ d2/ 
[root@centos-qa-client-3 sbin]# getfattr -m . -d -e hex /mnt/dist-dist2/d
d1/ d2/ 
[root@centos-qa-client-3 sbin]# getfattr -m . -d -e hex /mnt/dist-dist2/d1
getfattr: Removing leading '/' from absolute path names
# file: mnt/dist-dist2/d1
trusted.gfid=0xb3f82d6387584a738fb7f70109154501
trusted.glusterfs.dht=0x00000001000000003fffffff7ffffffd
trusted.glusterfs.quota.00000000-0000-0000-0000-000000000001.contri=0x0000000000000000
trusted.glusterfs.quota.dirty=0x3000
trusted.glusterfs.quota.size=0x0000000000000000

[root@centos-qa-client-3 sbin]# 




[root@centos-qa-client-2 sbin]# getfattr -m . -d -e hex /mnt/dist2/d1
getfattr: Removing leading '/' from absolute path names
# file: mnt/dist2/d1
trusted.gfid=0x1096b6da51ba4fd3b658d12e1bd7fb01
trusted.glusterfs.dht=0x00000001000000007ffffffebffffffc
trusted.glusterfs.quota.00000000-0000-0000-0000-000000000001.contri=0x00000000000a0000
trusted.glusterfs.quota.dirty=0x3000
trusted.glusterfs.quota.size=0x00000000000a0000

[root@centos-qa-client-2 sbin]# getfattr -m . -d -e hex /mnt/dist-dist2/d
d1/ d2/ 
[root@centos-qa-client-2 sbin]# getfattr -m . -d -e hex /mnt/dist-dist2/d
d1/ d2/ 
[root@centos-qa-client-2 sbin]# getfattr -m . -d -e hex /mnt/dist-dist2/d1
getfattr: Removing leading '/' from absolute path names
# file: mnt/dist-dist2/d1
trusted.gfid=0xb3f82d6387584a738fb7f70109154501
trusted.glusterfs.dht=0x0000000100000000000000003ffffffe
trusted.glusterfs.quota.00000000-0000-0000-0000-000000000001.contri=0x0000000000005000
trusted.glusterfs.quota.dirty=0x3000
trusted.glusterfs.quota.size=0x0000000000005000

[root@centos-qa-client-2 sbin]#
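
For cross-checking, the listed sizes can be reconciled with the per-brick quota.size xattrs above; the following is just that arithmetic redone with bash, using the hex values from the getfattr output (illustrative only):

    # The two original bricks' /d1 contributions (client-3 and client-2, /mnt/dist2):
    echo $(( 0x5f000 + 0xa0000 ))                 # 1044480 -- the size listed before add-brick
    # All four bricks after add-brick (adding the new /mnt/dist-dist2 contributions):
    echo $(( 0x5f000 + 0xa0000 + 0x0 + 0x5000 ))  # 1064960 -- the size listed after add-brick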

Comment 1 Saurabh 2011-04-13 09:11:15 UTC
Adding one more observation here: add-brick + rebalance is already showing issues,
but removing the added bricks and eventually deleting all the files from the volume corrupts the xattrs.

Files remaining on a distribute setup after removing the bricks:

[root@centos-qa-client-1 g-dist4]# ls -R
.:
d1  d2

./d1:
f.10   f.11  f.18  f.21  f.27  f.31  f.4   f.42  f.45  f.47  f.5   f.54  f.58  f.61  f.64  f.77  f.79  f.85  f.89
f.100  f.12  f.19  f.25  f.29  f.34  f.41  f.43  f.46  f.48  f.53  f.55  f.6   f.62  f.69  f.78  f.8   f.88  f.99

./d2:

Removing these files corrupts the xattr:

[root@centos-qa-client-1 g-dist4]# cd d1
[root@centos-qa-client-1 d1]# rm -rf *
[root@centos-qa-client-1 d1]# cd ..
[root@centos-qa-client-1 g-dist4]# ls -R
.:
d1  d2

./d1:

./d2:

#########server side ###################
[root@centos-qa-client-2 sbin]# getfattr -m . -d -e hex /mnt/dist4 | grep size
getfattr: Removing leading '/' from absolute path names
trusted.glusterfs.quota.size=0xffffffffffffc568
[root@centos-qa-client-2 sbin]# 
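
For what it is worth, that quota.size value decodes to a negative number when read as a signed 64-bit integer, which is what the corruption amounts to here. A quick illustrative check (bash arithmetic is 64-bit signed, so the hex constant wraps):

    # 0xffffffffffffc568 is the two's-complement encoding of a negative size:
    # 2^64 - 0xffffffffffffc568 = 0x3a98 = 15000, i.e. the accounted size is -15000 bytes.
    echo $(( 0xffffffffffffc568 ))                # prints -15000 on 64-bit bash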


This exercise was done over a FUSE mount.

Comment 2 Amar Tumballi 2011-04-18 13:30:26 UTC
I am working on this issue now. In any case, as it is not making it into 3.2.0 (the last QA release has already been made, and fixing this may take anywhere from ~2 hours to many days), it will have to be marked as a known issue for now.

Comment 3 Amar Tumballi 2011-04-18 14:14:34 UTC
Noticed something strange.

This volume was created with one brick; two more bricks were added, and then a rebalance was done.

---------------

root@home:~# gluster volume info
Volume Name: test
Type: Distribute
Status: Started
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: home:/tmp/export/t1
Brick2: home:/tmp/export/t2
Brick3: home:/tmp/export/t3
Options Reconfigured:
features.quota: on
root@home:~# 


root@home:~# stat /tmp/export/t*/etc /etc/
  File: `/tmp/export/t1/etc'
  Size: 12288     	Blocks: 32         IO Block: 4096   directory
Device: 801h/2049d	Inode: 279149      Links: 142
Access: (0755/drwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2011-04-18 22:14:31.000000000 +0530
Modify: 2011-04-18 22:09:44.000000000 +0530
Change: 2011-04-18 22:14:26.000000000 +0530
  File: `/tmp/export/t2/etc'
  Size: 4096      	Blocks: 16         IO Block: 4096   directory
Device: 801h/2049d	Inode: 281860      Links: 142
Access: (0755/drwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2011-04-18 22:14:56.000000000 +0530
Modify: 2011-04-18 22:09:44.000000000 +0530
Change: 2011-04-18 22:14:36.000000000 +0530
  File: `/tmp/export/t3/etc'
  Size: 4096      	Blocks: 16         IO Block: 4096   directory
Device: 801h/2049d	Inode: 281862      Links: 142
Access: (0755/drwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2011-04-18 22:14:56.000000000 +0530
Modify: 2011-04-18 22:09:44.000000000 +0530
Change: 2011-04-18 22:14:36.000000000 +0530
  File: `/etc/'
  Size: 12288     	Blocks: 24         IO Block: 4096   directory
Device: 801h/2049d	Inode: 352945      Links: 142
Access: (0755/drwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2011-04-18 22:13:50.000000000 +0530
Modify: 2011-04-18 22:10:15.000000000 +0530
Change: 2011-04-18 22:10:15.000000000 +0530

root@home:~# cp /etc/hosts /mnt/gfs/hosts
root@home:~# du -h /tmp/export/t*/hosts /etc/hosts
8.0K	/tmp/export/t3/hosts
4.0K	/etc/hosts
root@home:~# stat /tmp/export/t*/hosts /etc/hosts
  File: `/tmp/export/t3/hosts'
  Size: 382       	Blocks: 16         IO Block: 4096   regular file
Device: 801h/2049d	Inode: 280449      Links: 1
Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2011-04-18 22:32:07.000000000 +0530
Modify: 2011-04-18 22:41:48.000000000 +0530
Change: 2011-04-18 22:41:48.000000000 +0530
  File: `/etc/hosts'
  Size: 382       	Blocks: 8          IO Block: 4096   regular file
Device: 801h/2049d	Inode: 353083      Links: 1
Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2011-04-18 21:54:11.000000000 +0530
Modify: 2011-04-18 21:53:28.000000000 +0530
Change: 2011-04-18 21:53:28.000000000 +0530

--------

root@home:~# du -sh /mnt/gfs/etc /etc/ /tmp/export/t*/etc
71M	/mnt/gfs/etc                 # mount point
14M	/etc/                        # source
8.9M	/tmp/export/t1/etc           # bricks after rebalance
23M	/tmp/export/t2/etc
39M	/tmp/export/t3/etc
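
The numbers above are consistent with each other: the mount-point figure is roughly the sum of the per-brick figures, so the extra space is being consumed on the bricks themselves rather than being a reporting error. A quick illustrative check using the du values above:

    # Per-brick usage adds up to what the mount point reports:
    awk 'BEGIN { print 8.9 + 23 + 39 }'           # ~70.9, matching the 71M shown for /mnt/gfs/etc
    # ...while the source /etc is only 14M; compare /etc/hosts above: 382 bytes,
    # 8 blocks at the source but 16 blocks on the t3 brick.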

Comment 4 Anand Avati 2011-04-21 04:39:31 UTC
PATCH: http://patches.gluster.com/patch/7002 in master (features/marker: reduce the size corresponding to destination file if it is already present from parent directories.)

Comment 5 Saurabh 2011-04-22 04:43:42 UTC
This issue still persists:


[root@centos-qa-client-2 sbin]# ./gluster volume quota drep limit-usage /d1 2MB
limit set on /d1
[root@centos-qa-client-2 sbin]# ./gluster volume quota drep list
	path		  limit_set	     size
----------------------------------------------------------------------------------
/                       6291456              4190208
/d2                     1048576              1048576
/d1                     2097152              2097152
[root@centos-qa-client-2 sbin]# ./gluster volume quota drep limit-usage /d3 1MB
limit set on /d3
[root@centos-qa-client-2 sbin]# ./gluster volume quota drep limit-usage /d3 1MB
limit set on /d3
[root@centos-qa-client-2 sbin]# ./gluster volume quota drep list
	path		  limit_set	     size
----------------------------------------------------------------------------------
/                       6291456              5009408
/d2                     1048576              1048576
/d1                     2097152              2097152
/d3                     1048576               819200
[root@centos-qa-client-2 sbin]# ./gluster volume info drep

Volume Name: drep
Type: Distributed-Replicate
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 10.1.12.134:/mnt/drep
Brick2: 10.1.12.135:/mnt/drep
Brick3: 10.1.12.134:/mnt/ddrep
Brick4: 10.1.12.135:/mnt/ddrep
Options Reconfigured:
features.limit-usage: /:6MB,/d2:1MB,/d1:2MB,/d3:1MB
features.quota: on
[root@centos-qa-client-2 sbin]# ./gluster volume add-brick 
Usage: volume add-brick <VOLNAME> <NEW-BRICK> ...
[root@centos-qa-client-2 sbin]# ./gluster volume add-brick drep 10.1.12.134:/mnt/dddrep 10.1135:/mnt/dddrep
[root@centos-qa-client-2 sbin]# ./gluster volume add-brick drep 10.1.12.134:/mnt/dddrep 10.1.12.135:/mnt/dddrep
Add Brick successful
[root@centos-qa-client-2 sbin]# ./gluster volume quota drep list
	path		  limit_set	     size
----------------------------------------------------------------------------------
/                       6291456              5009408
/d2                     1048576              1048576
/d1                     2097152              2097152
/d3                     1048576               819200
[root@centos-qa-client-2 sbin]# ./gluster volume rebalance drep start
starting rebalance on volume drep has been successful
[root@centos-qa-client-2 sbin]# ./gluster volume drep status
unrecognized word: drep (position 1)
[root@centos-qa-client-2 sbin]# ./gluster volume rebalance drep status
rebalance completed: rebalanced 48 files of size 47000 (total files scanned 164)
[root@centos-qa-client-2 sbin]# ./gluster volume rebalance drep status
rebalance completed: rebalanced 48 files of size 47000 (total files scanned 164)
[root@centos-qa-client-2 sbin]# ./gluster volume rebalance drep status
rebalance completed: rebalanced 48 files of size 47000 (total files scanned 164)
[root@centos-qa-client-2 sbin]# ./gluster volume quota drep list
	path		  limit_set	     size
----------------------------------------------------------------------------------
/                       6291456              6336512
/d2                     1048576              1060864
/d1                     2097152              2101248
/d3                     1048576              1081344
[root@centos-qa-client-2 sbin]# ./gluster volume quota drep list
	path		  limit_set	     size
----------------------------------------------------------------------------------
/                       6291456              6336512
/d2                     1048576              1060864
/d1                     2097152              2101248
/d3                     1048576              1077248
[root@centos-qa-client-2 sbin]# ./gluster volume quota drep list
	path		  limit_set	     size
----------------------------------------------------------------------------------
/                       6291456            131829760
/d2                     1048576              1060864
/d1                     2097152              2101248
/d3                     1048576              1077248
[root@centos-qa-client-2 sbin]# 



In fact, the last quota list is followed by a self-heal on the FUSE mount, using

find . | xargs stat

Comment 6 Raghavendra G 2011-05-11 05:18:44 UTC
With the recent set of patches I've sent for this bug (which are yet to be accepted), add-brick followed by rebalance works fine. However, there is a minor glitch when write-behind is on: the size stored on the export directory is indeed the sum of the contributions from all the nodes, but some directories have different contribution and size values. Without write-behind, there are no issues.

Comment 7 Anand Avati 2011-05-11 23:17:59 UTC
PATCH: http://patches.gluster.com/patch/7161 in master (features/marker: whitespace cleanup)

Comment 8 Junaid 2011-05-19 15:16:23 UTC
An interesting thing I noticed while debugging this bug is that when we use the ia_size field of struct iatt to calculate the size of the files, and then add a brick and do a rebalance, the output of

    > volume quota <vol-name> list

gives the same result as it did before the add-brick. But if we use the convention 512 * ia_blocks to calculate the file size, the output of list is not the same. However, ls -lR on the mount point brings them back in sync. This is the workaround for now, but it will have to be fixed.
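
To make the two conventions concrete, here is an illustrative comparison for an arbitrary file using GNU stat (the /etc/hosts figures from comment 3 show the same effect: 382 bytes, but 8 blocks at the source versus 16 blocks on the brick):

    # Apparent size (what ia_size reports) vs allocated size (512 * ia_blocks):
    f=/etc/hosts                                  # any file will do
    size=$(stat -c %s "$f")                       # bytes, the ia_size convention
    blocks=$(stat -c %b "$f")                     # number of 512-byte blocks
    echo "ia_size=$size bytes   512*ia_blocks=$(( 512 * blocks )) bytes"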

Comment 9 Raghavendra G 2011-05-19 23:25:19 UTC
A deeper problem than this is that the code is racy: inodelks are not held in all the places where we modify xattrs. I am working on this and will send patches shortly.

Comment 10 Raghavendra G 2011-06-09 02:24:54 UTC
Patches sent to master; waiting for them to be accepted. Yet to be sent for 3.2.

Comment 11 Anand Avati 2011-06-17 02:00:55 UTC
PATCH: http://patches.gluster.com/patch/7482 in master (features/marker-quota: fixes in rename path.)

Comment 12 Anand Avati 2011-06-17 02:01:01 UTC
PATCH: http://patches.gluster.com/patch/7483 in master (features/marker-quota: performance optimization.)

Comment 13 Anand Avati 2011-06-17 02:01:07 UTC
PATCH: http://patches.gluster.com/patch/7484 in master (libglusterfs/call-stub: Allow unwinding of frames for rename during call_resume_unwind.)

Comment 14 Anand Avati 2011-06-17 02:01:13 UTC
PATCH: http://patches.gluster.com/patch/7485 in master (features/marker: fixes in dirty inode self-heal codepath.)

Comment 15 Anand Avati 2011-06-17 02:01:19 UTC
PATCH: http://patches.gluster.com/patch/7486 in master (features/marker-quota: use mutexes while accessing contribution values.)

Comment 16 Anand Avati 2011-06-17 02:01:24 UTC
PATCH: http://patches.gluster.com/patch/7487 in master (marker-quota/rename: use contribution values from backend instead of in-memory while reducing parent sizes during rename)

Comment 17 Anand Avati 2011-06-17 02:01:30 UTC
PATCH: http://patches.gluster.com/patch/7488 in master (features/marker-quota: wipe parent_loc in marker_local_unref.)

Comment 18 Anand Avati 2011-06-17 02:01:36 UTC
PATCH: http://patches.gluster.com/patch/7489 in master (features/marker-quota: check for refcount being zero holding lock in quota_local_unref.)

Comment 19 Anand Avati 2011-06-17 02:01:41 UTC
PATCH: http://patches.gluster.com/patch/7490 in master (features/marker-quota: hold parent inodelk during creation of xattrs on directory.)

Comment 20 Anand Avati 2011-06-17 02:01:47 UTC
PATCH: http://patches.gluster.com/patch/7491 in master (features/marker-quota: hold lock on dirty inode's parent while healing a dirty inode.)

Comment 21 Anand Avati 2011-06-17 02:01:53 UTC
PATCH: http://patches.gluster.com/patch/7492 in master (features/marker-quota: use contribution value to reduce parent's size, if the value to be subtracted is not passed as argument to reduce_parent_size.)

Comment 22 Anand Avati 2011-06-20 00:41:26 UTC
PATCH: http://patches.gluster.com/patch/7174 in master (extras: Add quota-related debugging scripts.)

Comment 23 Anand Avati 2011-06-24 01:50:29 UTC
PATCH: http://patches.gluster.com/patch/7560 in master (features/marker-quota: Skip contribution creation on root.)

Comment 24 Anand Avati 2011-07-14 05:01:31 UTC
PATCH: http://patches.gluster.com/patch/7643 in master (Revert "features/marker-quota: hold lock on dirty inode's parent while healing a dirty inode.")

Comment 25 Anand Avati 2011-07-20 10:12:44 UTC
CHANGE: http://review.gluster.com/31 (  - remove xattrs from newpath after rename is complete.) merged in release-3.2 by Anand Avati (avati)

Comment 26 Saurabh 2011-08-09 02:27:31 UTC
[root@Centos1 ~]# /opt/qa/3.2.3/sbin/gluster volume quota dist-rep list
	path		  limit_set	     size
----------------------------------------------------------------------------------
/                           1GB               10.5MB



[root@Centos1 ~]# /opt/qa/3.2.3/sbin/gluster volume info dist-rep 

Volume Name: dist-rep
Type: Distributed-Replicate
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 10.1.12.134:/export/d1
Brick2: 10.1.12.135:/export/r1
Brick3: 10.1.12.134:/export/d2
Brick4: 10.1.12.135:/export/r2
Options Reconfigured:
features.limit-usage: /:1GB



[root@Centos1 ~]# /opt/qa/3.2.3/sbin/gluster volume add-brick dist-rep 10.1.12.134:/export/add-d1 10.1.12.135:/export/add-r1
Add Brick successful


[root@Centos1 ~]# /opt/qa/3.2.3/sbin/gluster volume rebalance dist-rep status
rebalance completed: rebalanced 15 files of size 15360 (total files scanned 101)


[root@Centos1 ~]# /opt/qa/3.2.3/sbin/gluster volume quota dist-rep list
	path		  limit_set	     size
----------------------------------------------------------------------------------
/                           1GB               20.9MB



mount-type,
glusterfs#10.1.12.134:/dist-rep on /mnt/glusterfs-test type fuse (rw,allow_other,default_permissions,max_read=131072)


######################
As per my calculation too, the data size should not be 20.9 MB; it should be approximately 10.5 MB.
######################

Extended attributes,
###########################
getfattr -m . -d -e hex /export/d1
trusted.glusterfs.quota.size=0x0000000000a57000

getfattr -m . -d -e hex /export/d2
trusted.glusterfs.quota.size=0x0000000000647000

getfattr -m . -d -e hex /export/add-d1/
trusted.glusterfs.quota.size=0x0000000000445000
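
Those three per-brick quota.size values do add up to the listed figure, so the aggregation itself is consistent; it is the backend usage that has grown. Illustrative arithmetic on the hex values above:

    echo $(( 0xa57000 + 0x647000 + 0x445000 ))               # 21901312 bytes
    awk 'BEGIN { printf "%.1f MB\n", 21901312 / 1048576 }'   # ~20.9 MB, matching the quota list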


Information from mount point:-
#####################################

[root@Centos1 glusterfs-test]# ls -R
.:
dir1  dir2  f.1  f.10  f.2  f.3  f.4  f.5  f.6  f.7  f.8  f.9

./dir1:
f.1   f.11  f.13  f.15  f.17  f.19  f.20  f.22  f.24  f.26  f.28  f.3   f.4  f.6  f.8
f.10  f.12  f.14  f.16  f.18  f.2   f.21  f.23  f.25  f.27  f.29  f.30  f.5  f.7  f.9

./dir2:
f.1   f.11  f.13  f.15  f.17  f.19  f.20  f.22  f.24  f.26  f.28  f.3   f.4  f.6  f.8
f.10  f.12  f.14  f.16  f.18  f.2   f.21  f.23  f.25  f.27  f.29  f.30  f.5  f.7  f.9
[root@Centos1 glusterfs-test]# 

Here the files directly under the mount-point consist of 2064 blocks each,
and the files inside directories dir1 and dir2 consist of 16 blocks each.

Comment 27 Raghavendra G 2011-08-09 06:53:08 UTC
It's a known issue; quota is not at fault here. Irrespective of whether quota is enabled or not, disk usage increases after doing a rebalance, and quota is just reporting that usage. Bug 2802 explains this clearly.

Comment 28 Saurabh 2011-08-09 09:53:13 UTC
So for add-brick, is there any validation I can perform other than checking that the xattrs are not corrupted?
Or do you have some measure (e.g., by what percentage) or criterion for the change in size after a rebalance?

Comment 29 Raghavendra G 2011-08-10 00:02:23 UTC
One test would be to add up the sizes of all non-directories (each of which should equal 512 * number-of-blocks) and verify that this total equals the size stored on the export directory. For distribute, the size displayed for the volume should equal the sum of the sizes of the export directories. You can make use of the quota-related scripts in the <glusterfs-src>/extras directory for this purpose.
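
A rough sketch of that check, run against a single brick's export directory (illustrative only; the path is hypothetical, and it assumes GNU find, gawk, and getfattr are available):

    export_dir=/mnt/dist2                         # hypothetical brick export path

    # Sum 512 * blocks over every non-directory under the export directory.
    actual=$(find "$export_dir" -mindepth 1 ! -type d -printf '%b\n' \
               | awk '{ sum += $1 * 512 } END { print sum }')

    # Read the size the quota marker stored on the export directory itself.
    stored=$(getfattr -n trusted.glusterfs.quota.size -e hex "$export_dir" 2>/dev/null \
               | awk -F= '/quota.size/ { print strtonum($2) }')   # strtonum is a gawk extension

    echo "computed: $actual bytes   stored xattr: $stored bytes"
    # For distribute, the size shown by 'gluster volume quota <VOLNAME> list'
    # should then equal the sum of these stored values across all bricks.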

Comment 30 Saurabh 2011-08-25 05:46:53 UTC
I tried this and found a slight difference,

before add-brick, with 100 files,

[root@Centos1 nfs-test]# gluster volume quota dist-rep list
	path		  limit_set	     size
----------------------------------------------------------------------------------
/                           1MB                1.0MB

after add-brick and rebalance,

[root@Centos1 nfs-test]# gluster volume quota dist-rep list
	path		  limit_set	     size
----------------------------------------------------------------------------------
/                           1MB                1.1MB

Comment 31 Amar Tumballi 2011-09-20 03:25:20 UTC
Need information on whether it is fixed in the new version (master/release-3.2).

Comment 32 Raghavendra G 2011-09-20 07:35:37 UTC
Apart from the increase in size during rebalance (which is most likely because of the link-files created, and which is not detected because quota enforcement is on the client side), did you verify that the size of all non-directories adds up to the size stored on the export directory?

regards,
Raghavendra.

Comment 33 Raghavendra G 2011-09-20 07:41:44 UTC
(In reply to comment #32)
> Apart from the increase in size during rebalance (which is most likely because
> of link-files created - which is not detected because quota enforcement is on
> client side), 

Since quota is loaded in the volfile for the temporary mount-point created for rebalance, quota should not have allowed the size to exceed the limit. I have to investigate this issue.

>did you verify that size of all non-directories adds up to the
> size stored on the export directory?
> 
> regards,
> Raghavendra.

Comment 34 Raghavendra G 2011-09-22 02:15:11 UTC
(In reply to comment #30)

This behavior is seen only on master, not on the release-3.2 branch. On master, the way rebalance is done has changed from glusterd doing
1. create a tmp-file on the target node,
2. read from the source,
3. write to the tmp-file,
4. rename the tmp-file to the file

to glusterd just setting an xattr indicating that distribute should do the data-migration.
The entire data-migration is done by distribute alone without involving glusterd. Since quota is loaded on top of distribute, the actual data-migration is never seen by quota and can result in exceeding the limit. We can probably mark this as a known limitation.

regards,
Raghavendra.

> I tried this and found a slight difference,
> 
> before add-brick, with 100 files,
> 
> [root@Centos1 nfs-test]# gluster volume quota dist-rep list
>     path          limit_set         size
> ----------------------------------------------------------------------------------
> /                           1MB                1.0MB
> 
> after add-brick and rebalance,
> 
> [root@Centos1 nfs-test]# gluster volume quota dist-rep list
>     path          limit_set         size
> ----------------------------------------------------------------------------------
> /                           1MB                1.1MB

Comment 35 Saurabh 2011-10-03 09:07:10 UTC
[root@Centos3 new_scripts]# ./quota_wrapper.py 
/opt/glusterfs/3.3.0qa13/sbin/gluster volume create quota_dist_rep replica 2 10.1.12.14:/mnt/quota_dist_rep.1317642762 10.1.12.190:/mnt/quota_dist_rep.1317642762 10.1.12.14:/mnt/quota_dist_rep.1317642765 10.1.12.190:/mnt/quota_dist_rep.1317642765

Creation of volume quota_dist_rep has been successful. Please start the volume to access data.

quota_dist_rep

Starting volume quota_dist_rep has been successful


##### The fourth test starts. #####

 
Volume Name: quota_dist_rep
Type: Distributed-Replicate
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 10.1.12.14:/mnt/quota_dist_rep.1317642762
Brick2: 10.1.12.190:/mnt/quota_dist_rep.1317642762
Brick3: 10.1.12.14:/mnt/quota_dist_rep.1317642765
Brick4: 10.1.12.190:/mnt/quota_dist_rep.1317642765

/opt/glusterfs/3.3.0qa13/sbin/gluster

Quota is already disabled
Quota command failed

/opt/glusterfs/3.3.0qa13/sbin/gluster

Enabling quota has been successful

/opt/glusterfs/3.3.0qa13/sbin/gluster

limit set on /


	path		  limit_set	     size
----------------------------------------------------------------------------------
/                           2MB               0Bytes

 creating the data
/mnt/quota_fuse.1317642770/d.0
/mnt/quota_fuse.1317642770/d.1
/mnt/quota_fuse.1317642770/d.2

	path		  limit_set	     size
----------------------------------------------------------------------------------
/                           2MB                1.9MB

adding the brick(s)

Add Brick successful


Add Brick successful


	path		  limit_set	     size
----------------------------------------------------------------------------------
/                           2MB                1.9MB

 recreating the data

	path		  limit_set	     size
----------------------------------------------------------------------------------
/                           2MB                2.0MB

Comment 36 Amar Tumballi 2013-02-04 10:52:26 UTC
With the understanding that gluster quota is a distributed implementation, and that it accounts for the blocks used by the backend, with add-brick there is always a contribution from the new directories created on the backend, which adds to the usage counted against the quota limit.

I would recommend keeping the numbers below in mind while testing quota. The minimum limit set on a directory should be 1GB, and we should allow a 1-3% margin for exceeding the limit when we do any of the distributed volume operations.
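
As a concrete illustration of that guidance (numbers are my own, not measured from this bug): with the suggested 1GB minimum limit and a 3% tolerance, a reported size of up to roughly 1.03GB after an add-brick/rebalance would still be within the expected margin:

    limit=$(( 1024 * 1024 * 1024 ))               # 1GB, the suggested minimum limit
    margin=$(( limit * 3 / 100 ))                 # 3% tolerance
    echo "acceptable reported size: up to $(( limit + margin )) bytes (~1.03GB)"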

