Bug 1416313

Summary: [Inode-Quota] Inode quota is not listing proper information and file creation does not obey the inode limitation
Product: [Red Hat Storage] Red Hat Gluster Storage Reporter: Vivek Das <vdas>
Component: quota   Assignee: hari gowtham <hgowtham>
Status: CLOSED WONTFIX QA Contact: Rahul Hinduja <rhinduja>
Severity: medium Docs Contact:
Priority: low    
Version: rhgs-3.2   CC: atumball, rcyriac, rhs-bugs, storage-qa-internal
Target Milestone: ---   Keywords: ZStream
Target Release: ---   
Hardware: Unspecified   
OS: Unspecified   
Whiteboard: Accounting
Fixed In Version: Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of: Environment:
Last Closed: 2018-11-19 08:57:41 UTC Type: Bug
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: --- RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Embargoed:

Description Vivek Das 2017-01-25 09:12:21 UTC
Description of problem:
On a four-node cluster with Samba-CTDB, a volume is set up with the quota plugins and with the hard limit and soft limit set to 0. When an inode quota limit is set on a directory (say 1000), the limit is not obeyed: it is possible to exceed it (e.g. 3000 files can be created). This happens over a Windows mount, a FUSE mount, and a CIFS mount alike.

Also, the output of "gluster volume quota volname list-objects" is garbled:

                  Path                   Hard-limit   Soft-limit      Files       Dirs     Available  Soft-limit exceeded? Hard-limit exceeded?
-----------------------------------------------------------------------------------------------------------------------------------------------
/Windows                                      1000       80%(800) 18446744073709551615         1        1000              No                   No
/Windows-ionode                               1000       80%(800)       3000         1           0             Yes                  Yes
/ludo                                         1000       80%(800)       3000         1           0             Yes                  Yes


Version-Release number of selected component (if applicable):
glusterfs-3.8.4-12.el7rhgs.x86_64
samba-client-4.4.6-4.el7rhgs.x86_64

How reproducible:
2/2

Steps to Reproduce:
1. Enable quota on a gluster volume
2. Set hard-limit & soft-limit to zero
3. Create a directory, say /ABC, at the mount point
4. Set an inode (object) limit of 1000 on the ABC directory
5. Try to create 3000 or more files in that directory
6. Check "gluster volume quota volname list-objects"

Actual results:
All 3000 file creations succeed

Expected results:
File creation should fail once the directory reaches 1000 files
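
For reference, a scripted version of the steps above (a minimal sketch; the volume name "ludo", the mount point /mnt/ludo, and the reading of step 2 as zeroing the quota enforcement timeouts are all assumptions):

#!/usr/bin/env python3
# Sketch of the reproduction steps; volume name, mount point and the
# timeout interpretation of step 2 are assumptions.
import os
import subprocess

VOLUME = "ludo"        # assumed volume name
MOUNT = "/mnt/ludo"    # assumed mount point

def gluster(*args):
    """Run a gluster CLI command and return its stdout."""
    return subprocess.run(["gluster", *args], check=True,
                          capture_output=True, text=True).stdout

gluster("volume", "quota", VOLUME, "enable")
# Step 2, read here as immediate enforcement via zeroed quota timeouts:
gluster("volume", "set", VOLUME, "features.hard-timeout", "0")
gluster("volume", "set", VOLUME, "features.soft-timeout", "0")

testdir = os.path.join(MOUNT, "ABC")
os.makedirs(testdir, exist_ok=True)
gluster("volume", "quota", VOLUME, "limit-objects", "/ABC", "1000")

# Try to create 3000 files; with the limit enforced, creation should
# start failing (EDQUOT) once the directory holds ~1000 objects.
created = 0
for i in range(3000):
    try:
        open(os.path.join(testdir, "file_%d" % i), "w").close()
        created += 1
    except OSError as err:
        print("creation failed at file %d: %s" % (i, err))
        break
print("created %d files (should stop around 1000)" % created)

print(gluster("volume", "quota", VOLUME, "list-objects"))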

Additional info:
getfattr over the bricks

For directory Windows-ionode:
1]. getfattr: Removing leading '/' from absolute path names
# file: bricks/brick2/ludo_brick0/Windows-ionode/
security.selinux=0x73797374656d5f753a6f626a6563745f723a676c7573746572645f627269636b5f743a733000
trusted.gfid=0x97af7a3c5d3a489db81ff912caab5a6d
trusted.glusterfs.dht=0x0000000100000000000000007ffffffe
trusted.glusterfs.quota.00000000-0000-0000-0000-000000000001.contri.1=0x000000000000000000000000000006020000000000000001
trusted.glusterfs.quota.dirty=0x3000
trusted.glusterfs.quota.limit-objects.1=0x00000000000003e8ffffffffffffffff
trusted.glusterfs.quota.size.1=0x000000000000000000000000000006020000000000000001
user.DOSATTRIB=0x3078313000000300030000001100000010000000000000000000000000000000000000000000000000bd3d62e476d2010000000000000000

2]. getfattr: Removing leading '/' from absolute path names
# file: bricks/brick2/ludo_brick1/Windows-ionode/
security.selinux=0x73797374656d5f753a6f626a6563745f723a676c7573746572645f627269636b5f743a733000
trusted.gfid=0x97af7a3c5d3a489db81ff912caab5a6d
trusted.glusterfs.dht=0x0000000100000000000000007ffffffe
trusted.glusterfs.quota.00000000-0000-0000-0000-000000000001.contri.1=0x000000000000000000000000000006020000000000000001
trusted.glusterfs.quota.dirty=0x3000
trusted.glusterfs.quota.limit-objects.1=0x00000000000003e8ffffffffffffffff
trusted.glusterfs.quota.size.1=0x000000000000000000000000000006020000000000000001
user.DOSATTRIB=0x3078313000000300030000001100000010000000000000000000000000000000000000000000000000bd3d62e476d2010000000000000000

3]. getfattr: Removing leading '/' from absolute path names
# file: bricks/brick2/ludo_brick2/Windows-ionode/
security.selinux=0x73797374656d5f753a6f626a6563745f723a676c7573746572645f627269636b5f743a733000
trusted.gfid=0x97af7a3c5d3a489db81ff912caab5a6d
trusted.glusterfs.dht=0x00000001000000007fffffffffffffff
trusted.glusterfs.quota.00000000-0000-0000-0000-000000000001.contri.1=0x000000000000000000000000000005b60000000000000001
trusted.glusterfs.quota.dirty=0x3000
trusted.glusterfs.quota.limit-objects.1=0x00000000000003e8ffffffffffffffff
trusted.glusterfs.quota.size.1=0x000000000000000000000000000005b60000000000000001
user.DOSATTRIB=0x3078313000000300030000001100000010000000000000000000000000000000000000000000000000bd3d62e476d2010000000000000000 

4]. getfattr: Removing leading '/' from absolute path names
# file: bricks/brick2/ludo_brick3/Windows-ionode/
security.selinux=0x73797374656d5f753a6f626a6563745f723a676c7573746572645f627269636b5f743a733000
trusted.gfid=0x97af7a3c5d3a489db81ff912caab5a6d
trusted.glusterfs.dht=0x00000001000000007fffffffffffffff
trusted.glusterfs.quota.00000000-0000-0000-0000-000000000001.contri.1=0x000000000000000000000000000005b60000000000000001
trusted.glusterfs.quota.dirty=0x3000
trusted.glusterfs.quota.limit-objects.1=0x00000000000003e8ffffffffffffffff
trusted.glusterfs.quota.size.1=0x000000000000000000000000000005b60000000000000001
user.DOSATTRIB=0x3078313000000300030000001100000010000000000000000000000000000000000000000000000000bd3d62e476d2010000000000000000

For directory Windows:

1]. getfattr: Removing leading '/' from absolute path names
# file: bricks/brick2/ludo_brick0/Windows/
security.selinux=0x73797374656d5f753a6f626a6563745f723a676c7573746572645f627269636b5f743a733000
trusted.gfid=0x46312c753ab94ea1a562a2e285a7bea1
trusted.glusterfs.dht=0x00000001000000007fffffffffffffff
trusted.glusterfs.quota.00000000-0000-0000-0000-000000000001.contri.1=0xfffffffffffffe00ffffffffffffffff0000000000000001
trusted.glusterfs.quota.dirty=0x3000
trusted.glusterfs.quota.limit-objects.1=0x00000000000003e8ffffffffffffffff
trusted.glusterfs.quota.size.1=0xfffffffffffffe00ffffffffffffffff0000000000000001
user.DOSATTRIB=0x3078313000000300030000001100000010000000000000000000000000000000000000000000000080c6baa2f172d2010000000000000000

2]. getfattr: Removing leading '/' from absolute path names
# file: bricks/brick2/ludo_brick1/Windows/
security.selinux=0x73797374656d5f753a6f626a6563745f723a676c7573746572645f627269636b5f743a733000
trusted.gfid=0x46312c753ab94ea1a562a2e285a7bea1
trusted.glusterfs.dht=0x00000001000000007fffffffffffffff
trusted.glusterfs.quota.00000000-0000-0000-0000-000000000001.contri.1=0x000000000000000000000000000000000000000000000001
trusted.glusterfs.quota.dirty=0x3000
trusted.glusterfs.quota.limit-objects.1=0x00000000000003e8ffffffffffffffff
trusted.glusterfs.quota.size.1=0x000000000000000000000000000000000000000000000001
user.DOSATTRIB=0x3078313000000300030000001100000010000000000000000000000000000000000000000000000080c6baa2f172d2010000000000000000

3]. getfattr: Removing leading '/' from absolute path names
# file: bricks/brick2/ludo_brick2/Windows/
security.selinux=0x73797374656d5f753a6f626a6563745f723a676c7573746572645f627269636b5f743a733000
trusted.gfid=0x46312c753ab94ea1a562a2e285a7bea1
trusted.glusterfs.dht=0x0000000100000000000000007ffffffe
trusted.glusterfs.quota.00000000-0000-0000-0000-000000000001.contri.1=0x000000000000000000000000000000000000000000000001
trusted.glusterfs.quota.dirty=0x3000
trusted.glusterfs.quota.limit-objects.1=0x00000000000003e8ffffffffffffffff
trusted.glusterfs.quota.size.1=0x000000000000000000000000000000000000000000000001
user.DOSATTRIB=0x3078313000000300030000001100000010000000000000000000000000000000000000000000000080c6baa2f172d2010000000000000000

4]. getfattr: Removing leading '/' from absolute path names
# file: bricks/brick2/ludo_brick3/Windows/
security.selinux=0x73797374656d5f753a6f626a6563745f723a676c7573746572645f627269636b5f743a733000
trusted.gfid=0x46312c753ab94ea1a562a2e285a7bea1
trusted.glusterfs.dht=0x0000000100000000000000007ffffffe
trusted.glusterfs.quota.00000000-0000-0000-0000-000000000001.contri.1=0x000000000000000000000000000000000000000000000001
trusted.glusterfs.quota.dirty=0x3000
trusted.glusterfs.quota.limit-objects.1=0x00000000000003e8ffffffffffffffff
trusted.glusterfs.quota.size.1=0x000000000000000000000000000000000000000000000001
user.DOSATTRIB=0x3078313000000300030000001100000010000000000000000000000000000000000000000000000080c6baa2f172d2010000000000000000

Comment 2 Sanoj Unnikrishnan 2017-01-25 10:40:29 UTC
On the Windows-ionode path, the file count is accounted correctly in the xattrs:
0x5b6 = 1462
0x602 = 1538
==========
       3000 (actual number of files)

So is the limit:
limit = 0x3e8 => 1000
Nothing is logged by the brick server process.
Need to check whether there is an issue in the in-core values populated in the inode-ctx.
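
For reference, these xattrs decode mechanically: trusted.glusterfs.quota.size.1 is three big-endian signed 64-bit fields (size, file_count, dir_count) and trusted.glusterfs.quota.limit-objects.1 is two (hard limit, soft limit). A minimal sketch, with the field layout inferred from the dumps above:

#!/usr/bin/env python3
# Decode the quota xattrs pasted above; layout inferred from the dumps.
import struct

def decode_size(hexval):
    # (size, file_count, dir_count), big-endian signed 64-bit each
    return struct.unpack(">qqq", bytes.fromhex(hexval))

def decode_limit(hexval):
    # (hard limit, soft limit), big-endian signed 64-bit each
    return struct.unpack(">qq", bytes.fromhex(hexval))

# trusted.glusterfs.quota.size.1 from the Windows-ionode bricks:
print(decode_size("000000000000000000000000000005b60000000000000001"))
# -> (0, 1462, 1): 1462 files on one set of bricks
print(decode_size("000000000000000000000000000006020000000000000001"))
# -> (0, 1538, 1): 1538 files on the other; 1462 + 1538 = 3000

# trusted.glusterfs.quota.limit-objects.1:
print(decode_limit("00000000000003e8ffffffffffffffff"))
# -> (1000, -1): hard limit 1000 objects, soft limit unset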

Comment 3 Sanoj Unnikrishnan 2017-01-25 12:20:17 UTC
Regarding the incorrect file count (18446744073709551615, i.e. -1) on the /Windows path:
The xattr has the correct value; however, the in-core value for the file count was transiently seen to be -1.
Notice this the second time the breakpoint is hit in the log below.
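
The 18446744073709551615 in the listing is just the unsigned rendering of a 64-bit -1. A quick sketch using the same decoding as above, applied to the bit pattern seen in the ludo_brick0/Windows dump:

import struct

raw = bytes.fromhex("fffffffffffffe00ffffffffffffffff0000000000000001")
# matches the gdb ctx below: size = -512, file_count = -1, dir_count = 1
print(struct.unpack(">qqq", raw))  # signed:   (-512, -1, 1)
print(struct.unpack(">QQQ", raw))  # unsigned: (..., 18446744073709551615, 1)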

Breakpoint 1, quota_check_object_limit (frame=frame@entry=0x7fc970100bbc, ctx=ctx@entry=0x7fc9400186c0, priv=priv@entry=0x7fc960044120, _inode=_inode@entry=0x7fc94a26b7c4, 
    this=this@entry=0x7fc96001edd0, op_errno=op_errno@entry=0x7fc95d340718, just_validated=just_validated@entry=0, local=local@entry=0x7fc9600449a0, 
    skip_check=skip_check@entry=0x7fc95d34071c) at quota.c:1091
1091	{
(gdb) x/16xb _inode->gfid
0x7fc94a26b7cc:	0x46	0x31	0x2c	0x75	0x3a	0xb9	0x4e	0xa1
0x7fc94a26b7d4:	0xa5	0x62	0xa2	0xe2	0x85	0xa7	0xbe	0xa1
(gdb) p *ctx
$1 = {size = 278528, hard_lim = 0, soft_lim = 0, file_count = 544, dir_count = 1, object_hard_lim = 1000, object_soft_lim = 800, buf = {ia_ino = 11917266657764425377, 
    ia_gfid = "F1,u:\271N\241\245b\242Ⅷ\276\241", ia_dev = 64807, ia_type = IA_IFDIR, ia_prot = {suid = 0 '\000', sgid = 0 '\000', sticky = 0 '\000', owner = {read = 1 '\001', 
        write = 1 '\001', exec = 1 '\001'}, group = {read = 1 '\001', write = 0 '\000', exec = 1 '\001'}, other = {read = 1 '\001', write = 0 '\000', exec = 1 '\001'}}, ia_nlink = 2, 
    ia_uid = 0, ia_gid = 0, ia_rdev = 0, ia_size = 6, ia_blksize = 4096, ia_blocks = 1, ia_atime = 1484898529, ia_atime_nsec = 192522594, ia_mtime = 1485249571, ia_mtime_nsec = 135749326, 
    ia_ctime = 1485249571, ia_ctime_nsec = 142749390}, parents = {next = 0x7fc940018768, prev = 0x7fc940018768}, tv = {tv_sec = 1484906525, tv_usec = 554841}, prev_log = {tv_sec = 0, 
    tv_usec = 0}, ancestry_built = _gf_true, lock = {spinlock = 1, mutex = {__data = {__lock = 1, __count = 0, __owner = 0, __nusers = 0, __kind = 0, __spins = 0, __list = {__prev = 0x0, 
          __next = 0x0}}, __size = "\001", '\000' <repeats 38 times>, __align = 1}}}
(gdb) c
Continuing.

Breakpoint 1, quota_check_object_limit (frame=frame@entry=0x7fc970100bbc, ctx=ctx@entry=0x7fc9400186c0, priv=priv@entry=0x7fc960044120, _inode=_inode@entry=0x7fc94a26b7c4, 
    this=this@entry=0x7fc96001edd0, op_errno=op_errno@entry=0x7fc95d3407f8, just_validated=just_validated@entry=1, local=local@entry=0x7fc9600449a0, 
    skip_check=skip_check@entry=0x7fc95d3407fc) at quota.c:1091
1091	{
(gdb) p *ctx
$2 = {size = -512, hard_lim = 0, soft_lim = 0, file_count = -1, dir_count = 1, object_hard_lim = 1000, object_soft_lim = 800, buf = {ia_ino = 11917266657764425377, 
    ia_gfid = "F1,u:\271N\241\245b\242Ⅷ\276\241", ia_dev = 64807, ia_type = IA_IFDIR, ia_prot = {suid = 0 '\000', sgid = 0 '\000', sticky = 0 '\000', owner = {read = 1 '\001', 
        write = 1 '\001', exec = 1 '\001'}, group = {read = 1 '\001', write = 0 '\000', exec = 1 '\001'}, other = {read = 1 '\001', write = 0 '\000', exec = 1 '\001'}}, ia_nlink = 2, 
    ia_uid = 0, ia_gid = 0, ia_rdev = 0, ia_size = 6, ia_blksize = 4096, ia_blocks = 1, ia_atime = 1484898529, ia_atime_nsec = 192522594, ia_mtime = 1485249571, ia_mtime_nsec = 135749326, 
    ia_ctime = 1485249571, ia_ctime_nsec = 142749390}, parents = {next = 0x7fc940018768, prev = 0x7fc940018768}, tv = {tv_sec = 1485343452, tv_usec = 6169}, prev_log = {tv_sec = 0, 
    tv_usec = 0}, ancestry_built = _gf_true, lock = {spinlock = 1, mutex = {__data = {__lock = 1, __count = 0, __owner = 0, __nusers = 0, __kind = 0, __spins = 0, __list = {__prev = 0x0, 
          __next = 0x0}}, __size = "\001", '\000' <repeats 38 times>, __align = 1}}}
(gdb) x/16xb _inode->gfid
0x7fc94a26b7cc:	0x46	0x31	0x2c	0x75	0x3a	0xb9	0x4e	0xa1
0x7fc94a26b7d4:	0xa5	0x62	0xa2	0xe2	0x85	0xa7	0xbe	0xa1
(gdb) c
Continuing.


After a couple of operations (mostly "touch new_file" and "rm"), the in-core values appear to be corrected and the listing is correct as well:

[root@dhcp47-12 ludo]# gluster v quota ludo list-objects
                  Path                   Hard-limit   Soft-limit      Files       Dirs     Available  Soft-limit exceeded? Hard-limit exceeded?
-----------------------------------------------------------------------------------------------------------------------------------------------
/Windows                                      1000       80%(800)          1         1         998              No                   No
/Windows-ionode                               1000       80%(800)       2889         1           0             Yes                  Yes
/ludo                                         1000       80%(800)       3000         1           0             Yes                  Yes

Continuing RCA..

Comment 11 hari gowtham 2018-11-19 08:57:41 UTC
Hi,

Vivek, I'm closing this bug as we are not actively working on quota, and a workaround for the quota accounting bugs is available through https://review.gluster.org/#/c/glusterfs/+/19179/

-Hari.