Bug 1416313
Summary: | [Inode-Quota] Inode quota is not listing proper information and the file creation does not obey the inode limitation | |
---|---|---|---
Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | Vivek Das <vdas>
Component: | quota | Assignee: | hari gowtham <hgowtham>
Status: | CLOSED WONTFIX | QA Contact: | Rahul Hinduja <rhinduja>
Severity: | medium | Docs Contact: |
Priority: | low | |
Version: | rhgs-3.2 | CC: | atumball, rcyriac, rhs-bugs, storage-qa-internal
Target Milestone: | --- | Keywords: | ZStream
Target Release: | --- | |
Hardware: | Unspecified | |
OS: | Unspecified | |
Whiteboard: | Accounting | |
Fixed In Version: | | Doc Type: | If docs needed, set a value
Doc Text: | | Story Points: | ---
Clone Of: | | Environment: |
Last Closed: | 2018-11-19 08:57:41 UTC | Type: | Bug
Regression: | --- | Mount Type: | ---
Documentation: | --- | CRM: |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: |
Cloudforms Team: | --- | Target Upstream Version: |
Embargoed: | | |
Description
Vivek Das
2017-01-25 09:12:21 UTC
On the Windows-ionode path the file count is accounted correctly in the xattr:

```
0x5b6 = 1462
0x602 = 1538
==========
       3000 (actual number of files)
```

and so is the limit: limit = 0x3e8 => 1000. Nothing is logged by the brick server process. Need to check whether there is any issue in the in-core values populated in the inode-ctx.

Regarding the incorrect file count (18446744073709551615 = -1) on the /Windows path: the xattr has the correct value, however the in-core value was transiently seen to be -1 for the file count. Notice this the second time the breakpoint is hit in the log below.

```
Breakpoint 1, quota_check_object_limit (frame=frame@entry=0x7fc970100bbc, ctx=ctx@entry=0x7fc9400186c0,
    priv=priv@entry=0x7fc960044120, _inode=_inode@entry=0x7fc94a26b7c4, this=this@entry=0x7fc96001edd0,
    op_errno=op_errno@entry=0x7fc95d340718, just_validated=just_validated@entry=0,
    local=local@entry=0x7fc9600449a0, skip_check=skip_check@entry=0x7fc95d34071c) at quota.c:1091
1091    {
(gdb) x/16xb _inode->gfid
0x7fc94a26b7cc: 0x46 0x31 0x2c 0x75 0x3a 0xb9 0x4e 0xa1
0x7fc94a26b7d4: 0xa5 0x62 0xa2 0xe2 0x85 0xa7 0xbe 0xa1
(gdb) p *ctx
$1 = {size = 278528, hard_lim = 0, soft_lim = 0, file_count = 544, dir_count = 1, object_hard_lim = 1000,
  object_soft_lim = 800, buf = {ia_ino = 11917266657764425377, ia_gfid = "F1,u:\271N\241\245b\242Ⅷ\276\241",
    ia_dev = 64807, ia_type = IA_IFDIR, ia_prot = {suid = 0 '\000', sgid = 0 '\000', sticky = 0 '\000',
      owner = {read = 1 '\001', write = 1 '\001', exec = 1 '\001'}, group = {read = 1 '\001',
      write = 0 '\000', exec = 1 '\001'}, other = {read = 1 '\001', write = 0 '\000', exec = 1 '\001'}},
    ia_nlink = 2, ia_uid = 0, ia_gid = 0, ia_rdev = 0, ia_size = 6, ia_blksize = 4096, ia_blocks = 1,
    ia_atime = 1484898529, ia_atime_nsec = 192522594, ia_mtime = 1485249571, ia_mtime_nsec = 135749326,
    ia_ctime = 1485249571, ia_ctime_nsec = 142749390}, parents = {next = 0x7fc940018768,
    prev = 0x7fc940018768}, tv = {tv_sec = 1484906525, tv_usec = 554841}, prev_log = {tv_sec = 0,
    tv_usec = 0}, ancestry_built = _gf_true, lock = {spinlock = 1, mutex = {__data = {__lock = 1,
    __count = 0, __owner = 0, __nusers = 0, __kind = 0, __spins = 0, __list = {__prev = 0x0,
    __next = 0x0}}, __size = "\001", '\000' <repeats 38 times>, __align = 1}}}
(gdb) c
Continuing.
```
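As an aside, the 18446744073709551615 shown in the listing is simply this transient in-core -1 reinterpreted as an unsigned 64-bit value when printed. A minimal standalone illustration (plain C, not GlusterFS code):

```c
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
        /* The transient in-core value observed in gdb. */
        int64_t file_count = -1;

        /* Printed as unsigned 64-bit, -1 becomes 2^64 - 1. */
        printf("%" PRIu64 "\n", (uint64_t)file_count);
        /* Output: 18446744073709551615 */
        return 0;
}
```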
```
Breakpoint 1, quota_check_object_limit (frame=frame@entry=0x7fc970100bbc, ctx=ctx@entry=0x7fc9400186c0,
    priv=priv@entry=0x7fc960044120, _inode=_inode@entry=0x7fc94a26b7c4, this=this@entry=0x7fc96001edd0,
    op_errno=op_errno@entry=0x7fc95d3407f8, just_validated=just_validated@entry=1,
    local=local@entry=0x7fc9600449a0, skip_check=skip_check@entry=0x7fc95d3407fc) at quota.c:1091
1091    {
(gdb) p *ctx
$2 = {size = -512, hard_lim = 0, soft_lim = 0, file_count = -1, dir_count = 1, object_hard_lim = 1000,
  object_soft_lim = 800, buf = {ia_ino = 11917266657764425377, ia_gfid = "F1,u:\271N\241\245b\242Ⅷ\276\241",
    ia_dev = 64807, ia_type = IA_IFDIR, ia_prot = {suid = 0 '\000', sgid = 0 '\000', sticky = 0 '\000',
      owner = {read = 1 '\001', write = 1 '\001', exec = 1 '\001'}, group = {read = 1 '\001',
      write = 0 '\000', exec = 1 '\001'}, other = {read = 1 '\001', write = 0 '\000', exec = 1 '\001'}},
    ia_nlink = 2, ia_uid = 0, ia_gid = 0, ia_rdev = 0, ia_size = 6, ia_blksize = 4096, ia_blocks = 1,
    ia_atime = 1484898529, ia_atime_nsec = 192522594, ia_mtime = 1485249571, ia_mtime_nsec = 135749326,
    ia_ctime = 1485249571, ia_ctime_nsec = 142749390}, parents = {next = 0x7fc940018768,
    prev = 0x7fc940018768}, tv = {tv_sec = 1485343452, tv_usec = 6169}, prev_log = {tv_sec = 0,
    tv_usec = 0}, ancestry_built = _gf_true, lock = {spinlock = 1, mutex = {__data = {__lock = 1,
    __count = 0, __owner = 0, __nusers = 0, __kind = 0, __spins = 0, __list = {__prev = 0x0,
    __next = 0x0}}, __size = "\001", '\000' <repeats 38 times>, __align = 1}}}
(gdb) x/16xb _inode->gfid
0x7fc94a26b7cc: 0x46 0x31 0x2c 0x75 0x3a 0xb9 0x4e 0xa1
0x7fc94a26b7d4: 0xa5 0x62 0xa2 0xe2 0x85 0xa7 0xbe 0xa1
(gdb) c
Continuing.
```

After a couple of operations (mostly touch new_file, rm), the in-core values appear to have been corrected and the listing is also correct:

```
[root@dhcp47-12 ludo]# gluster v quota ludo list-objects
                  Path                Hard-limit  Soft-limit    Files    Dirs  Available  Soft-limit exceeded?  Hard-limit exceeded?
-----------------------------------------------------------------------------------------------------------------------------------------------
/Windows                                    1000    80%(800)        1       1        998            No                    No
/Windows-ionode                             1000    80%(800)     2889       1          0            Yes                   Yes
/ludo                                       1000    80%(800)     3000       1          0            Yes                   Yes
```

Continuing RCA..

Hi, Vivek,

I'm closing this bug as we are not actively working on quota, and a workaround for the accounting bugs in quota has been made available through https://review.gluster.org/#/c/glusterfs/+/19179/

-Hari.
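For anyone revisiting this RCA: one way to cross-check the on-disk accounting against the in-core values is to read the quota xattr directly on a brick directory. The sketch below is a hypothetical illustration, not GlusterFS source; it assumes the quota-v2 key trusted.glusterfs.quota.size.1, which packs size, file count, and dir count as three network-byte-order int64s, and both the key name and layout can differ across versions.

```c
/* Hypothetical cross-check: read a brick directory's quota xattr and
 * decode the object counts. Assumes the quota-v2 key
 * "trusted.glusterfs.quota.size.1" holds three big-endian int64s
 * (size, file_count, dir_count); verify the actual key first with
 * `getfattr -d -m . -e hex <brick-dir>` before relying on it. */
#include <endian.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/xattr.h>

int main(int argc, char **argv)
{
        if (argc != 2) {
                fprintf(stderr, "usage: %s <brick-dir>\n", argv[0]);
                return 1;
        }

        uint64_t raw[3] = {0};
        ssize_t n = getxattr(argv[1], "trusted.glusterfs.quota.size.1",
                             raw, sizeof(raw));
        if (n != (ssize_t)sizeof(raw)) {
                perror("getxattr");
                return 1;
        }

        /* The on-disk values are stored in network (big-endian) order. */
        printf("size=%llu files=%llu dirs=%llu\n",
               (unsigned long long)be64toh(raw[0]),
               (unsigned long long)be64toh(raw[1]),
               (unsigned long long)be64toh(raw[2]));
        return 0;
}
```

Comparing the decoded files value per brick with the `gluster v quota ludo list-objects` output would show whether a mismatch lives in the xattr itself or only in the in-core inode-ctx, which is what the gdb session above suggests.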