Bug 1421933 - Quota is not listing proper information and leading to disc full status
Summary: Quota is not listing proper information and leading to disc full status
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: GlusterFS
Classification: Community
Component: quota
Version: mainline
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: urgent
Target Milestone: ---
Assignee: bugs@gluster.org
QA Contact:
URL:
Whiteboard: Accounting
Depends On:
Blocks: 1399066
 
Reported: 2017-02-14 06:03 UTC by Sanoj Unnikrishnan
Modified: 2018-11-20 07:00 UTC
CC: 6 users

Fixed In Version:
Clone Of: 1399066
Environment:
Last Closed: 2018-11-20 07:00:13 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Sanoj Unnikrishnan 2017-02-14 06:03:51 UTC
Description of problem:
On a four-node Samba/CTDB setup, enable quota and set a quota limit. A Windows mount of the share then shows the disk as full and does not allow writing any data, even though the share is completely empty.
"gluster volume quota volname list" shows the used space as 16384.0PB.
Writes fail on a CIFS mount as well, because of the same disk-full condition.

Steps to Reproduce:
1. Enable quota on a samba-ctdb setup
2. Set a quota limit
3. Mount the share on Windows and check its properties
4. Try to write any data to it
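
The steps above correspond roughly to this CLI sequence (a sketch against a live cluster; the volume name samba-official is taken from the output later in this report, and the Samba/CTDB export side is omitted):

```shell
# Enable quota on the volume and set a hard limit on the root directory
gluster volume quota samba-official enable
gluster volume quota samba-official limit-usage / 4GB

# Inspect the configured limits and the accounted usage
gluster volume quota samba-official list
```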

Actual results:
The disk shows as full and no data can be written.

Expected results:
The quota list should show correct usage, and I/O should not be blocked.


localhost:samba-official                     4.0G  4.0G     0 100% /run/gluster/samba-official
\\10.70.43.214\gluster-samba-official        4.0G  4.0G     0 100% /mnt/cifs
10.70.43.214:/samba-official                 4.0G  4.0G     0 100% /mnt/fuse




gluster volume quota samba-official list
                  Path                   Hard-limit  Soft-limit      Used  Available  Soft-limit exceeded? Hard-limit exceeded?
-------------------------------------------------------------------------------------------------------------------------------
/                                          4.0GB     80%(3.2GB) 16384.0PB   4.0GB              No                   No


Moving on, I disabled and re-enabled quota after removing all quota limits.
When I set quota limits again on the root of the share and on a subfolder, the "gluster volume quota VOLNAME list" command showed no data, i.e. NA in all columns of the list (soft limit, hard limit, used, etc.).

After using the setup for about 15 minutes (creating files in the share and doing basic operations), the "gluster volume quota VOLNAME list" command started showing correct data and has been working fine since.

So the quota accounting is getting corrupted somewhere. Sanoj, can you please look into this and explain how it showed 16384 PB of used space in the first place?

--- Additional comment from Sanoj Unnikrishnan on 2016-11-30 00:24:35 EST ---

Since we saw that the initial bytes of the trusted.glusterfs.quota.size xattr on the bricks were 0xFF..., it is evident that the accounting is wrong. The remaining bits, representing the file count and directory count, were correct.
We need to know whether the issue is in the quota crawler (which populates the xattrs initially), in the xattr updates, or in a race between the two.
Since disabling and then re-enabling quota resolved the issue, it looks like the crawler alone may not have caused it.
To RCA further, we need to know:
1) Was there any I/O in progress while quota was being enabled?
2) Were any operations performed between step 1 and step 2?


    No operations were performed between step 1 and step 2. There was just one empty directory present in the share, created well before quota was enabled.



The issue was seen again on a different setup; below are the xattr values:


[root@n2 ~]# getfattr -d -m. -ehex /bricks/brick{1..19}
getfattr: Removing leading '/' from absolute path names
# file: bricks/brick1
trusted.glusterfs.quota.size.1=0xfffffef2a0210c00fffffffffffc7d82fffffffffffe985b

# file: bricks/brick2
trusted.glusterfs.quota.size.1=0xfffffc15845a0600fffffffffff9788ffffffffffffd18d3

# file: bricks/brick3
trusted.glusterfs.quota.size.1=0xfffffd4ee747fc00fffffffffffc2dc9fffffffffffe94f5

# file: bricks/brick4
trusted.glusterfs.quota.size.1=0x00000045889cbc000000000000002b3f000000000000188a

# file: bricks/brick5
trusted.glusterfs.quota.size.1=0x0000000c3b8af800000000000000461d0000000000001856

# file: bricks/brick6
trusted.glusterfs.quota.size.1=0xfffff54adfe90600fffffffffff15e6ffffffffffffa26b3

# file: bricks/brick7
trusted.glusterfs.quota.size.1=0xfffffd94e50cb000fffffffffffc2fa2fffffffffffe96c5

# file: bricks/brick8
trusted.glusterfs.quota.size.1=0x0000000582071a00000000000000155d0000000000001679

# file: bricks/brick9
trusted.glusterfs.quota.size.1=0xfffff54796d28a00fffffffffff060f9fffffffffffa1729

# file: bricks/brick10
trusted.glusterfs.quota.size.1=0xfffffd5dff0bec00fffffffffffca08dfffffffffffea115

# file: bricks/brick11
trusted.glusterfs.quota.size.1=0x000000094a32e6000000000000001c230000000000001784

# file: bricks/brick12
trusted.glusterfs.quota.size.1=0xfffffd620926ca00fffffffffffc38e8fffffffffffe93ee

# file: bricks/brick13
trusted.glusterfs.quota.size.1=0xfffffbad874b2e00fffffffffff8af2bfffffffffffd1e47

# file: bricks/brick14
trusted.glusterfs.quota.size.1=0xfffffd4176578000fffffffffffbf967fffffffffffe9175

# file: bricks/brick15
trusted.glusterfs.quota.size.1=0xfffffb2933058400fffffffffff88dd4fffffffffffd1ba3

# file: bricks/brick16
trusted.glusterfs.quota.size.1=0xffffffe44b89bc0000000000000013970000000000000fa3

# file: bricks/brick17
trusted.glusterfs.quota.size.1=0xffffffeb667bee00000000000000a1440000000000001bd1

# file: bricks/brick18
trusted.glusterfs.quota.size.1=0xfffffff34b2826000000000000000735000000000000123c

# file: bricks/brick19
trusted.glusterfs.quota.size.1=0xfffffae6bf833800fffffffffff8a20cfffffffffffd1e6e
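
The negative accounting is easy to confirm from the dump above. A minimal sketch, assuming the trusted.glusterfs.quota.size.1 payload is three big-endian signed 64-bit counters (bytes used, file count, directory count, which matches the observation in comment 1 that the leading bytes were 0xFF...), relying on bash's signed 64-bit arithmetic:

```shell
# Hex payload of trusted.glusterfs.quota.size.1 from brick1 above,
# split into three 16-hex-digit (64-bit) fields.
val=fffffef2a0210c00fffffffffffc7d82fffffffffffe985b
size=$(( 0x${val:0:16} ))    # bytes used (signed)
files=$(( 0x${val:16:16} ))  # file count (signed)
dirs=$(( 0x${val:32:16} ))   # directory count (signed)
echo "$size $files $dirs"    # all three fields are negative on this brick
```

A small negative size, if later rendered as an unsigned 64-bit byte count, wraps to just under 2^64 bytes, i.e. roughly 16384 PiB, which matches the 16384.0PB reported by "gluster volume quota samba-official list".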

Comment 3 hari gowtham 2018-11-20 07:00:13 UTC
Hi,

Quota is not actively developed. We are closing this bug. If someone is willing to fix it, feel free to reopen it and take it up.

-Hari.

