Bug 1032449 - quota: different values of quota stats on each invocation from different nodes, scenario includes self-heal as well
Keywords:
Status: CLOSED WORKSFORME
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: quota
Version: 2.1
Hardware: x86_64
OS: Linux
Priority: medium
Severity: high
Target Milestone: ---
Target Release: ---
Assignee: krishnan parthasarathi
QA Contact: storage-qa-internal@redhat.com
URL:
Whiteboard:
Depends On:
Blocks: 1020127
 
Reported: 2013-11-20 08:55 UTC by Saurabh
Modified: 2023-09-14 01:54 UTC
CC: 9 users

Fixed In Version:
Doc Type: Known Issue
Doc Text:
When two or more bricks experience downtime and data is written to their replica bricks, invoking the quota list command on that multi-node cluster displays different outputs after the bricks come back online.
Clone Of:
Environment:
Last Closed: 2015-10-09 11:20:29 UTC
Embargoed:



Description Saurabh 2013-11-20 08:55:37 UTC
Description of problem:
The issue is that the quota list command reports different values of quota stats on different nodes of the cluster.

Version-Release number of selected component (if applicable):
glusterfs-3.4.0.44rhs-1

How reproducible:
Seen for more than one directory.

Steps to Reproduce:
1. Create a volume and start it.
2. Enable quota.
3. Mount the volume over NFS.
4. Create a large number of directories, say 1000.
5. On the four-node cluster (nodes 1-4), stop glusterd and kill the brick processes on node 2 and node 4.
6. Set a limit of 10GB on each directory.
7. Start creating data inside these directories until their individual limits are reached.
8. Start glusterd on nodes 2 and 4; this also brings the bricks back online.
9. Take the quota stats on all the nodes using the quota list command (a command sketch follows this list).
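
For reference, a minimal sketch of the above sequence using standard gluster CLI commands. The volume name dist-rep and the directory /57 are taken from the outputs below; the hostnames node1-node4, brick paths, and mount point are placeholders, and a 2x2 distribute-replicate layout is assumed:

# node 1: create and start a distribute-replicate volume, then enable quota
gluster volume create dist-rep replica 2 node1:/bricks/b1 node2:/bricks/b2 node3:/bricks/b3 node4:/bricks/b4
gluster volume start dist-rep
gluster volume quota dist-rep enable

# NFS client: mount the volume and create the directories
mount -t nfs -o vers=3 node1:/dist-rep /mnt/dist-rep
mkdir /mnt/dist-rep/{1..1000}

# nodes 2 and 4: stop glusterd and kill the brick processes
service glusterd stop
pkill glusterfsd

# node 1: set a 10GB limit on every directory
for d in $(seq 1 1000); do gluster volume quota dist-rep limit-usage /$d 10GB; done

# client: write data into the directories until the limits are reached, then
# nodes 2 and 4: bring glusterd (and the bricks) back
service glusterd start

# all nodes: compare the stats
gluster volume quota dist-rep list /57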


Actual results:
Results after step 9:
from node1,
                  Path                   Hard-limit Soft-limit   Used  Available
--------------------------------------------------------------------------------
/57                                       10.0GB       80%       4.5GB   5.5GB

from node2,
[root@quota6 ~]# gluster volume quota dist-rep list /57
                  Path                   Hard-limit Soft-limit   Used  Available
--------------------------------------------------------------------------------
/57                                       10.0GB       80%       5.5GB   4.5GB

from node3,
[root@quota7 ~]# gluster volume quota dist-rep list /57
                  Path                   Hard-limit Soft-limit   Used  Available
--------------------------------------------------------------------------------
/57                                       10.0GB       80%      10.0GB  0Bytes

from node4,
[root@quota8 ~]# gluster volume quota dist-rep list /57
                  Path                   Hard-limit Soft-limit   Used  Available
--------------------------------------------------------------------------------
/57                                       10.0GB       80%      0Bytes  10.0GB


Also, for some directories the information differs on the same node across invocations of the quota list command.


Expected results:
At any given time, the information for a given directory should be the same on all nodes.
Also, each invocation of the quota list command should return the same result.

Additional info:
After invoking "find . | xargs stat" on the NFS mount point almost an hour after step 9 of the "Steps to Reproduce" section, the reported values were found to be correct.

Presently, this issue is suspected to be related to self-heal taking time.
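
If self-heal lag is indeed the cause, a rough way to watch its progress (standard gluster CLI; volume name and mount point as assumed in the sketch above) would be:

# list entries still pending heal on each replica pair
gluster volume heal dist-rep info

# or trigger a crawl from the NFS mount, as was done above
find /mnt/dist-rep | xargs stat > /dev/null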

Comment 2 krishnan parthasarathi 2013-11-22 04:43:08 UTC
Could you provide the output of "gluster volume info VOLNAME"?

Comment 3 Pavithra 2013-11-25 09:48:28 UTC
After a conversation with KP and Pranith, it was decided to document this bug as a known issue in the Big Bend Update 1 Release Notes. Here is the link:

http://documentation-devel.engineering.redhat.com/docs/en-US/Red_Hat_Storage/2.1/html/2.1_Update_1_Release_Notes/chap-Documentation-2.1_Update_1_Release_Notes-Known_Issues.html

Comment 4 Manikandan 2015-10-09 11:20:29 UTC
Not able to recreate the problem in 3.7.4, so closing the bug.

Comment 6 Red Hat Bugzilla 2023-09-14 01:54:06 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 1000 days

