Bug 1032449 - quota: different values of quota stats on each invocation from different nodes, scenario includes self-heal as well [NEEDINFO]
Status: CLOSED WORKSFORME
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: quota
Version: 2.1
Hardware: x86_64 Linux
Priority: medium
Severity: high
Target Milestone: ---
Target Release: ---
Assigned To: krishnan parthasarathi
QA Contact: storage-qa-internal@redhat.com
Depends On:
Blocks: 1020127
 
Reported: 2013-11-20 03:55 EST by Saurabh
Modified: 2016-09-17 08:37 EDT
CC List: 9 users

See Also:
Fixed In Version:
Doc Type: Known Issue
Doc Text:
When two or more bricks experience downtime and data is written to their replica bricks, invoking the quota list command on that multi-node cluster displays different outputs after the bricks are back online.
Story Points: ---
Clone Of:
Environment:
Last Closed: 2015-10-09 07:20:29 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
kparthas: needinfo? (saujain)


Attachments: None
Description Saurabh 2013-11-20 03:55:37 EST
Description of problem:
The quota list command returns different values of quota statistics on different nodes of the cluster.

Version-Release number of selected component (if applicable):
glusterfs-3.4.0.44rhs-1

How reproducible:
seen for more than one directory

Steps to Reproduce:
1. create a volume, start it
2. enable quota
3. mount it over nfs
4. Create a large number of directories, say 1000.
5. On a four-node cluster (nodes 1-4), stop glusterd and kill the brick processes on node 2 and node 4.
6. Set a quota limit of 10GB on each directory.
7. Create data inside these directories until their individual limits are reached.
8. Start glusterd on nodes 2 and 4; this also brings the bricks back online.
9. Check the quota stats using the quota list command on all the nodes (see the command sketch below).
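
A minimal shell sketch of the steps above, assuming a 2x2 distributed-replicate volume named dist-rep as in the output below; the hostnames (quota5-quota8), brick paths, directory count, and data sizes are illustrative, not taken from the original setup:

# Steps 1-4 (from node 1): create and start the volume, enable quota,
# mount it over NFS, and create the test directories.
gluster volume create dist-rep replica 2 \
    quota5:/rhs/brick1/d1 quota6:/rhs/brick1/d2 \
    quota7:/rhs/brick1/d3 quota8:/rhs/brick1/d4
gluster volume start dist-rep
gluster volume quota dist-rep enable
mount -t nfs -o vers=3 quota5:/dist-rep /mnt/dist-rep
for i in $(seq 1 1000); do mkdir /mnt/dist-rep/$i; done

# Step 5 (on node 2 and node 4): stop glusterd and kill the brick processes.
service glusterd stop
pkill glusterfsd

# Steps 6-7 (from node 1): set a 10GB limit on each directory, then write
# data from the mount point until the limits are hit.
for i in $(seq 1 1000); do gluster volume quota dist-rep limit-usage /$i 10GB; done
dd if=/dev/zero of=/mnt/dist-rep/57/file1 bs=1M count=10240

# Step 8 (on node 2 and node 4): starting glusterd brings the bricks back too.
service glusterd start

# Step 9 (on every node): list the quota stats and compare.
gluster volume quota dist-rep list /57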


Actual results:
Results after step 9:
from node1,
                  Path                   Hard-limit Soft-limit   Used  Available
--------------------------------------------------------------------------------
/57                                       10.0GB       80%       4.5GB   5.5GB

from node2,
[root@quota6 ~]# gluster volume quota dist-rep list /57
                  Path                   Hard-limit Soft-limit   Used  Available
--------------------------------------------------------------------------------
/57                                       10.0GB       80%       5.5GB   4.5GB

from node3,
[root@quota7 ~]# gluster volume quota dist-rep list /57
                  Path                   Hard-limit Soft-limit   Used  Available
--------------------------------------------------------------------------------
/57                                       10.0GB       80%      10.0GB  0Bytes

from node4,
[root@quota8 ~]# gluster volume quota dist-rep list /57
                  Path                   Hard-limit Soft-limit   Used  Available
--------------------------------------------------------------------------------
/57                                       10.0GB       80%      0Bytes  10.0GB


Also, for some directories the information differs on the same node across invocations of the quota list command.


Expected results:
At any given time, the information for a given directory should be the same on all nodes.
Also, repeated invocations of the quota list command should return the same result.

Additional info:
After invoking "find . | xargs stat" on the NFS mount point, roughly an hour after step 9 of the "Steps to Reproduce" section, the results were found to be correct.

At present, this issue is assumed to be related to self-heal taking time.
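
For reference, a sketch of that stat crawl, run from the NFS mount point (the /mnt/dist-rep path is the illustrative one from the sketch above); stat-ing every entry issues lookups that give self-heal a chance to reconcile the replicas and the quota accounting:

cd /mnt/dist-rep
find . | xargs stat > /dev/null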
Comment 2 krishnan parthasarathi 2013-11-21 23:43:08 EST
Could you provide the output of "gluster volume info VOLNAME"?
Comment 3 Pavithra 2013-11-25 04:48:28 EST
After a conversation with KP and Pranith, it was decided that this bug would be documented as a known issue in the Big Bend Update 1 Release Notes. Here is the link:

http://documentation-devel.engineering.redhat.com/docs/en-US/Red_Hat_Storage/2.1/html/2.1_Update_1_Release_Notes/chap-Documentation-2.1_Update_1_Release_Notes-Known_Issues.html
Comment 4 Manikandan 2015-10-09 07:20:29 EDT
Unable to recreate the problem in 3.7.4, so closing the bug.
