Description of problem:
Different nodes of the cluster report different values for the same quota stats from the quota list command.

Version-Release number of selected component (if applicable):
glusterfs-3.4.0.44rhs-1

How reproducible:
Seen for more than one directory.

Steps to Reproduce:
1. Create a volume and start it.
2. Enable quota.
3. Mount the volume over NFS.
4. Create a few thousand directories, say 1000.
5. On a four-node cluster (nodes 1-4), stop glusterd and kill the brick processes on node 2 and node 4.
6. Set a limit of 10GB on each directory.
7. Create data inside these directories until each directory's individual limit is reached.
8. Start glusterd on nodes 2 and 4; this also brings the bricks back online.
9. Take the quota stats on all the nodes using the quota list command.

(A scripted sketch of these steps follows at the end of this comment.)

Actual results:
After step 9, the same directory shows different usage on each node.

From node1:
Path                   Hard-limit Soft-limit     Used  Available
--------------------------------------------------------------------------------
/57                        10.0GB        80%    4.5GB      5.5GB

From node2:
[root@quota6 ~]# gluster volume quota dist-rep list /57
Path                   Hard-limit Soft-limit     Used  Available
--------------------------------------------------------------------------------
/57                        10.0GB        80%    5.5GB      4.5GB

From node3:
[root@quota7 ~]# gluster volume quota dist-rep list /57
Path                   Hard-limit Soft-limit     Used  Available
--------------------------------------------------------------------------------
/57                        10.0GB        80%   10.0GB     0Bytes

From node4:
[root@quota8 ~]# gluster volume quota dist-rep list /57
Path                   Hard-limit Soft-limit     Used  Available
--------------------------------------------------------------------------------
/57                        10.0GB        80%   0Bytes     10.0GB

In addition, for some directories the reported information differs on the same node across successive invocations of the quota list command.

Expected results:
At any given time, the quota information for a given directory should be the same on all nodes. Repeated invocations of the quota list command should also return the same result, not different ones.

Additional info:
Running "find . | xargs stat" on the NFS mount point roughly an hour after step 9 brought the stats back into agreement on all nodes (see the second sketch below). For now, this issue is presumed to be related to self-heal taking time.
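For reference, here is a minimal shell sketch of the steps above. The hostnames node1-node4, the brick path /rhs/brick1, and the mount point /mnt/dist-rep are assumptions for illustration; the volume name dist-rep and the path /57 are taken from the output above.

    # Repro sketch (run from node1 unless noted otherwise).
    VOL=dist-rep
    MNT=/mnt/$VOL

    # Steps 1-2: create and start a 2x2 distributed-replicate volume, enable quota.
    gluster volume create $VOL replica 2 \
        node1:/rhs/brick1/$VOL node2:/rhs/brick1/$VOL \
        node3:/rhs/brick1/$VOL node4:/rhs/brick1/$VOL
    gluster volume start $VOL
    gluster volume quota $VOL enable

    # Steps 3-4: mount over NFS and create 1000 directories.
    mkdir -p $MNT
    mount -t nfs -o vers=3 node1:/$VOL $MNT
    for i in $(seq 1 1000); do mkdir $MNT/$i; done

    # Step 5: on node2 and node4, stop glusterd and kill the brick processes:
    #     service glusterd stop
    #     pkill glusterfsd

    # Step 6: set a 10GB limit on every directory.
    for i in $(seq 1 1000); do
        gluster volume quota $VOL limit-usage /$i 10GB
    done

    # Step 7: write data into each directory up to its limit (e.g. with dd).
    # Step 8: on node2 and node4, run: service glusterd start

    # Step 9: compare the stats each node reports for the same path.
    for h in node1 node2 node3 node4; do
        echo "== $h =="
        ssh $h gluster volume quota $VOL list /57
    done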
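The workaround noted in "Additional info" amounts to forcing a lookup on every file so that self-heal can catch up and the quota accounting converges. A sketch, assuming the mount point used above:

    cd /mnt/dist-rep
    find . | xargs stat > /dev/null

    # Once this completes, every node should report the same values:
    #     gluster volume quota dist-rep list /57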
Could you provide the output of "gluster volume info VOLNAME"?
After a conversation with KP and Pranith, we decided to document this bug as a known issue in the Big Bend Update 1 Release Notes. Here is the link: http://documentation-devel.engineering.redhat.com/docs/en-US/Red_Hat_Storage/2.1/html/2.1_Update_1_Release_Notes/chap-Documentation-2.1_Update_1_Release_Notes-Known_Issues.html
Unable to recreate the problem in 3.7.4, so closing the bug.
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 1000 days