Bug 1030935

Summary: quota: list command displays different values with <path> mentioned and in whole list of the volume
Product: [Red Hat Storage] Red Hat Gluster Storage
Reporter: Saurabh <saujain>
Component: quota
Assignee: Vijaikumar Mallikarjuna <vmallika>
Status: CLOSED NOTABUG
QA Contact: storage-qa-internal <storage-qa-internal>
Severity: high
Docs Contact:
Priority: medium
Version: 2.1
CC: mzywusko, rhs-bugs, smohan, storage-qa-internal, vagarwal, vbellur, vmallika
Target Milestone: ---
Target Release: ---
Hardware: x86_64
OS: Linux
Whiteboard:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2015-04-20 09:04:45 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description Saurabh 2013-11-15 11:16:24 UTC
Description of problem:
The Used field for the same directory shows different values when the quota list command is run with the directory's path and when it is run without any path.

When the quota list command is executed without a path, it reports quota stats for all configured paths on the volume, whereas when a path is provided, it reports the quota stats for that individual path only.
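For example, the discrepancy can be checked by comparing the same path in both forms of the command. A minimal sketch, using the dist-rep volume and the /qa1 path from this report (the grep pattern is an assumption: it relies on the path appearing at the start of its line in the full listing):

  # per-path query
  gluster volume quota dist-rep list /qa1
  # the same path taken from the full listing
  gluster volume quota dist-rep list | grep '^/qa1 '

With no I/O in progress, the Used column from the two commands is expected to agree.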

I had earlier created 2TB of data via NFS and glusterfs mounts, in different directories.
Self-heal was then running in the background; meanwhile I killed quotad on three different nodes and restarted the processes again, followed by a volume reset and toggling quota-deem-statfs off/on.

While self-heal was still in progress I invoked "rm -rf" on the NFS mount, which removed the data only partially, not completely. Once that finished I did the same on the glusterfs mount as well.

At this point only self-heal is running and there is no I/O from the mount points, yet I still see a difference in the quota stats reported by the two forms of the command.


Version-Release number of selected component (if applicable):
glusterfs-3.4.0.44rhs-1

How reproducible:
seen on one cluster

Steps to Reproduce:
Rather than exact reproduction steps, here is the history of commands that were executed (see the consolidated sketch after this list):

1. collecting quota stats after data creation
  946  gluster volume quota dist-rep list ---> worked fine
  
2. Meanwhile self-heal kicked in as the bricks were started again; some bricks had intentionally been killed while the data was being created.

3. Killed the quotad process on three nodes and, after some time, restarted the quotad processes.

4. started rm -rf on nfs mount

5. Removed the quota limit on the volume root
  
  972  gluster volume quota dist-rep remove / ----> works fine

6. list again 
  973  gluster volume quota dist-rep list  ----> works fine

7. volume reset  
  
  976  gluster volume reset dist-rep 

8. Set quota-deem-statfs back to on, since volume reset turns it off
 1004  gluster volume set dist-rep quota-deem-statfs on
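
A consolidated sketch of the same sequence (the brick kills, the self-heal activity, and the rm -rf on the mounts are not captured here; they happened as described in the steps above):

  gluster volume quota dist-rep list                   # stats after data creation (step 1)
  # steps 2-4: bricks restarted (self-heal running), quotad killed and restarted, rm -rf on the NFS mount
  gluster volume quota dist-rep remove /               # drop the limit on the volume root (step 5)
  gluster volume quota dist-rep list                   # list again (step 6)
  gluster volume reset dist-rep                        # volume reset, turns quota-deem-statfs off (step 7)
  gluster volume set dist-rep quota-deem-statfs on     # turn quota-deem-statfs back on (step 8)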
 

Actual results:
[root@quota5 ~]# 
[root@quota5 ~]# gluster volume quota dist-rep list /qa1
                  Path                   Hard-limit Soft-limit   Used  Available
--------------------------------------------------------------------------------
/qa1                                       1.0TB       80%     208.3GB 815.7GB
[root@quota5 ~]# 
[root@quota5 ~]# gluster volume quota dist-rep list /qa2
                  Path                   Hard-limit Soft-limit   Used  Available
--------------------------------------------------------------------------------
/qa2                                       1.0TB       80%     272.6GB 751.4GB
[root@quota5 ~]# 
[root@quota5 ~]# gluster volume quota dist-rep list
                  Path                   Hard-limit Soft-limit   Used  Available
--------------------------------------------------------------------------------
/qa2                                       1.0TB       80%     472.5GB 551.5GB
/qa1                                       1.0TB       80%     361.3GB 662.7GB
/qa1/nfs-dir                               2.0TB       80%     361.3GB   1.6TB
/qa2/glusterfs-dir                         2.0TB       80%     472.5GB   1.5TB
/dir/dir1/dir2/dir3/dir4                 100.5GB       80%      0Bytes 100.5GB
/dir                                      50.5GB       80%      0Bytes  50.5GB
[root@quota5 ~]# 
[root@quota5 ~]# df -h /var/run/gluster/dist-rep/qa1
Filesystem            Size  Used Avail Use% Mounted on
localhost:dist-rep    5.5T  1.6T  3.9T  29% /var/run/gluster/dist-rep
[root@quota5 ~]# df -h /var/run/gluster/dist-rep/qa2
Filesystem            Size  Used Avail Use% Mounted on
localhost:dist-rep    4.0T  1.3T  2.7T  32% /var/run/gluster/dist-rep
[root@quota5 ~]# 
[root@quota5 ~]# 
[root@quota5 ~]# gluster volume info dist-rep
 
Volume Name: dist-rep
Type: Distributed-Replicate
Volume ID: ecb4b311-dcb8-4929-933e-26c7f9a42d87
Status: Started
Number of Bricks: 6 x 2 = 12
Transport-type: tcp
Bricks:
Brick1: 10.70.35.188:/rhs/brick1/d1r1
Brick2: 10.70.35.108:/rhs/brick1/d1r2
Brick3: 10.70.35.191:/rhs/brick1/d2r1
Brick4: 10.70.35.144:/rhs/brick1/d2r2
Brick5: 10.70.35.188:/rhs/brick1/d3r1
Brick6: 10.70.35.108:/rhs/brick1/d3r2
Brick7: 10.70.35.191:/rhs/brick1/d4r1
Brick8: 10.70.35.144:/rhs/brick1/d4r2
Brick9: 10.70.35.188:/rhs/brick1/d5r1
Brick10: 10.70.35.108:/rhs/brick1/d5r2
Brick11: 10.70.35.191:/rhs/brick1/d6r1
Brick12: 10.70.35.144:/rhs/brick1/d6r2
Options Reconfigured:
features.quota-deem-statfs: on
features.quota: on


Expected results:
The Used values should match in both forms of reporting, since there is no more I/O going on and all the cluster processes are up.

Additional info:
[root@quota6 ~]# gluster volume status dist-rep
Status of volume: dist-rep
Gluster process						Port	Online	Pid
------------------------------------------------------------------------------
Brick 10.70.35.188:/rhs/brick1/d1r1			N/A	Y	11268
Brick 10.70.35.108:/rhs/brick1/d1r2			N/A	Y	18267
Brick 10.70.35.191:/rhs/brick1/d2r1			49152	Y	2468
Brick 10.70.35.144:/rhs/brick1/d2r2			N/A	Y	18102
Brick 10.70.35.188:/rhs/brick1/d3r1			N/A	Y	11274
Brick 10.70.35.108:/rhs/brick1/d3r2			N/A	Y	18291
Brick 10.70.35.191:/rhs/brick1/d4r1			49153	Y	2479
Brick 10.70.35.144:/rhs/brick1/d4r2			N/A	Y	18108
Brick 10.70.35.188:/rhs/brick1/d5r1			N/A	Y	11279
Brick 10.70.35.108:/rhs/brick1/d5r2			N/A	Y	18279
Brick 10.70.35.191:/rhs/brick1/d6r1			49154	Y	2490
Brick 10.70.35.144:/rhs/brick1/d6r2			N/A	Y	18113
NFS Server on localhost					2049	Y	18309
Self-heal Daemon on localhost				N/A	Y	18316
Quota Daemon on localhost				N/A	Y	19883
NFS Server on 10.70.35.144				2049	Y	18142
Self-heal Daemon on 10.70.35.144			N/A	Y	18149
Quota Daemon on 10.70.35.144				N/A	Y	19712
NFS Server on 10.70.35.191				2049	Y	2506
Self-heal Daemon on 10.70.35.191			N/A	Y	2512
Quota Daemon on 10.70.35.191				N/A	Y	16441
NFS Server on 10.70.35.188				2049	Y	11332
Self-heal Daemon on 10.70.35.188			N/A	Y	11339
Quota Daemon on 10.70.35.188				N/A	Y	11346
 
There are no active volume tasks