Bug 1010248 - quota: Used field shows data in "PB" after deleting data from volume
Status: CLOSED ERRATA
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: glusterd
Version: 2.1
Hardware: x86_64 Linux
Priority: medium, Severity: high
Assigned To: Raghavendra G
QA Contact: Ben Turner
Keywords: ZStream
Reported: 2013-09-20 07:20 EDT by Saurabh
Modified: 2016-01-19 01:13 EST (History)
9 users

See Also:
Fixed In Version: glusterfs-3.4.0.40rhs
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2013-11-27 10:40:21 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments
sosreport (16.35 MB, application/x-xz)
2013-09-20 07:46 EDT, Saurabh

Description Saurabh 2013-09-20 07:20:11 EDT
Description of problem:
The volume type is 2x2. I NFS-mounted the volume and created some data on it. After that I enabled quota and set a limit of 1GB. I then deleted the data on the NFS mount point using "rm -rf" and ran a quota list for the volume; the Used field shows data in PB.


Version-Release number of selected component (if applicable):
glusterfs-3.4.0.33rhs-1.el6rhs.x86_64

How reproducible:
Executed the steps below and observed the result.

Steps to Reproduce:
1. NFS-mounted a volume and created some data.
2. Enabled quota and set a limit of 1GB.
3. Deleted the data on the NFS mount point using "rm -rf".
4. Executed the quota list command.
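The steps above can be sketched as a shell session. The volume name "newvol22" and server "nfs1" appear in the transcript in this report; the mount point /mnt/newvol22 is an assumption for illustration.

```shell
#!/bin/sh
# Sketch of the reproduction steps. Volume "newvol22" and server "nfs1"
# appear in this report; the mount point /mnt/newvol22 is an assumption.
VOL=newvol22
MNT=/mnt/$VOL

# Only run the gluster commands where the CLI is actually installed.
if command -v gluster >/dev/null 2>&1; then
    mount -t nfs -o vers=3 nfs1:/$VOL "$MNT"       # 1. NFS-mount the volume
    cp -r /usr/share "$MNT/data"                   #    and create some data
    gluster volume quota $VOL enable               # 2. enable quota
    gluster volume quota $VOL limit-usage / 1GB    #    and set a 1GB limit on /
    rm -rf "$MNT/data"                             # 3. delete the data over NFS
    gluster volume quota $VOL list                 # 4. Used should read 0Bytes, not 16384.0PB
else
    echo "gluster CLI not found; skipping"
fi
```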


Actual results:
[root@nfs1 bricks]# gluster volume quota newvol22 list
                  Path                   Hard-limit Soft-limit   Used  Available
--------------------------------------------------------------------------------
/                                          1.0GB       80%   16384.0PB   1.0GB

Expected results:
When no data is present, the Used field should not display a value in PB.

Additional info:
Comment 2 Saurabh 2013-09-20 07:46:10 EDT
Created attachment 800455 [details]
sosreport
Comment 3 Raghavendra G 2013-09-30 04:33:28 EDT
The bug is not reproducible on 3.4.0-rhs-33.

[root@vm1 nfs]# gluster volume quota dist-repl disable
Disabling quota will delete all the quota configuration. Do you want to continue? (y/n) y
volume quota : success
[root@vm1 nfs]# cp -rf /usr .
^C
[root@vm1 nfs]# gluster volume quota dist-repl enable
volume quota : success
[root@vm1 nfs]# gluster volume quota dist-repl limit-usage / 1GB
volume quota : success
[root@vm1 nfs]# gluster volume quota dist-repl list
                  Path                   Hard-limit Soft-limit   Used  Available
--------------------------------------------------------------------------------
/                                          1.0GB       80%     545.9MB 478.1MB
[root@vm1 nfs]# rm -rf *
[root@vm1 nfs]# gluster volume quota dist-repl list 
                  Path                   Hard-limit Soft-limit   Used  Available
--------------------------------------------------------------------------------
/                                          1.0GB       80%      0Bytes   1.0GB

Can you please confirm that the bug is not reproducible?
Comment 4 Sachidananda Urs 2013-10-07 06:55:21 EDT
To reproduce this bug perform the following steps:

1. Set up quota.
2. Set the soft-limit to 80%.
3. Do some intense I/O on the mount point, adding and deleting data
   (e.g. run 10-20 instances of dbench).
4. quota list alternately prints absurd values.

[root@boo ~]# gluster volume quota pure_gold list
                  Path                   Hard-limit Soft-limit   Used  Available
--------------------------------------------------------------------------------
/                                         30.0GB       80%      77.9GB  0Bytes
[root@boo ~]# gluster volume quota pure_gold list
                  Path                   Hard-limit Soft-limit   Used  Available
--------------------------------------------------------------------------------
/                                         30.0GB       80%   16384.0PB  31.2GB
[root@boo ~]# gluster volume quota pure_gold list
                  Path                   Hard-limit Soft-limit   Used  Available
--------------------------------------------------------------------------------
/                                         30.0GB       80%      77.9GB  0Bytes
[root@boo ~]# gluster volume quota pure_gold list
                  Path                   Hard-limit Soft-limit   Used  Available
--------------------------------------------------------------------------------
/                                         30.0GB       80%   16384.0PB  31.2GB
[root@boo ~]#
Comment 5 Sachidananda Urs 2013-10-07 06:55:54 EDT
dbench cli:

dbench -t 300 -c /mnt/fuse//11408/dbench/client.txt -s -S 10
Comment 6 Raghavendra G 2013-10-16 06:51:09 EDT
I tried reproducing the test case with the steps given. However, I am not able to reproduce the issue.

[root@vm2 mnt]#  /opt/qa/tools/dbench -t  300 -c /opt/qa/tools/client.txt -s -S 10;


 Operation      Count    AvgLat    MaxLat
 ----------------------------------------
 NTCreateX      26473    37.432  1258.044
 Close          19093     1.743  1165.775
 Rename          1080    98.814   221.064
 Unlink          5587    19.179   238.631
 Qpathinfo      23772    19.884  1205.118
 Qfileinfo       3964     0.568    17.505
 Qfsinfo         4422     7.237   523.165
 Sfileinfo       2010     5.429    26.634
 Find            9104   112.855  1269.385
 WriteX         12582    11.840  1278.146
 ReadX          40123     1.391    44.671
 LockX             80     4.217    13.789
 UnlockX           80     4.001    10.956
 Flush           1752     6.362   181.152

Throughput 2.71531 MB/sec (sync open) (sync dirs)  10 clients  10 procs  max_latency=1278.152 ms
[root@vm2 mnt]# gluster volume quota dist list
                  Path                   Hard-limit Soft-limit   Used  Available
--------------------------------------------------------------------------------
/                                         10.0GB       80%      0Bytes  10.0GB


Can you please confirm that the bug is not reproducible?
Comment 7 Lalatendu Mohanty 2013-10-17 05:40:35 EDT
I saw this issue on "glusterfs-server-3.4.0.35rhs-1.el6rhs.x86_64" 


[root@bvt-rhs1 ~]# gluster v quota dis-rep-1 list
                  Path                   Hard-limit Soft-limit   Used  Available
--------------------------------------------------------------------------------
/dis-rep-1                               100.0MB       80%   16384.0PB 100.0MB
/fuse                                      1.0GB       80%      49.6MB 974.4MB
/fuse/subdir1                            500.0MB       80%      0Bytes 500.0MB

Steps to reproduce:

1. Create a directory on the gluster mount point.
2. Set a quota on it (e.g. 1GB).
3. Run file I/O until you get a quota-exceeded error.

Steps 4 and 5 should be done in parallel:

4. Delete the contents of the directory.
5. Set the quota on the directory to a lower value (e.g. 100MB).
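Running steps 4 and 5 in parallel can be sketched as below. The volume name dis-rep-1 appears elsewhere in this comment thread; the mount point /mnt/fuse and directory name /dir1 are assumptions for illustration.

```shell
#!/bin/sh
# Sketch of running steps 4 and 5 in parallel. Volume "dis-rep-1" is
# from this report; mount point /mnt/fuse and directory /dir1 are
# assumptions for illustration.
VOL=dis-rep-1
DIR=/dir1

if command -v gluster >/dev/null 2>&1; then
    rm -rf "/mnt/fuse$DIR"/* &                        # 4. delete the directory contents...
    gluster volume quota $VOL limit-usage $DIR 100MB  # 5. ...while lowering its quota
    wait                                              # let the background delete finish
    gluster volume quota $VOL list                    # Used may now show 16384.0PB
else
    echo "gluster CLI not found; skipping"
fi
```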

Sosreports are available at http://rhsqe-repo.lab.eng.blr.redhat.com/sosreports/1010248/ since I am not able to attach them to Bugzilla; one of the reports is larger than 20MB.
Comment 8 Gowrishankar Rajaiyan 2013-10-17 05:42:54 EDT
Per bug triage 10/17.
Comment 9 Lalatendu Mohanty 2013-10-17 05:59:34 EDT
As a workaround I tried disabling and re-enabling quota on the volume, but even after the disable/enable cycle, setting a quota on the directory again shows the Used field in PB. See the flow of the exact commands:

[root@bvt-rhs1 ~]# gluster v quota dis-rep-1 list
                  Path                   Hard-limit Soft-limit   Used  Available
--------------------------------------------------------------------------------
/dis-rep-1                               100.0MB       80%   16384.0PB 100.0MB
/fuse                                      1.0GB       80%      49.6MB 974.4MB
/fuse/subdir1                            500.0MB       80%      0Bytes 500.0MB
 
[root@bvt-rhs1 ~]# gluster v quota dis-rep-1 disable
Disabling quota will delete all the quota configuration. Do you want to continue? (y/n) y
volume quota : success

[root@bvt-rhs1 ~]# gluster v quota dis-rep-1 list
quota command failed : Quota is not enabled on volume dis-rep-1

[root@bvt-rhs1 ~]# gluster v quota dis-rep-1 enable
volume quota : success

[root@bvt-rhs1 ~]# gluster v quota dis-rep-1 list
quota: No quota configured on volume dis-rep-1

[root@bvt-rhs1 ~]# gluster v quota dis-rep-1 limit-usage /dis-rep-1 100MB
volume quota : success

[root@bvt-rhs1 ~]# gluster v quota dis-rep-1 list
                  Path                   Hard-limit Soft-limit   Used  Available
--------------------------------------------------------------------------------
/dis-rep-1                               100.0MB       80%   16384.0PB 100.0MB
Comment 10 Raghavendra G 2013-10-24 03:26:26 EDT
The code that cleans up quota xattrs has not gone into the build you are using; hence disabling and enabling quota does not make the issue go away.

The patch itself can be found at:
https://code.engineering.redhat.com/gerrit/#/c/14463/

As for the reproducibility of the issue, I tried reproducing it by running dbench, but was unable to hit the issue. It may well be a race condition that cannot be reproduced consistently. Is the bug consistently reproducible on your setup? A test case that consistently reproduces this issue would be of great help.

regards,
Raghavendra.
Comment 11 Raghavendra G 2013-10-24 03:28:21 EDT
Sorry, I missed Lala's comment on reproducibility. I will get back after trying out the steps given.
Comment 12 Gowrishankar Rajaiyan 2013-11-06 07:29:48 EST
[root@server1 ~]# gluster vol quota shanks-quota list 
                  Path                   Hard-limit Soft-limit   Used  Available
--------------------------------------------------------------------------------
/                                        100.0GB       80%   16384.0PB 100.0GB
/shanks/Music                             10.0GB       80%      0Bytes  10.0GB
[root@server1 ~]# 


Seeing this with glusterfs-server-3.4.0.38rhs-1.el6rhs.x86_64
Comment 13 Gowrishankar Rajaiyan 2013-11-06 12:37:27 EST
And again: 

[root@server1 ~]# gluster vol quota shanks-quota list 
                  Path                   Hard-limit Soft-limit   Used  Available
--------------------------------------------------------------------------------
/                                        200.0GB       80%   16384.0PB 203.0GB
/shanks/Downloads                         50.0GB       80%   16384.0PB  53.0GB
[root@server1 ~]# 

Version: glusterfs-server-3.4.0.39rhs-1.el6rhs.x86_64
Comment 14 Raghavendra G 2013-11-06 21:55:37 EST
3.4.0.38 is more susceptible to this behaviour, since a regression was introduced in rename handling. The patch that fixes this particular regression can be found at:

https://code.engineering.redhat.com/gerrit/#/c/15125/
Comment 15 Gowrishankar Rajaiyan 2013-11-06 23:54:28 EST
I am still seeing this with version glusterfs-server-3.4.0.39rhs-1.el6rhs.x86_64
Comment 16 Gowrishankar Rajaiyan 2013-11-07 00:27:29 EST
Steps I followed to hit this:

1. Existing data in a directory (/home/shanks/Downloads in this case).
2. Enabled quota.
3. Set a limit on / (200G in this case).
4. Set a limit on /shanks/Downloads (50G in this case).
5. rm -rf the data in /home/shanks/Downloads at the client.
6. quota list at the server.

Version: glusterfs-server-3.4.0.39rhs-1.el6rhs.x86_64
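The steps above can be sketched as a shell session. The volume name shanks-quota and the /shanks/Downloads path are from this comment; the client mount point /mnt/shanks is an assumption.

```shell
#!/bin/sh
# Sketch of comment 16's steps. Volume "shanks-quota" and the
# /shanks/Downloads path are from this report; the client mount
# point /mnt/shanks is an assumption.
VOL=shanks-quota

if command -v gluster >/dev/null 2>&1; then
    # 1. /shanks/Downloads on the volume already contains data.
    gluster volume quota $VOL enable                              # 2. enable quota
    gluster volume quota $VOL limit-usage / 200GB                 # 3. limit on /
    gluster volume quota $VOL limit-usage /shanks/Downloads 50GB  # 4. limit on the subdir
    rm -rf /mnt/shanks/shanks/Downloads/*                         # 5. delete at the client
    gluster volume quota $VOL list                                # 6. Used may show 16384.0PB
else
    echo "gluster CLI not found; skipping"
fi
```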
Comment 17 Raghavendra G 2013-11-07 00:49:48 EST
(In reply to Gowrishankar Rajaiyan from comment #15)
Shanks,

> I am still seeing this with version
> glusterfs-server-3.4.0.39rhs-1.el6rhs.x86_64

The fix has not gone into glusterfs-server-3.4.0.39rhs yet.
Comment 18 Ben Turner 2013-11-18 10:44:17 EST
Verified on glusterfs-3.4.0.44rhs-1.el6rhs.x86_64.rpm.
Comment 19 errata-xmlrpc 2013-11-27 10:40:21 EST
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2013-1769.html
