Bug 1178130 - quota: quota list displays double the size of previous value, post heal completion.
Summary: quota: quota list displays double the size of previous value, post heal completion
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: quota
Version: rhgs-3.0
Hardware: x86_64
OS: Linux
Priority: high
Severity: high
Target Milestone: ---
Target Release: RHGS 3.1.0
Assignee: Vijaikumar Mallikarjuna
QA Contact: Anil Shah
URL:
Whiteboard:
Depends On:
Blocks: 1044344 1202842 1223636 1232572 1233117
 
Reported: 2015-01-02 13:58 UTC by Saurabh
Modified: 2016-09-17 12:39 UTC (History)
CC List: 10 users

Fixed In Version: glusterfs-3.7.1-5
Doc Type: Bug Fix
Doc Text:
Clone Of:
: 1232572 (view as bug list)
Environment:
Last Closed: 2015-07-29 04:37:38 UTC
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHSA-2015:1495 0 normal SHIPPED_LIVE Important: Red Hat Gluster Storage 3.1 update 2015-07-29 08:26:26 UTC

Description Saurabh 2015-01-02 13:58:47 UTC
Description of problem:
Created a 6GB file inside a directory that has a quota limit set.
I renamed the file while some of the bricks were down.
Once the rename was done, I brought the bricks back up and self-heal started.
After self-heal completed, the consumed quota size reported is double the originally consumed size.

This should not happen, as this is just a rename operation.

Version-Release number of selected component (if applicable):
glusterfs-3.6.0.40-1.el6rhs.x86_64
The issue is also seen on the RHS 2.1u5 build.

How reproducible:
always

Steps to Reproduce:
Setup: four RHS nodes [1, 2, 3, 4]; node1 and node2 form one replica pair, node3 and node4 form the other (a scripted sketch of these steps follows the list).
1. Create a 6x2 distributed-replicate volume and start it.
2. Enable quota and set a limit on "/".
3. Mount the volume over NFS.
4. Create a directory, say "qa1dir".
5. chown the directory to a non-root user, say "qa1".
6. Log in as qa1 on the client.
7. On the server, set a limit of 10GB on qa1dir.
8. On the client, create a 6GB file inside qa1dir, say "6GBfile".
9. Once file creation is done, bring the bricks down on node2 and node4.
10. While those bricks are down, rename the file.
11. Bring the bricks back using the command
    gluster volume start <volname> force
12. Wait for heal to finish.
13. Execute gluster volume quota <volname> list
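
For convenience, a scripted sketch of the steps above. This is a reconstruction, not the exact commands that were run; the node names node1..node4, the mount point /mnt/vol0 and the dd sizes are illustrative, and the brick layout is copied from the volume info further below.

# on one RHS node: create and start a 6x2 volume, enable quota, set limits
gluster volume create vol0 replica 2 \
    node1:/rhs/brick1/d1r1 node2:/rhs/brick1/d1r2 \
    node3:/rhs/brick1/d2r1 node4:/rhs/brick1/d2r2 \
    node1:/rhs/brick1/d3r1 node2:/rhs/brick1/d3r2 \
    node3:/rhs/brick1/d4r1 node4:/rhs/brick1/d4r2 \
    node1:/rhs/brick1/d5r1 node2:/rhs/brick1/d5r2 \
    node3:/rhs/brick1/d6r1 node4:/rhs/brick1/d6r2
gluster volume start vol0
gluster volume quota vol0 enable
gluster volume quota vol0 limit-usage / 10GB

# on the client: mount over NFS, create the directory, then write a 6GB file as qa1
mount -t nfs -o vers=3 node1:/vol0 /mnt/vol0
mkdir /mnt/vol0/qa1dir && chown qa1:qa1 /mnt/vol0/qa1dir
gluster volume quota vol0 limit-usage /qa1dir 10GB     # run on a server node
su - qa1 -c 'dd if=/dev/zero of=/mnt/vol0/qa1dir/6GBfile bs=1M count=6144'

# on node2 and node4: bring the brick processes down
gluster volume status vol0        # note the brick PIDs on node2 and node4
kill -9 <brick-pid>               # repeat for each brick PID on node2 and node4

# on the client, while those bricks are down: rename the file
su - qa1 -c 'mv /mnt/vol0/qa1dir/6GBfile /mnt/vol0/qa1dir/6GBfile-rename'

# on a server node: restart the bricks, wait for heal, then check quota
gluster volume start vol0 force
gluster volume heal vol0 info     # repeat until no entries are pending
gluster volume quota vol0 list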

Actual results:
Result of step 13:
[root@nfs1 ~]# gluster volume quota vol0 list /dir2
                  Path                   Hard-limit Soft-limit   Used  Available  Soft-limit exceeded?  Hard-limit exceeded?
---------------------------------------------------------------------------------------------------------------------------
/dir2                                     10.0GB       80%      12.0GB  0Bytes             Yes                  Yes

whereas inside the directory there is just one file of approximately 6GB:
[root@rhsauto015 dir2]# ls -l
total 6291456
-rw-r--r--. 1 root root 6442450944 Jan  2  2015 6GBfile-rename
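
For a quick cross-check of the quota accounting against the actual on-disk usage, the same directory can be measured from the client and queried on a server (the mount point /mnt/vol0 is illustrative):

# on the client: actual space consumed under the directory
du -sh /mnt/vol0/dir2                     # ~6.0G expected

# on a server node: the quota translator's view of the same directory
gluster volume quota vol0 list /dir2      # reports 12.0GB here, i.e. double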

[root@nfs1 ~]# gluster volume info vol0
 
Volume Name: vol0
Type: Distributed-Replicate
Volume ID: d7f65230-efac-409f-9495-9479f986a27c
Status: Started
Snap Volume: no
Number of Bricks: 6 x 2 = 12
Transport-type: tcp
Bricks:
Brick1: 10.70.37.74:/rhs/brick1/d1r1
Brick2: 10.70.37.89:/rhs/brick1/d1r2
Brick3: 10.70.37.91:/rhs/brick1/d2r1
Brick4: 10.70.37.133:/rhs/brick1/d2r2
Brick5: 10.70.37.74:/rhs/brick1/d3r1
Brick6: 10.70.37.89:/rhs/brick1/d3r2
Brick7: 10.70.37.91:/rhs/brick1/d4r1
Brick8: 10.70.37.133:/rhs/brick1/d4r2
Brick9: 10.70.37.74:/rhs/brick1/d5r1
Brick10: 10.70.37.89:/rhs/brick1/d5r2
Brick11: 10.70.37.91:/rhs/brick1/d6r1
Brick12: 10.70.37.133:/rhs/brick1/d6r2
Options Reconfigured:
features.uss: off
features.quota-deem-statfs: on
features.quota: on
performance.readdir-ahead: on
auto-delete: disable
snap-max-soft-limit: 90
snap-max-hard-limit: 256
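
Of the options above, the two quota-related ones correspond to the following commands (the quota xlator is enabled via the quota subcommand rather than a plain volume set; volume name taken from the output above):

gluster volume quota vol0 enable                        # sets features.quota: on
gluster volume set vol0 features.quota-deem-statfs on   # df on the mount then reports usage against the quota limit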



Expected results:
The file rename should not cause the "Used" field to double; it should remain at the previously reported value (approximately 6GB).

Additional info:

node3 getfattr output:

[root@nfs3 ~]# getfattr -m . -d -e hex /rhs/brick1/d2r1/dir2/
getfattr: Removing leading '/' from absolute path names
# file: rhs/brick1/d2r1/dir2/
trusted.afr.vol0-client-2=0x000000000000000000000000
trusted.afr.vol0-client-3=0x000000000000000000000000
trusted.gfid=0xb8ab6c48442f46548123506ce4bf90a0
trusted.glusterfs.dht=0x0000000100000000000000002aaaaaa9
trusted.glusterfs.quota.00000000-0000-0000-0000-000000000001.contri=0x0000000180000000
trusted.glusterfs.quota.dirty=0x3000
trusted.glusterfs.quota.limit-set=0x0000000280000000ffffffffffffffff
trusted.glusterfs.quota.size=0x0000000180000000

[root@nfs3 ~]# getfattr -m . -d -e hex /rhs/brick1/d2r1/dir2/6GBfile-rename 
getfattr: Removing leading '/' from absolute path names
# file: rhs/brick1/d2r1/dir2/6GBfile-rename
trusted.afr.vol0-client-2=0x000000000000000000000000
trusted.afr.vol0-client-3=0x000000000000000000000000
trusted.gfid=0xfafd9101173f4cacac0e4fa2ab516cc1
trusted.glusterfs.quota.b8ab6c48-442f-4654-8123-506ce4bf90a0.contri=0x0000000180000000
trusted.pgfid.b8ab6c48-442f-4654-8123-506ce4bf90a0=0x00000001
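
The quota xattrs above are 8-byte byte counts in hex, so they can be sanity-checked with shell arithmetic (this decoding is my reading of the values; it matches what quota list reports):

# node3: directory size and the file's contri both decode to 6 GiB
echo $(( 0x0000000180000000 )) bytes        # 6442450944
echo $(( 0x0000000180000000 >> 30 )) GiB    # 6

# first 8 bytes of trusted.glusterfs.quota.limit-set: the 10GB hard limit
echo $(( 0x0000000280000000 >> 30 )) GiB    # 10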


node4 getfattr output (this is one of the nodes on which brick processes were brought down):

[root@nfs4 ~]# getfattr -m . -d -e hex /rhs/brick1/d2r2/dir2/
getfattr: Removing leading '/' from absolute path names
# file: rhs/brick1/d2r2/dir2/
trusted.afr.vol0-client-2=0x000000000000000000000000
trusted.afr.vol0-client-3=0x000000000000000000000000
trusted.gfid=0xb8ab6c48442f46548123506ce4bf90a0
trusted.glusterfs.dht=0x0000000100000000000000002aaaaaa9
trusted.glusterfs.quota.00000000-0000-0000-0000-000000000001.contri=0x0000000300000000
trusted.glusterfs.quota.dirty=0x3000
trusted.glusterfs.quota.limit-set=0x0000000280000000ffffffffffffffff
trusted.glusterfs.quota.size=0x0000000300000000

[root@nfs4 ~]# getfattr -m . -d -e hex /rhs/brick1/d2r2/dir2/6GBfile-rename 
getfattr: Removing leading '/' from absolute path names
# file: rhs/brick1/d2r2/dir2/6GBfile-rename
trusted.afr.vol0-client-2=0x000000000000000000000000
trusted.afr.vol0-client-3=0x000000000000000000000000
trusted.gfid=0xfafd9101173f4cacac0e4fa2ab516cc1
trusted.glusterfs.quota.b8ab6c48-442f-4654-8123-506ce4bf90a0.contri=0x0000000180000000
trusted.pgfid.b8ab6c48-442f-4654-8123-506ce4bf90a0=0x00000001
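
Decoding node4's xattrs the same way shows where the doubling comes from: the directory-level size/contri report 12 GiB while the file's own contri is still 6 GiB, matching the 12.0GB shown by quota list:

# node4 directory: trusted.glusterfs.quota.size and ...0001.contri
echo $(( 0x0000000300000000 >> 30 )) GiB    # 12  <- double-counted
# node4 file: trusted.glusterfs.quota.<parent-gfid>.contri
echo $(( 0x0000000180000000 >> 30 )) GiB    # 6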

Comment 2 Vijaikumar Mallikarjuna 2015-02-23 07:55:36 UTC
Patch submitted upstream http://review.gluster.org/#/c/9478/

Comment 3 Vijaikumar Mallikarjuna 2015-03-19 06:17:36 UTC
Patch merged upstream: http://review.gluster.org/#/c/9478/

Comment 5 Vijaikumar Mallikarjuna 2015-03-27 08:52:49 UTC
Patch submitted upstream: http://review.gluster.org/#/c/9954/

Upstream patches #9478 and #9954 fix the problem.

Comment 7 Anil Shah 2015-05-27 12:03:53 UTC
Able to reproduce this issue again on the glusterfs 3.7.0 build.
Hence moving this bug to ASSIGNED state.

Comment 10 Vijaikumar Mallikarjuna 2015-06-19 06:00:48 UTC
Patch submitted: https://code.engineering.redhat.com/gerrit/51097

Comment 17 Anil Shah 2015-07-01 13:23:03 UTC
[root@darkknightrises ~]# gluster v quota vol0 list
                  Path                   Hard-limit Soft-limit   Used  Available  Soft-limit exceeded? Hard-limit exceeded?
---------------------------------------------------------------------------------------------------------------------------
/                                         10.0GB       80%       6.0GB   4.0GB              No                   No


[qa1@client qa1dir]$ ls -ltrh
total 6.0G
-rw-rw-r--. 1 qa1 qa1 6.0G Jul  1 08:53 newtest6GBfile

Bug verified on build glusterfs-3.7.1-6.el6rhs

Comment 19 errata-xmlrpc 2015-07-29 04:37:38 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2015-1495.html

