Bug 1292073

Summary: quota + tiering: files are created even after disk quota exceeds
Product: [Red Hat Storage] Red Hat Gluster Storage
Component: quota
Version: rhgs-3.1
Hardware: x86_64
OS: Linux
Severity: urgent
Priority: unspecified
Status: CLOSED WORKSFORME
Type: Bug
Doc Type: Bug Fix
Reporter: Anil Shah <ashah>
Assignee: Vijaikumar Mallikarjuna <vmallika>
QA Contact: Vinayak Papnoi <vpapnoi>
CC: ashah, byarlaga, rcyriac, rhs-bugs, sanandpa, smohan, storage-qa-internal, vmallika, vpapnoi
Last Closed: 2016-04-07 03:07:15 UTC

Description Anil Shah 2015-12-16 12:20:30 UTC
Description of problem:

After attaching a tier, files can still be created on the volume even though the disk quota has been exceeded.

Version-Release number of selected component (if applicable):

[root@rhs001 ~]# rpm -qa | grep glusterfs
glusterfs-3.7.5-11.el7rhgs.x86_64
glusterfs-fuse-3.7.5-11.el7rhgs.x86_64
glusterfs-cli-3.7.5-11.el7rhgs.x86_64
glusterfs-geo-replication-3.7.5-11.el7rhgs.x86_64
glusterfs-libs-3.7.5-11.el7rhgs.x86_64
glusterfs-client-xlators-3.7.5-11.el7rhgs.x86_64
glusterfs-api-3.7.5-11.el7rhgs.x86_64
glusterfs-server-3.7.5-11.el7rhgs.x86_64
glusterfs-debuginfo-3.7.5-11.el7rhgs.x86_64

How reproducible:

1/1

Steps to Reproduce:
1. Create a 2x2 distributed-replicate volume.
2. FUSE-mount the volume.
3. Enable quota and set limit-usage.
4. Create files on the mount point until the volume is about 95% full.
5. Attach a 2x2 distributed-replicate volume as a hot tier.
6. Write into the files so that promotions and demotions start happening (see the command sketch below).

Actual results:

[root@rhs001 ~]# gluster v quota repvol list
                  Path                   Hard-limit  Soft-limit      Used  Available  Soft-limit exceeded? Hard-limit exceeded?
-------------------------------------------------------------------------------------------------------------------------------
/                                          7.0GB     80%(5.6GB)   11.3GB  0Bytes             Yes                  Yes


After promotions and demotions, writes on the mount point still succeed, even though the disk quota has been exceeded.

===============================================================
getfattr for cold tier
===============================================================
[root@rhs001 ~]# getfattr -d -m. -e hex /rhs/brick1/b1/
getfattr: Removing leading '/' from absolute path names
# file: rhs/brick1/b1/
security.selinux=0x73797374656d5f753a6f626a6563745f723a756e6c6162656c65645f743a733000
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.dht=0x000000010000000000000000aab517ec
trusted.glusterfs.quota.dirty=0x3000
trusted.glusterfs.quota.limit-set.2=0x00000001c0000000ffffffffffffffff
trusted.glusterfs.quota.size.2=0x00000000f3c04e0000000000000000270000000000000003
trusted.glusterfs.volume-id=0x44fd62e2a2e641f0995a14ebf9e39190
trusted.tier.tier-dht=0x00000001000000000000000099a1279c
trusted.tier.tier-dht.commithash=0x3330313231393239383000
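
An aside on reading these values (a sketch; assuming the usual layout of the quota xattrs: limit-set is the hard limit followed by the soft limit as two big-endian 64-bit fields, and size is used bytes followed by file and directory counts):

# hard limit, from the first 8 bytes of trusted.glusterfs.quota.limit-set.2
printf '%d\n' 0x00000001c0000000    # 7516192768 bytes = 7 GiB, the 7.0GB limit above
# 0xffffffffffffffff in the second half means the soft limit is unset (default 80%)

# used bytes accounted on this brick, from the first 8 bytes of quota.size.2
printf '%d\n' 0x00000000f3c04e00    # 4089466368 bytes, ~3.8 GiB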

===============================================================
getfattr for hot brick
===============================================================

[root@rhs001 ~]# getfattr -d -m. -e hex /rhs/brick5/b01/
getfattr: Removing leading '/' from absolute path names
# file: rhs/brick5/b01/
security.selinux=0x73797374656d5f753a6f626a6563745f723a756e6c6162656c65645f743a733000
trusted.afr.dirty=0x000000000000000000000000
trusted.afr.repvol-client-5=0x000000000000000000000000
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.dht=0x00000001000000007fffe63affffffff
trusted.glusterfs.quota.dirty=0x3000
trusted.glusterfs.quota.size.2=0x00000000ce40460000000000000000230000000000000003
trusted.glusterfs.volume-id=0x44fd62e2a2e641f0995a14ebf9e39190

Expected results:

Writes to the volume should fail once the disk quota is exceeded.
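
For example (a sketch; the file name and mount point are assumed), a write such as this should fail once the limit is crossed:

dd if=/dev/zero of=/mnt/repvol/newfile bs=1M count=10
# expected to fail with "Disk quota exceeded" (EDQUOT)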

Additional info:

[root@rhs001 ~]# gluster v info
 
Volume Name: repvol
Type: Tier
Volume ID: 44fd62e2-a2e6-41f0-995a-14ebf9e39190
Status: Started
Number of Bricks: 8
Transport-type: tcp
Hot Tier :
Hot Tier Type : Distributed-Replicate
Number of Bricks: 2 x 2 = 4
Brick1: 10.70.47.3:/rhs/brick5/b04
Brick2: 10.70.47.2:/rhs/brick5/b03
Brick3: 10.70.47.145:/rhs/brick5/b02
Brick4: 10.70.47.143:/rhs/brick5/b01
Cold Tier:
Cold Tier Type : Distributed-Replicate
Number of Bricks: 2 x 2 = 4
Brick5: 10.70.47.143:/rhs/brick1/b1
Brick6: 10.70.47.145:/rhs/brick1/b2
Brick7: 10.70.47.2:/rhs/brick1/b3
Brick8: 10.70.47.3:/rhs/brick1/b4
Options Reconfigured:
cluster.tier-mode: cache
features.ctr-enabled: on
features.quota-deem-statfs: on
performance.readdir-ahead: on
features.quota: on
features.inode-quota: on

Comment 2 Vijaikumar Mallikarjuna 2015-12-18 05:52:41 UTC
I found a similar issue with a plain DHT volume (a command sketch follows this list):

1) Created a volume with 1 brick
2) Enabled quota
3) Set limits on '/' and a sub-directory '/dir'
4) Added a brick
5) Mounted the volume and sent lookups
6) Directories get created on the newly added brick, but the layout xattr and the quota limit xattrs are not healed
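
A rough equivalent in commands (a sketch only; /mnt/vol1 is an assumed mount point, the brick paths come from the volume info at the end of this comment, and the 1KB limit is read off the 0x400 in the limit-set.1 xattrs below):

gluster volume create vol1 rh1:/var/opt/gluster/bricks/b1/dir
gluster volume start vol1
gluster volume quota vol1 enable
gluster volume quota vol1 limit-usage / 1KB
gluster volume quota vol1 limit-usage /dir 1KB
gluster volume add-brick vol1 rh1:/var/opt/gluster/bricks/b2/dir
mount -t glusterfs rh1:/vol1 /mnt/vol1
find /mnt/vol1 > /dev/null    # trigger lookups on every directory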

Xattrs from existing brick root dir:
# getfattr -d -m. -e hex /var/opt/gluster/bricks/b1/dir
getfattr: Removing leading '/' from absolute path names
# file: var/opt/gluster/bricks/b1/dir
security.selinux=0x756e636f6e66696e65645f753a6f626a6563745f723a66696c655f743a733000
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.dht=0x000000010000000000000000ffffffff
trusted.glusterfs.quota.dirty=0x3000
trusted.glusterfs.quota.limit-set.1=0x0000000000000400ffffffffffffffff
trusted.glusterfs.quota.size.1=0x000000000000000000000000000000010000000000000004
trusted.glusterfs.volume-id=0x9b6dc7b03c0d49c6aab4229cbdc1b659


Xattrs from existing brick sub dir:
# getfattr -d -m. -e hex /var/opt/gluster/bricks/b1/dir/dir
getfattr: Removing leading '/' from absolute path names
# file: var/opt/gluster/bricks/b1/dir/dir
security.selinux=0x756e636f6e66696e65645f753a6f626a6563745f723a66696c655f743a733000
trusted.gfid=0x7888a8aadf7346a389493d667d322898
trusted.glusterfs.dht=0x000000010000000000000000ffffffff
trusted.glusterfs.quota.00000000-0000-0000-0000-000000000001.contri.1=0x000000000000000000000000000000010000000000000001
trusted.glusterfs.quota.dirty=0x3000
trusted.glusterfs.quota.limit-set.1=0x0000000000000400ffffffffffffffff
trusted.glusterfs.quota.size.1=0x000000000000000000000000000000010000000000000001


----------------------------
Xattrs from newly added brick root dir:
# getfattr -d -m. -e hex /var/opt/gluster/bricks/b2/dir
getfattr: Removing leading '/' from absolute path names
# file: var/opt/gluster/bricks/b2/dir
security.selinux=0x756e636f6e66696e65645f753a6f626a6563745f723a66696c655f743a733000
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.quota.dirty=0x3000
trusted.glusterfs.quota.size.1=0x000000000000000000000000000000010000000000000004
trusted.glusterfs.volume-id=0x9b6dc7b03c0d49c6aab4229cbdc1b659


Xattrs from newly added brick sub dir:
# getfattr -d -m. -e hex /var/opt/gluster/bricks/b2/dir/dir
getfattr: Removing leading '/' from absolute path names
# file: var/opt/gluster/bricks/b2/dir/dir
security.selinux=0x756e636f6e66696e65645f753a6f626a6563745f723a66696c655f743a733000
trusted.gfid=0x7888a8aadf7346a389493d667d322898
trusted.glusterfs.quota.00000000-0000-0000-0000-000000000001.contri.1=0x000000000000000000000000000000000000000000000001
trusted.glusterfs.quota.size.1=0x000000000000000000000000000000000000000000000001

Volume info:
Volume Name: vol1
Type: Distribute
Volume ID: 9b6dc7b0-3c0d-49c6-aab4-229cbdc1b659
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: rh1:/var/opt/gluster/bricks/b1/dir
Brick2: rh1:/var/opt/gluster/bricks/b2/dir
Options Reconfigured:
features.quota-deem-statfs: on
features.inode-quota: on
features.quota: on
performance.readdir-ahead: on

Comment 3 Vijaikumar Mallikarjuna 2015-12-21 08:59:20 UTC
The quota limit xattrs are not healed during directory creation when a brick is added or a tier is attached.
Changing the component to DHT.
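
The missing heal can be seen directly by comparing the quota xattrs on the old and new bricks (a sketch, using the brick paths from comment 2):

diff <(getfattr --absolute-names -d -m trusted.glusterfs.quota -e hex /var/opt/gluster/bricks/b1/dir/dir) \
     <(getfattr --absolute-names -d -m trusted.glusterfs.quota -e hex /var/opt/gluster/bricks/b2/dir/dir)
# the limit-set.1 line appears only for the old brick b1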

Comment 4 Vijaikumar Mallikarjuna 2016-02-02 11:05:08 UTC
Hi Anil,

The problem I mentioned in comment #2 turned out to be a problem with my environment:
/usr/local/sbin was not in my PATH, so the hook script was not executed.
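
A quick way to check for this kind of problem (a sketch; the quota heal on add-brick/attach-tier is done by a post hook script, which in turn needs the gluster binaries to be resolvable via PATH):

case ":$PATH:" in
    *:/usr/local/sbin:*) echo "/usr/local/sbin is in PATH" ;;
    *) echo "/usr/local/sbin is MISSING from PATH" ;;
esac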
 
Could you please try to re-create this problem?

Thanks,
Vijay

Comment 7 Vijaikumar Mallikarjuna 2016-04-07 03:07:15 UTC
I am not able to re-create the problem.
Please file a new bug if the issue is found again in 3.1.3.