Bug 1423368

Summary: [inode-quota] Deep directory structure creation is not respecting inode-quota limits
Product: [Red Hat Storage] Red Hat Gluster Storage
Reporter: Prasad Desala <tdesala>
Component: quota
Assignee: bugs <bugs>
Status: CLOSED DEFERRED
QA Contact: Rahul Hinduja <rhinduja>
Severity: high
Priority: low
Version: rhgs-3.2
CC: amukherj, atumball, rcyriac, rhs-bugs, storage-qa-internal
Target Milestone: ---
Keywords: ZStream
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard: Enforcement,Inode_quota
Last Closed: 2018-10-11 09:42:45 UTC
Type: Bug
Attachments: Script used for creating deep directory structure

Description Prasad Desala 2017-02-17 06:50:30 UTC
Created attachment 1251793 [details]
Script used for creating deep directory structure

Description of problem:
=======================
On the volume root ('/') I set the inode-quota limit to 103 objects and started creating a deep directory structure of length 10 and depth 40. Since the limit was 103, directory creation should have failed once the object count reached 103, but we were able to create far more directories than the inode limit allows.

The "list-objects" output below shows a directory count of 413.

[root@Node1 ~]# gluster v quota distrep list-objects
                  Path                   Hard-limit   Soft-limit      Files       Dirs     Available  Soft-limit exceeded? Hard-limit exceeded?
-----------------------------------------------------------------------------------------------------------------------------------------------
/                                              103       80%(82)          0       413           0             Yes                  Yes


[root@dhcp37-190 ~]# getfattr -d -e hex -m . /bricks/brick0/b0/
getfattr: Removing leading '/' from absolute path names
# file: bricks/brick0/b0/
security.selinux=0x756e636f6e66696e65645f753a6f626a6563745f723a676c7573746572645f627269636b5f743a733000
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.dht=0x0000000100000000555555547ffffffd
trusted.glusterfs.quota.dirty=0x3000
trusted.glusterfs.quota.limit-objects.2=0x0000000000000067ffffffffffffffff
trusted.glusterfs.quota.size.2=0x00000000000000000000000000000000000000000000019d ------> shows 413 created directories
trusted.glusterfs.volume-id=0xec528d0897de4c2c9a36d423610e8c8b
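
Both quota xattrs are packed big-endian 64-bit integers; as far as I know, quota.size.2 holds (bytes used, file count, dir count) and limit-objects.2 starts with the hard object limit. Decoding the relevant fields matches the list-objects output above:

# First int64 of limit-objects.2: the hard object limit
$ printf '%d\n' 0x67
103
# Last int64 of quota.size.2: the directory count
$ printf '%d\n' 0x19d
413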

Version-Release number of selected component (if applicable):
3.8.4-14.el7rhgs.x86_64

How reproducible:
always

Steps to Reproduce:
==================
1) Create a distributed-replicate volume and start it.
2) Enable quota on the volume, "gluster volume quota <vol-name> enable".
3) FUSE mount the volume.
4) Set the inode limit on the volume:
gluster v quota <vol-name> limit-objects / 103
5) Using the attached script, start creating a deep directory structure (a sketch follows this list).
With length 10 and depth 40, I was able to create 413 directories.
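
The attached script (attachment 1251793) is not reproduced here. Below is a minimal shell sketch of an equivalent workload, assuming the volume is FUSE-mounted at /mnt/distrep and that "length 10 and depth 40" means 10 sibling trees each nested 40 levels deep (both assumptions; the original script is Python and not shown):

#!/bin/bash
# Hypothetical stand-in for attachment 1251793 (the original is a Python
# script). Builds 10 sibling trees, each nested 40 directories deep, and
# stops at the first mkdir failure (expected: EDQUOT at 103 objects).
MOUNT=/mnt/distrep                       # assumed FUSE mount point
for tree in $(seq 1 10); do
    path="$MOUNT/tree$tree"
    for level in $(seq 1 40); do
        path="$path/level$level"
        mkdir -p "$path" || { echo "mkdir failed at $path"; exit 1; }
    done
done

With enforcement working, the loop should abort soon after 103 objects exist; on this setup it runs far past the limit.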

Actual results:
===============
Deep directory structure creation does not respect the inode-quota limit. The limit was set to 103, yet 413 directories could be created.

Expected results:
=================
Deep directory creation should fail when it reaches the inode-quota limit.
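
Concretely, once the object count reaches the hard limit, mkdir should fail with EDQUOT, along these lines (illustrative session, hypothetical path):

$ mkdir /mnt/distrep/tree1/level1/newdir
mkdir: cannot create directory '/mnt/distrep/tree1/level1/newdir': Disk quota exceeded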

Additional info:
================
The Python script used to create the deep directory structure is attached.

Comment 5 Sanoj Unnikrishnan 2017-11-10 06:52:39 UTC
Inode quota has the same hard and soft timeouts as directory (size) quota.

However, as seen above, the inode-quota limit can be overshot significantly within a short time window. We need to reconsider whether the soft and hard timeouts should be honored for inode quota, and what the performance implications are if we don't.
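
For context, these are the same tunables size quota uses; to the best of my recollection (treat the defaults below as assumptions), the enforcer trusts its cached accounting for up to soft-timeout below the soft limit and hard-timeout above it, so every mkdir landing inside that window escapes enforcement:

# Assumed defaults: 60s below the soft limit, 5s once it is crossed
gluster volume set distrep features.soft-timeout 60
gluster volume set distrep features.hard-timeout 5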

Comment 8 Amar Tumballi 2018-10-11 09:42:45 UTC
This bug was not considered for the last 2 releases, and we are not considering it high priority for the next 2 releases either.