Bug 1300307 - [tiering]: Quota object limits not adhered to, in a tiered volume
Summary: [tiering]: Quota object limits not adhered to, in a tiered volume
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: quota
Version: rhgs-3.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: low
Target Milestone: ---
Target Release: ---
Assignee: Raghavendra G
QA Contact: Vinayak Papnoi
URL:
Whiteboard:
Depends On:
Blocks: 1302257 1306138
 
Reported: 2016-01-20 12:53 UTC by Sweta Anandpara
Modified: 2018-11-19 08:40 UTC
CC List: 7 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
: 1302257
Environment:
Last Closed: 2018-11-19 08:40:51 UTC
Embargoed:



Description Sweta Anandpara 2016-01-20 12:53:56 UTC
Description of problem:
Had an 8-node cluster, with a 2 x (4+2) disperse volume as the cold tier and a 2 x 2 distributed-replicate volume as the hot tier. Quota object limits were set for a subdirectory on the disperse volume before attaching the tier. Created files from the fuse mountpoint, and the expected 'quota limit exceeded' error did not occur even after the specified number of objects (files/dirs) had been created. All creations were successful.

Version-Release number of selected component (if applicable):
glusterfs-3.7.5-15.el7rhgs.x86_64

How reproducible: 1:1


Steps that I followed (a rough command sketch follows the list):
1. Created a 2*(4+2) disperse volume 'nash'
2. Enabled quota on 'nash'
3. Mounted the volume over fuse, and created directories 'dir1' and 'dir2'
4. Created subdirectories and files under dir1
5. Set the quota limit-objects for the path '/dir1' on the server to 40
6. Set the soft-timeout and hard-timeout to 0
7. Set the limit-usage on '/' and '/dir1'
8. Created about 30 files on the mountpoint, which went through successfully
9. Attached a 2*2 volume as hot tier and created 15 files over the same fuse mountpoint - which again went through successfully
10. gluster v quota o2 list-objects 
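
The steps above correspond roughly to the command sequence below. This is a reconstruction for reference, not a copy of the actual session: host names, brick paths, the mountpoint, the file names and the limit-usage sizes are illustrative placeholders, and on this build the tier is attached with 'attach-tier' (newer builds also accept 'gluster volume tier <vol> attach').

# Step 1: a 2 x (4+2) disperse volume needs 12 bricks (hosts/paths made up here)
gluster volume create nash disperse 6 redundancy 2 \
    server{1..6}:/bricks/nash/b1 server{1..6}:/bricks/nash/b2
gluster volume start nash

# Step 2: enable quota on the volume
gluster volume quota nash enable

# Steps 3-4: fuse mount, create dir1/dir2 and some content under dir1
mkdir -p /mnt/nash
mount -t glusterfs server1:/nash /mnt/nash
mkdir /mnt/nash/dir1 /mnt/nash/dir2
mkdir /mnt/nash/dir1/sub{1..2}
touch /mnt/nash/dir1/seed{1..5}

# Steps 5-7: object limit of 40 on /dir1, timeouts set to 0, usage limits on / and /dir1
gluster volume quota nash limit-objects /dir1 40
gluster volume quota nash soft-timeout 0
gluster volume quota nash hard-timeout 0
gluster volume quota nash limit-usage / 20GB        # sizes are assumed, not taken from the report
gluster volume quota nash limit-usage /dir1 4GB

# Step 8: ~30 files before the tier is attached (placed under dir1 so the /dir1 object limit applies)
touch /mnt/nash/dir1/pre_tier_file{1..30}

# Step 9: attach a 2 x 2 hot tier, then create 15 more files over the same mount
gluster volume attach-tier nash replica 2 \
    server7:/bricks/nash/hot1 server8:/bricks/nash/hot1 \
    server7:/bricks/nash/hot2 server8:/bricks/nash/hot2
touch /mnt/nash/dir1/post_tier_file{1..15}

# Step 10: list the accounted objects and the configured limit
gluster volume quota nash list-objects /dir1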

Actual results:
Step 9 succeeds in creating all 15 files, even once the 40-object limit on /dir1 is crossed
Step 10 fails to show the configured limit (list-objects reports 'Limit not set')

Expected results:
File creation under /dir1 should start failing with a quota error once the object limit of 40 is met
Step 10 should show the limit set to 40


Additional info:

[root@dhcp43-48 o2]# 
[root@dhcp43-48 o2]# 
[root@dhcp43-48 o2]# gluster peer status
Number of Peers: 7

Hostname: dhcp43-174.lab.eng.blr.redhat.com
Uuid: a9308802-eb28-4c95-9aac-d1b1ef49a2a4
State: Peer in Cluster (Connected)

Hostname: 10.70.42.117
Uuid: d3c7d1be-1d63-4a2b-9513-e712a47380b6
State: Peer in Cluster (Connected)

Hostname: 10.70.42.17
Uuid: 28c4955c-9ced-4589-a7ce-ea21bf55a61a
State: Peer in Cluster (Connected)

Hostname: 10.70.42.133
Uuid: b4bb2936-453b-495f-b577-740e7f954cca
State: Peer in Cluster (Connected)

Hostname: 10.70.42.113
Uuid: 4fcd4342-579f-4249-8d84-31ede8b13cab
State: Peer in Cluster (Connected)

Hostname: 10.70.43.197
Uuid: 8b186eeb-e2d8-41d6-845d-7e54deffed11
State: Peer in Cluster (Connected)

Hostname: 10.70.42.116
Uuid: b4677c8a-6e93-4fba-87d4-f961eb29fa75
State: Peer in Cluster (Connected)
[root@dhcp43-48 o2]# 
[root@dhcp43-48 o2]# 
[root@dhcp43-48 o2]# cd
[root@dhcp43-48 ~]# 
[root@dhcp43-48 ~]# 
[root@dhcp43-48 ~]# 
[root@dhcp43-48 ~]# gluster v list
nash
nash_clone
o2
ozone
[root@dhcp43-48 ~]# 
[root@dhcp43-48 ~]# 
[root@dhcp43-48 ~]# gluster v info nash
 
Volume Name: nash
Type: Tier
Volume ID: 0c5c3ee5-d20a-4d23-9cc1-30deda46c34c
Status: Started
Number of Bricks: 22
Transport-type: tcp
Hot Tier :
Hot Tier Type : Distributed-Replicate
Number of Bricks: 2 x 2 = 4
Brick1: 10.70.43.197:/run/gluster/snaps/37ddc9fdc0c6417f9e570ea76dfc034e/brick1/nash
Brick2: 10.70.43.174:/run/gluster/snaps/37ddc9fdc0c6417f9e570ea76dfc034e/brick2/nash
Brick3: 10.70.43.197:/run/gluster/snaps/37ddc9fdc0c6417f9e570ea76dfc034e/brick3/nash
Brick4: 10.70.43.174:/run/gluster/snaps/37ddc9fdc0c6417f9e570ea76dfc034e/brick4/nash
Cold Tier:
Cold Tier Type : Distributed-Replicate
Number of Bricks: 6 x 3 = 18
Brick5: 10.70.43.48:/run/gluster/snaps/37ddc9fdc0c6417f9e570ea76dfc034e/brick5/nash
Brick6: 10.70.42.113:/run/gluster/snaps/37ddc9fdc0c6417f9e570ea76dfc034e/brick6/nash
Brick7: 10.70.42.116:/run/gluster/snaps/37ddc9fdc0c6417f9e570ea76dfc034e/brick7/nash
Brick8: 10.70.42.117:/run/gluster/snaps/37ddc9fdc0c6417f9e570ea76dfc034e/brick8/nash
Brick9: 10.70.42.17:/run/gluster/snaps/37ddc9fdc0c6417f9e570ea76dfc034e/brick9/nash
Brick10: 10.70.42.133:/run/gluster/snaps/37ddc9fdc0c6417f9e570ea76dfc034e/brick10/nash
Brick11: 10.70.43.48:/run/gluster/snaps/37ddc9fdc0c6417f9e570ea76dfc034e/brick11/nash
Brick12: 10.70.42.113:/run/gluster/snaps/37ddc9fdc0c6417f9e570ea76dfc034e/brick12/nash
Brick13: 10.70.42.116:/run/gluster/snaps/37ddc9fdc0c6417f9e570ea76dfc034e/brick13/nash
Brick14: 10.70.42.117:/run/gluster/snaps/37ddc9fdc0c6417f9e570ea76dfc034e/brick14/nash
Brick15: 10.70.42.17:/run/gluster/snaps/37ddc9fdc0c6417f9e570ea76dfc034e/brick15/nash
Brick16: 10.70.42.133:/run/gluster/snaps/37ddc9fdc0c6417f9e570ea76dfc034e/brick16/nash
Brick17: 10.70.43.48:/run/gluster/snaps/37ddc9fdc0c6417f9e570ea76dfc034e/brick17/nash
Brick18: 10.70.42.113:/run/gluster/snaps/37ddc9fdc0c6417f9e570ea76dfc034e/brick18/nash
Brick19: 10.70.42.116:/run/gluster/snaps/37ddc9fdc0c6417f9e570ea76dfc034e/brick19/nash
Brick20: 10.70.42.117:/run/gluster/snaps/37ddc9fdc0c6417f9e570ea76dfc034e/brick20/nash
Brick21: 10.70.42.17:/run/gluster/snaps/37ddc9fdc0c6417f9e570ea76dfc034e/brick21/nash
Brick22: 10.70.42.133:/run/gluster/snaps/37ddc9fdc0c6417f9e570ea76dfc034e/brick22/nash
Options Reconfigured:
features.barrier: disable
features.uss: enable
cluster.tier-promote-frequency: 100
cluster.watermark-low: 40
features.quota-deem-statfs: on
features.inode-quota: on
features.quota: on
cluster.tier-mode: cache
features.ctr-enabled: on
performance.readdir-ahead: on
[root@dhcp43-48 ~]# 
[root@dhcp43-48 ~]# gluster v status nash
Status of volume: nash
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Hot Bricks:
Brick 10.70.43.197:/run/gluster/snaps/37ddc
9fdc0c6417f9e570ea76dfc034e/brick1/nash     49166     0          Y       26705
Brick 10.70.43.174:/run/gluster/snaps/37ddc
9fdc0c6417f9e570ea76dfc034e/brick2/nash     49166     0          Y       28325
Brick 10.70.43.197:/run/gluster/snaps/37ddc
9fdc0c6417f9e570ea76dfc034e/brick3/nash     49167     0          Y       26724
Brick 10.70.43.174:/run/gluster/snaps/37ddc
9fdc0c6417f9e570ea76dfc034e/brick4/nash     49167     0          Y       28344
Cold Bricks:
Brick 10.70.43.48:/run/gluster/snaps/37ddc9
fdc0c6417f9e570ea76dfc034e/brick5/nash      49176     0          Y       32759
Brick 10.70.42.113:/run/gluster/snaps/37ddc
9fdc0c6417f9e570ea76dfc034e/brick6/nash     49176     0          Y       28213
Brick 10.70.42.116:/run/gluster/snaps/37ddc
9fdc0c6417f9e570ea76dfc034e/brick7/nash     49176     0          Y       10844
Brick 10.70.42.117:/run/gluster/snaps/37ddc
9fdc0c6417f9e570ea76dfc034e/brick8/nash     49176     0          Y       28705
Brick 10.70.42.17:/run/gluster/snaps/37ddc9
fdc0c6417f9e570ea76dfc034e/brick9/nash      49176     0          Y       28808
Brick 10.70.42.133:/run/gluster/snaps/37ddc
9fdc0c6417f9e570ea76dfc034e/brick10/nash    49177     0          Y       28867
Brick 10.70.43.48:/run/gluster/snaps/37ddc9
fdc0c6417f9e570ea76dfc034e/brick11/nash     49177     0          Y       322  
Brick 10.70.42.113:/run/gluster/snaps/37ddc
9fdc0c6417f9e570ea76dfc034e/brick12/nash    49177     0          Y       28232
Brick 10.70.42.116:/run/gluster/snaps/37ddc
9fdc0c6417f9e570ea76dfc034e/brick13/nash    49177     0          Y       10863
Brick 10.70.42.117:/run/gluster/snaps/37ddc
9fdc0c6417f9e570ea76dfc034e/brick14/nash    49177     0          Y       28724
Brick 10.70.42.17:/run/gluster/snaps/37ddc9
fdc0c6417f9e570ea76dfc034e/brick15/nash     49177     0          Y       28827
Brick 10.70.42.133:/run/gluster/snaps/37ddc
9fdc0c6417f9e570ea76dfc034e/brick16/nash    49178     0          Y       28886
Brick 10.70.43.48:/run/gluster/snaps/37ddc9
fdc0c6417f9e570ea76dfc034e/brick17/nash     49178     0          Y       344  
Brick 10.70.42.113:/run/gluster/snaps/37ddc
9fdc0c6417f9e570ea76dfc034e/brick18/nash    49178     0          Y       28251
Brick 10.70.42.116:/run/gluster/snaps/37ddc
9fdc0c6417f9e570ea76dfc034e/brick19/nash    49178     0          Y       10882
Brick 10.70.42.117:/run/gluster/snaps/37ddc
9fdc0c6417f9e570ea76dfc034e/brick20/nash    49178     0          Y       28745
Brick 10.70.42.17:/run/gluster/snaps/37ddc9
fdc0c6417f9e570ea76dfc034e/brick21/nash     49178     0          Y       28846
Brick 10.70.42.133:/run/gluster/snaps/37ddc
9fdc0c6417f9e570ea76dfc034e/brick22/nash    49179     0          Y       28905
Snapshot Daemon on localhost                49194     0          Y       373  
NFS Server on localhost                     2049      0          Y       5978 
Self-heal Daemon on localhost               N/A       N/A        Y       5986 
Quota Daemon on localhost                   N/A       N/A        Y       5994 
Snapshot Daemon on 10.70.42.133             49188     0          Y       28925
NFS Server on 10.70.42.133                  2049      0          Y       747  
Self-heal Daemon on 10.70.42.133            N/A       N/A        Y       755  
Quota Daemon on 10.70.42.133                N/A       N/A        Y       763  
Snapshot Daemon on 10.70.42.17              49187     0          Y       28866
NFS Server on 10.70.42.17                   2049      0          Y       632  
Self-heal Daemon on 10.70.42.17             N/A       N/A        Y       641  
Quota Daemon on 10.70.42.17                 N/A       N/A        Y       650  
Snapshot Daemon on 10.70.43.197             49176     0          Y       26744
NFS Server on 10.70.43.197                  2049      0          Y       30941
Self-heal Daemon on 10.70.43.197            N/A       N/A        Y       30949
Quota Daemon on 10.70.43.197                N/A       N/A        Y       30957
Snapshot Daemon on 10.70.42.116             49187     0          Y       10902
NFS Server on 10.70.42.116                  2049      0          Y       15136
Self-heal Daemon on 10.70.42.116            N/A       N/A        Y       15144
Quota Daemon on 10.70.42.116                N/A       N/A        Y       15152
Snapshot Daemon on dhcp43-174.lab.eng.blr.r
edhat.com                                   49176     0          Y       28364
NFS Server on dhcp43-174.lab.eng.blr.redhat
.com                                        2049      0          Y       32580
Self-heal Daemon on dhcp43-174.lab.eng.blr.
redhat.com                                  N/A       N/A        Y       32588
Quota Daemon on dhcp43-174.lab.eng.blr.redh
at.com                                      N/A       N/A        Y       32604
Snapshot Daemon on 10.70.42.117             49187     0          Y       28771
NFS Server on 10.70.42.117                  2049      0          Y       533  
Self-heal Daemon on 10.70.42.117            N/A       N/A        Y       542  
Quota Daemon on 10.70.42.117                N/A       N/A        Y       550  
Snapshot Daemon on 10.70.42.113             49187     0          Y       28271
NFS Server on 10.70.42.113                  2049      0          Y       32440
Self-heal Daemon on 10.70.42.113            N/A       N/A        Y       32448
Quota Daemon on 10.70.42.113                N/A       N/A        Y       32456
 
Task Status of Volume nash
------------------------------------------------------------------------------
Task                 : Tier migration      
ID                   : 11394343-1c20-49de-b2ca-1b4ee4ff86d4
Status               : in progress         
 
[root@dhcp43-48 ~]# 
[root@dhcp43-48 ~]# 
[root@dhcp43-48 o2]# gluster v quota o2 list /dir2
                  Path                   Hard-limit  Soft-limit      Used  Available  Soft-limit exceeded? Hard-limit exceeded?
-------------------------------------------------------------------------------------------------------------------------------
/dir2                                      4.0GB     80%(3.2GB)   0Bytes   4.0GB              No                   No
[root@dhcp43-48 o2]#
[root@dhcp43-48 o2]#
[root@dhcp43-48 o2]# gluster v quota o2 list /
                  Path                   Hard-limit  Soft-limit      Used  Available  Soft-limit exceeded? Hard-limit exceeded?
-------------------------------------------------------------------------------------------------------------------------------
/                                         20.0GB     80%(16.0GB)   60.5KB  20.0GB              No                   No
[root@dhcp43-48 o2]#
[root@dhcp43-48 o2]# gluster v quota o2 list-objects /dir1
                  Path                   Hard-limit   Soft-limit      Files       Dirs     Available  Soft-limit exceeded? Hard-limit exceeded?
-----------------------------------------------------------------------------------------------------------------------------------------------
/dir1                                    Limit not set
[root@dhcp43-48 o2]#
[root@dhcp43-48 o2]#
[root@dhcp43-48 ~]# 
[root@dhcp43-48 ~]# rpm -qa | grep gluster
glusterfs-libs-3.7.5-15.el7rhgs.x86_64
glusterfs-3.7.5-15.el7rhgs.x86_64
glusterfs-api-3.7.5-15.el7rhgs.x86_64
glusterfs-fuse-3.7.5-15.el7rhgs.x86_64
glusterfs-rdma-3.7.5-15.el7rhgs.x86_64
gluster-nagios-addons-0.2.5-1.el7rhgs.x86_64
glusterfs-geo-replication-3.7.5-15.el7rhgs.x86_64
vdsm-gluster-4.16.30-1.3.el7rhgs.noarch
gluster-nagios-common-0.2.3-1.el7rhgs.noarch
python-gluster-3.7.5-15.el7rhgs.noarch
glusterfs-client-xlators-3.7.5-15.el7rhgs.x86_64
glusterfs-cli-3.7.5-15.el7rhgs.x86_64
glusterfs-server-3.7.5-15.el7rhgs.x86_64
glusterfs-debuginfo-3.7.5-15.el7rhgs.x86_64
[root@dhcp43-48 ~]#

Comment 2 Sweta Anandpara 2016-01-20 13:01:39 UTC
Sosreports copied at: http://rhsqe-repo.lab.eng.blr.redhat.com/sosreports/1300307/

Comment 3 Nag Pavan Chilakam 2016-01-20 13:20:25 UTC
sosreports @ 
[nchilaka@rhsqe-repo nchilaka]$ pwd
/home/repo/sosreports/nchilaka/bug.1300301

Comment 5 Manikandan 2016-01-27 10:13:09 UTC
Hi,

Even with a plain distribute volume, inode quotas do not work correctly after an add-brick. This is because the "limit-objects" xattr was not being healed when a healing operation is performed on the newly added bricks.

So, moving the component to quota.
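
As a side note (not from the original report): one way to see whether the object limit actually reached a given brick is to dump the quota xattrs of the directory directly on the brick, since the object limit is kept in a separate xattr (trusted.glusterfs.quota.limit-objects) from the usage limit (trusted.glusterfs.quota.limit-set). If newly added or hot-tier bricks carry only the latter, that matches the missing heal described above. The brick path below is a placeholder taken from 'gluster v status'.

# run on the host that owns the brick; <brick-path> is the brick root from 'gluster v status'
getfattr -d -m 'trusted.glusterfs.quota' -e hex <brick-path>/dir1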

Comment 6 Vijaikumar Mallikarjuna 2016-02-02 11:02:23 UTC
Patch submitted upstream: http://review.gluster.org/#/c/13299/

Comment 9 Mike McCune 2016-03-28 23:10:48 UTC
This bug was accidentally moved from POST to MODIFIED via an error in automation; please see mmccune with any questions.

Comment 10 Vijaikumar Mallikarjuna 2016-04-07 03:09:42 UTC
Fix "http://review.gluster.org/#/c/13299/" is available in 3.1.3 as part of rebase

Comment 11 Atin Mukherjee 2016-08-03 05:22:04 UTC
Can this be tested with 3.1.3 and marked as closed with the current release?

Comment 12 Sweta Anandpara 2016-08-03 08:09:00 UTC
Yes, it can.

Will verify this in the coming week on a plain/tiered setup and update the results.
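
For reference, a minimal verification pass on a fixed build could reuse the same scenario (volume and path names follow the reproduction steps above; the exact error text may differ):

# the limit should survive attach-tier and be listed correctly
gluster volume quota nash list-objects /dir1      # expect Hard-limit 40, not "Limit not set"

# creations beyond 40 objects under /dir1 should now be rejected at the fuse mount
touch /mnt/nash/dir1/verify_file{1..20}           # expect "Disk quota exceeded" once the limit is hit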

Comment 13 Atin Mukherjee 2016-08-16 14:55:36 UTC
Based on comment 10, moving it to ON_QA

