Bug 977544 - gluster volume quota limit-usage now takes 40 seconds per command execution
Summary: gluster volume quota limit-usage now takes 40 seconds per command execution
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: glusterd
Version: 2.1
Hardware: Unspecified
OS: Unspecified
Priority: low
Severity: high
Target Milestone: ---
Assignee: Krutika Dhananjay
QA Contact: Saurabh
URL:
Whiteboard:
Duplicates: 1021089
Depends On:
Blocks:
 
Reported: 2013-06-24 20:48 UTC by Ben England
Modified: 2016-01-19 06:12 UTC
CC List: 9 users

Fixed In Version: glusterfs-3.4.0.36rhs-1
Doc Type: Bug Fix
Doc Text:
Previously, the quota limit-usage command updated the quota configuration file with high latency. With this update, the configuration file update logic is improved and the command completes much faster.
Clone Of:
Environment:
Last Closed: 2013-11-27 15:25:08 UTC
Embargoed:
vagarwal: needinfo+




Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2013:1769 0 normal SHIPPED_LIVE Red Hat Storage 2.1 enhancement and bug fix update #1 2013-11-27 20:17:39 UTC

Description Ben England 2013-06-24 20:48:58 UTC
Description of problem:

In a system with a large number of quotas to set, the gluster volume quota limit-usage command takes about 2 seconds per invocation, so if you needed 60000 quotas set up it would take well over a day (60000 x 2 s is roughly 33 hours).

Furthermore, it makes the gluster volume info output nearly unusable. If only a few quotas are set, it may make sense to display them inline; otherwise the output should just point users to a separate command for listing quotas. In any case, quota limits should be displayed after the other volume parameters.


Version-Release number of selected component (if applicable):

glusterfs-3.4.0.11rhs-2.el6rhs.x86_64
RHEL6.4
 
How reproducible:

every time

Steps to Reproduce:
1.  Create a gluster volume with the structure shown below (under Additional info)
2.  Create 120 directories
3.  Set limit-usage on each directory

Actual results:

It takes about 5 minutes to set the limits on all 120 directories.

Expected results:

It should take only a few seconds (long enough to set extended attributes on the directories).

Additional info:

[root@gprfs045 perftest]# time gluster volume quota perftest limit-usage /smf.d/file_srcdir/gprfc096/d15 1GB
limit set on /smf.d/file_srcdir/gprfc096/d15

real    0m2.209s
user    0m0.088s
sys     0m0.031s
[root@gprfs045 perftest]# gluster volume info
 
Volume Name: perftest
Type: Distributed-Replicate
Volume ID: c643fc59-48d2-45fc-91d0-816dfca96830
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: gprfs045-10ge:/mnt/brick0/brick
Brick2: gprfs046-10ge:/mnt/brick0/brick
Brick3: gprfs047-10ge:/mnt/brick0/brick
Brick4: gprfs048-10ge:/mnt/brick0/brick
Options Reconfigured:
features.limit-usage: /smf.d/file_srcdir/gprfc077/d00:1GB,/smf.d/file_srcdir/gprfc077/d01:1GB,/smf.d/file_srcdir/gprfc077/d02:1GB,/smf.d/file_srcdir/gprfc077/d03:1GB,/smf.d/file_srcdir/gprfc077/d04:1GB,/smf.d/file_srcdir/gprfc077/d05:1GB,/smf.d/file_srcdir/gprfc077/d06:1GB,/smf.d/file_srcdir/gprfc077/d07:1GB,/smf.d/file_srcdir/gprfc077/d08:1GB,/smf.d/file_srcdir/gprfc077/d09:1GB,/smf.d/file_srcdir/gprfc077/d10:1GB,/smf.d/file_srcdir/gprfc077/d11:1GB,/smf.d/file_srcdir/gprfc077/d12:1GB,/smf.d/file_srcdir/gprfc077/d13:1GB,/smf.d/file_srcdir/gprfc077/d14:1GB,/smf.d/file_srcdir/gprfc077/d15:1GB,/smf.d/file_srcdir/gprfc089/d00:1GB,/smf.d/file_srcdir/gprfc089/d01:1GB,/smf.d/file_srcdir/gprfc089/d02:1GB,/smf.d/file_srcdir/gprfc089/d03:1GB,/smf.d/file_srcdir/gprfc089/d05:1GB,/smf.d/file_srcdir/gprfc089/d06:1GB,/smf.d/file_srcdir/gprfc089/d07:1GB,/smf.d/file_srcdir/gprfc089/d08:1GB,/smf.d/file_srcdir/gprfc089/d09:1GB,/smf.d/file_srcdir/gprfc089/d10:1GB,/smf.d/file_srcdir/gprfc089/d11:1GB,/smf.d/file_srcdir/gprfc089/d12:1GB,/smf.d/file_srcdir/gprfc089/d13:1GB,/smf.d/file_srcdir/gprfc089/d14:1GB,/smf.d/file_srcdir/gprfc089/d15:1GB,/smf.d/file_srcdir/gprfc090/d00:1GB,/smf.d/file_srcdir/gprfc090/d01:1GB,/smf.d/file_srcdir/gprfc090/d02:1GB,/smf.d/file_srcdir/gprfc090/d03:1GB,/smf.d/file_srcdir/gprfc090/d04:1GB,/smf.d/file_srcdir/gprfc090/d05:1GB,/smf.d/file_srcdir/gprfc090/d06:1GB,/smf.d/file_srcdir/gprfc090/d07:1GB,/smf.d/file_srcdir/gprfc090/d08:1GB,/smf.d/file_srcdir/gprfc090/d09:1GB,/smf.d/file_srcdir/gprfc090/d10:1GB,/smf.d/file_srcdir/gprfc090/d11:1GB,/smf.d/file_srcdir/gprfc090/d12:1GB,/smf.d/file_srcdir/gprfc090/d13:1GB,/smf.d/file_srcdir/gprfc090/d14:1GB,/smf.d/file_srcdir/gprfc090/d15:1GB,/smf.d/file_srcdir/gprfc091/d00:1GB,/smf.d/file_srcdir/gprfc091/d01:1GB,/smf.d/file_srcdir/gprfc091/d02:1GB,/smf.d/file_srcdir/gprfc091/d03:1GB,/smf.d/file_srcdir/gprfc091/d04:1GB,/smf.d/file_srcdir/gprfc091/d05:1GB,/smf.d/file_srcdir/gprfc091/d06:1GB,/smf.d/file_srcdir/gprfc091/d07:1GB,/smf.d/file_srcdir/gprfc091/d08:1GB,/smf.d/file_srcdir/gprfc091/d09:1GB,/smf.d/file_srcdir/gprfc091/d10:1GB,/smf.d/file_srcdir/gprfc091/d11:1GB,/smf.d/file_srcdir/gprfc091/d12:1GB,/smf.d/file_srcdir/gprfc091/d13:1GB,/smf.d/file_srcdir/gprfc091/d14:1GB,/smf.d/file_srcdir/gprfc091/d15:1GB,/smf.d/file_srcdir/gprfc092/d00:1GB,/smf.d/file_srcdir/gprfc092/d01:1GB,/smf.d/file_srcdir/gprfc092/d02:1GB,/smf.d/file_srcdir/gprfc092/d03:1GB,/smf.d/file_srcdir/gprfc092/d04:1GB,/smf.d/file_srcdir/gprfc092/d05:1GB,/smf.d/file_srcdir/gprfc092/d06:1GB,/smf.d/file_srcdir/gprfc092/d07:1GB,/smf.d/file_srcdir/gprfc092/d08:1GB,/smf.d/file_srcdir/gprfc092/d09:1GB,/smf.d/file_srcdir/gprfc092/d10:1GB,/smf.d/file_srcdir/gprfc092/d11:1GB,/smf.d/file_srcdir/gprfc092/d12:1GB,/smf.d/file_srcdir/gprfc092/d13:1GB,/smf.d/file_srcdir/gprfc092/d14:1GB,/smf.d/file_srcdir/gprfc092/d15:1GB,/smf.d/file_srcdir/gprfc094/d00:1GB,/smf.d/file_srcdir/gprfc094/d01:1GB,/smf.d/file_srcdir/gprfc094/d02:1GB,/smf.d/file_srcdir/gprfc094/d03:1GB,/smf.d/file_srcdir/gprfc094/d04:1GB,/smf.d/file_srcdir/gprfc094/d05:1GB,/smf.d/file_srcdir/gprfc094/d06:1GB,/smf.d/file_srcdir/gprfc094/d07:1GB,/smf.d/file_srcdir/gprfc094/d08:1GB,/smf.d/file_srcdir/gprfc094/d09:1GB,/smf.d/file_srcdir/gprfc094/d10:1GB,/smf.d/file_srcdir/gprfc094/d11:1GB,/smf.d/file_srcdir/gprfc094/d12:1GB,/smf.d/file_srcdir/gprfc094/d13:1GB,/smf.d/file_srcdir/gprfc094/d14:1GB,/smf.d/file_srcdir/gprfc094/d15:1GB,/smf.d/file_srcdir/gprfc095/d00:1GB,/smf.d/file_srcdir/gprfc095/d01:1GB,/smf.d/file_srcdir/gprfc095/d02:1GB,/smf.
d/file_srcdir/gprfc095/d03:1GB,/smf.d/file_srcdir/gprfc095/d04:1GB,/smf.d/file_srcdir/gprfc095/d05:1GB,/smf.d/file_srcdir/gprfc095/d06:1GB,/smf.d/file_srcdir/gprfc095/d07:1GB,/smf.d/file_srcdir/gprfc095/d08:1GB,/smf.d/file_srcdir/gprfc095/d09:1GB,/smf.d/file_srcdir/gprfc095/d10:1GB,/smf.d/file_srcdir/gprfc095/d11:1GB,/smf.d/file_srcdir/gprfc095/d12:1GB,/smf.d/file_srcdir/gprfc095/d13:1GB,/smf.d/file_srcdir/gprfc095/d14:1GB,/smf.d/file_srcdir/gprfc095/d15:1GB,/smf.d/file_srcdir/gprfc096/d00:1GB,/smf.d/file_srcdir/gprfc096/d01:1GB,/smf.d/file_srcdir/gprfc096/d02:1GB,/smf.d/file_srcdir/gprfc096/d03:1GB,/smf.d/file_srcdir/gprfc096/d04:1GB,/smf.d/file_srcdir/gprfc096/d05:1GB,/smf.d/file_srcdir/gprfc096/d06:1GB,/smf.d/file_srcdir/gprfc096/d07:1GB,/smf.d/file_srcdir/gprfc096/d08:1GB,/smf.d/file_srcdir/gprfc096/d09:1GB,/smf.d/file_srcdir/gprfc096/d10:1GB,/smf.d/file_srcdir/gprfc096/d11:1GB,/smf.d/file_srcdir/gprfc096/d12:1GB,/smf.d/file_srcdir/gprfc096/d13:1GB,/smf.d/file_srcdir/gprfc096/d14:1GB,/smf.d/file_srcdir/gprfc089/d04:1GB,/smf.d/file_srcdir/gprfc096/d15:1GB
features.quota: on

Comment 2 Krutika Dhananjay 2013-07-16 10:33:17 UTC
Could you please let me know what your expected time to complete one 'quota limit-usage' transaction is, if 2 seconds is a long time?

Comment 3 Ben England 2013-07-18 12:07:38 UTC
Sayan asked whether we could support 60000 quotas like NetApp; that is what got me thinking about this. IMHO the quota command should just mark the directory as having a quota, rather than doing all the processing to calculate the space used by the directory at that time. Just marking the directory should not take more than about a quarter of a second, I would guess (enough time to set extended attributes on the directory). A background process could then calculate how much space is currently in use in the directory.
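To make the suggestion concrete, here is a purely illustrative Python sketch of the "mark now, account later" idea; this is not how glusterd implements quota, and the xattr name is a made-up placeholder:

import os
import subprocess

def mark_quota(dirpath: str, limit_bytes: int) -> None:
    # Cheap step: record the limit as an extended attribute on the directory.
    # "user.quota.limit" is a hypothetical key, used here for illustration only.
    os.setxattr(dirpath, "user.quota.limit", str(limit_bytes).encode())

def account_usage(dirpath: str) -> int:
    # Expensive step, suitable for a background worker: compute current usage.
    out = subprocess.run(["du", "-sb", dirpath], check=True,
                         capture_output=True, text=True)
    return int(out.stdout.split()[0])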

Comment 5 spandura 2013-10-17 13:16:55 UTC
Tested this issue on build "glusterfs 3.4.0.35rhs built on Oct 15 2013 14:06:04" 
 
On a distributed-replicate volume with 1000 top-level directories, setting limit-usage on each directory takes about 40 seconds.

root@rhs-client11 [Oct-17-2013-13:09:15] >time gluster v quota vol_dis_rep limit-usage /user1 10GB
volume quota : success

real	0m37.559s
user	0m0.097s
sys	0m0.016s

root@rhs-client11 [Oct-17-2013-13:10:08] >time gluster v quota vol_dis_rep limit-usage /user2 10GB
volume quota : success

real	0m37.716s
user	0m0.099s
sys	0m0.014s

root@rhs-client11 [Oct-17-2013-13:12:07] >time gluster v quota vol_dis_rep limit-usage /user3 10GB
volume quota : success

real	0m37.862s
user	0m0.098s
sys	0m0.017s


root@rhs-client12 [Oct-17-2013-12:53:52] >gluster v info vol_dis_rep
 
Volume Name: vol_dis_rep
Type: Distributed-Replicate
Volume ID: 7f8013d4-dd04-47ee-8e7d-f096ac2a1597
Status: Started
Number of Bricks: 6 x 2 = 12
Transport-type: tcp
Bricks:
Brick1: rhs-client11:/rhs/bricks/brick1
Brick2: rhs-client12:/rhs/bricks/brick2
Brick3: rhs-client13:/rhs/bricks/brick3
Brick4: rhs-client14:/rhs/bricks/brick4
Brick5: rhs-client11:/rhs/bricks/brick5
Brick6: rhs-client12:/rhs/bricks/brick6
Brick7: rhs-client13:/rhs/bricks/brick7
Brick8: rhs-client14:/rhs/bricks/brick8
Brick9: rhs-client11:/rhs/bricks/brick9
Brick10: rhs-client12:/rhs/bricks/brick10
Brick11: rhs-client13:/rhs/bricks/brick11
Brick12: rhs-client14:/rhs/bricks/brick12
Options Reconfigured:
diagnostics.count-fop-hits: on
diagnostics.latency-measurement: on
features.quota: on

===========================================
Meminfo of the machine:- 
===========================================
root@rhs-client11 [Oct-17-2013-13:13:17] >free -tg
             total       used       free     shared    buffers     cached
Mem:            15         13          1          0          0         11
-/+ buffers/cache:          2         13
Swap:            7          0          7
Total:          23         13          9

Comment 6 Krutika Dhananjay 2013-10-18 09:25:47 UTC
ROOT CAUSE OF THE PROBLEM DESCRIBED IN COMMENT #5:
-------------------------------------------------

When limit-usage is invoked for a particular path, glusterd needs to store the gfid of that path in quota.conf. To eliminate duplicate entries in quota.conf (for example, when a quota limit is set a second time on the same directory, glusterd must not write two copies of the gfid into quota.conf), glusterd reads one gfid at a time from quota.conf and compares each gfid read with the gfid of the path on which the limit is being set in the current operation.

Therefore, the number of reads = the number of entries in quota.conf, and
the number of comparisons <= the number of entries in quota.conf.

This means that the greater the number of entries in quota.conf, the slower the operation gets.
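For illustration, here is a minimal Python sketch (not the glusterd source) of the duplicate check described above; it assumes, purely for the sketch, that quota.conf is a fixed-size header followed by raw 16-byte gfids:

import uuid

HEADER_LEN = 16  # hypothetical header length, for illustration only

def limit_already_set(quota_conf_path: str, new_gfid: uuid.UUID) -> bool:
    # Scan every stored gfid and compare it against the gfid being added.
    with open(quota_conf_path, "rb") as conf:
        conf.seek(HEADER_LEN)
        while True:
            raw = conf.read(16)      # one read per existing entry
            if len(raw) < 16:
                return False         # reached end of file: no duplicate
            if uuid.UUID(bytes=raw) == new_gfid:
                return True          # duplicate found: do not append again

# Setting limits on N directories one after another therefore costs roughly
# 1 + 2 + ... + N reads and comparisons, so the total work grows quadratically
# with the number of limits, which is why each successive limit-usage call
# gets slower.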

Comment 11 Krutika Dhananjay 2013-10-23 13:44:57 UTC
*** Bug 1021089 has been marked as a duplicate of this bug. ***

Comment 12 Saurabh 2013-10-24 13:35:30 UTC
Ben,

I executed a test related to this bug on glusterfs-3.4.0.36rhs.

The test sets a quota limit on 64000 directories of one volume in a four-node cluster.

The 64000 directories are created in the root of the volume.

To set the limit on these directories, I ran a loop over batches of 10000 directories, executing the same loop 7 times, with the last loop covering the remaining 4000 directories (roughly as sketched below).
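For reference, here is a rough Python sketch of that batching approach; the volume name, directory naming scheme and limit value are illustrative placeholders, not the exact ones used in this test:

import subprocess
import time

VOLNAME = "dist-rep-vol"   # hypothetical volume name, for illustration only

def set_limits(start: int, end: int, limit: str = "10GB") -> None:
    # Set limit-usage on /dir<start> .. /dir<end> and report the elapsed time.
    t0 = time.monotonic()
    for i in range(start, end + 1):
        subprocess.run(
            ["gluster", "volume", "quota", VOLNAME,
             "limit-usage", "/dir{:05d}".format(i), limit],
            check=True, capture_output=True)
    print("{}-{}: {:.1f}s".format(start, end, time.monotonic() - t0))

# e.g. set_limits(1, 10000); set_limits(10001, 20000); ...; set_limits(60001, 64000)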

For each batch, I captured the time it took:

for 1-10000
real	34m37.672s
user	21m35.960s
sys	4m29.646s

for 10001-20000
real	35m43.942s
user	21m33.960s
sys	4m35.986s

for 20001-30000
real	36m47.739s
user	21m33.709s
sys	4m38.902s

for 30001-40000
real	37m30.599s
user	21m28.254s
sys	4m35.139s

for 40001-50000
real	38m35.286s
user	21m30.369s
sys	4m39.452s

for 50001-60000
real	39m41.455s
user	21m31.950s
sys	4m40.954s

for 60001-64000,
real	16m8.127s
user	8m36.569s
sys	1m52.466s


Also, I collected the time taken to list them, using the command "time gluster volume quota $volname list":

listing 10000 takes,
real	0m23.917s
user	0m1.418s
sys	0m1.386s

listing 30000 takes,
real	1m40.601s
user	0m4.975s
sys	0m4.862s

listing 40000 takes,
real	2m15.123s
user	0m5.017s
sys	0m4.659s

listing 50000 takes,
real	2m52.655s
user	0m6.249s
sys	0m5.633s

listing 60000 takes,
real	3m32.462s
user	0m8.202s
sys	0m6.909s

listing 64000 takes,
real	3m36.193s
user	0m9.551s
sys	0m8.665s

Could you clarify whether this test suffices to verify the fix for the original issue? If not, please suggest how to verify it.

Note: there was no data in the directories.

Comment 13 Ben England 2013-10-29 17:14:00 UTC
Saurabh, thanks for pinging me; I had forgotten to check back on this bz.

So here's the worst case for establishing quotas:

for 50001-60000
real	39m41.455s
user	21m31.950s
sys	4m40.954s

That's 2381 seconds for 10000 quotas, or about 4 quotas/sec, roughly an 8x improvement over the ~2 seconds per command originally reported; I think that's pretty good. This is a very extreme case; typically people do not constantly adjust quotas, so it would be a one-time cost for the volume.

Is the gluster volume info command fixed so it doesn't output the quotas right in there?  Is there a different command to output the quotas?

A separate test would be to create a bunch of files and delete them with 60000 quotas in place. See https://docspace.corp.redhat.com/docs/DOC-156688 for ideas on how to do that; I did it with 400 quotas or so.

Comment 14 Saurabh 2013-10-30 06:34:48 UTC
(In reply to Ben England from comment #13)
> Saurabh, thanks for pinging me, I had forgot to check back on this bz.  
> 
> Sp here's the worst case for establishing quotas:
> 
> for 50001-60000
> real	39m41.455s
> user	21m31.950s
> sys	4m40.954s
> 
> that's 2381 seconds for 10000 quotas or 4 quotas/sec, 8x improvement, I
> think that's pretty good.    This is a very extreme case, typically people
> do not constantly adjust quotas so it would be a one-time cost for the
> volume.
> 

> Is the gluster volume info command fixed so it doesn't output the quotas
> right in there?  

Saurabh >> yes, this is fixed.

> Is there a different command to output the quotas?

Saurabh >> to get information about the directories that have a limit set, one needs to use "gluster volume quota $volname list"
or "gluster volume quota $volname list <path>"

> 
> A separate test would be to create a bunch of files and delete them with
> 60000 quotas.  see https://docspace.corp.redhat.com/docs/DOC-156688 for
> ideas on how to do that.  I did it with 400 quotas or something like that.

Comment 15 Saurabh 2013-11-04 11:27:25 UTC
Moving it to VERIFIED based on comment 12 and comment 13.

Comment 16 errata-xmlrpc 2013-11-27 15:25:08 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2013-1769.html

