Bug 1406781

Summary: Enabling Quota takes a significantly long time and has regressed

Product: [Red Hat Storage] Red Hat Gluster Storage
Component: quota
Version: rhgs-3.2
Hardware: Unspecified
OS: Unspecified
Severity: medium
Priority: medium
Status: CLOSED WONTFIX
Keywords: Regression, ZStream
Reporter: Nag Pavan Chilakam <nchilaka>
Assignee: hari gowtham <hgowtham>
QA Contact: Rahul Hinduja <rhinduja>
CC: amukherj, rcyriac, rhinduja, rhs-bugs, storage-qa-internal
Type: Bug
Bug Blocks: 1409521
Last Closed: 2018-11-19 09:16:50 UTC

Description Nag Pavan Chilakam 2016-12-21 13:03:46 UTC
Whenever I enabled quota on 3.1.3 or earlier, it used to take effect immediately;
i.e., the CLI used to respond in about 1 second or less.

In 3.2, however, it takes about 25 seconds, which is far too slow.
I know that inode-quota is now enabled as well, but I still don't see why it should take this much time.

I compared the same setup and configuration on 3.1.3 and 3.2.

Below are the findings:


3.1.3 (3.7.9-12)
=============
[root@dhcp35-37 ~]# time gluster v quota disperse enable
volume quota : success

real	0m1.435s
user	0m0.069s
sys	0m0.081s


3.2 (3.8.4-9)
=========== 
[root@dhcp35-37 ~]# time gluster v quota disperse enable
volume quota : success

real	0m25.261s
user	0m0.003s
sys	0m0.026s

Comment 3 Atin Mukherjee 2016-12-21 13:09:39 UTC
Technically I don't see it as a blocker, as it's a management operation and there is no functional impact here.

Comment 4 Nag Pavan Chilakam 2016-12-26 10:19:49 UTC
It is a regression; I would call for it to go through blocker triage.

Comment 7 Sanoj Unnikrishnan 2017-01-02 11:06:52 UTC
This is a result of commit c2865e83d414e375443adac0791887c8adf444f2.
That commit makes the crawling process per-brick in order to speed it up.
However, the per-brick crawlers are started serially in glusterd_quota_initiate_fs_crawl, so the time taken for quota enable grows linearly with the number of bricks in the volume.

Suggested fix:
Currently a double fork is done per brick to prevent the process from blocking while collecting the exit status of the immediate child.
The waitpid calls could instead be made after all the crawlers have been forked, which would reduce the total time (see the sketch below).
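
To illustrate the suggested approach, here is a minimal sketch of the "fork all crawlers first, collect with waitpid afterwards" pattern. This is not the actual glusterd code: the brick paths and the crawl_brick() helper are hypothetical, and the double fork used by glusterd is simplified to a single fork per brick.

/* Sketch only: fork one crawler per brick up front, then reap them all,
 * so the total time is roughly that of the slowest brick rather than
 * the sum over all bricks. */
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

static void crawl_brick(const char *brick_path)
{
    /* Placeholder for the per-brick filesystem crawl. */
    printf("crawling %s in pid %d\n", brick_path, (int)getpid());
}

int main(void)
{
    /* Hypothetical brick list, for illustration only. */
    const char *bricks[] = { "/export/sdb/b1", "/export/sdb/b2", "/export/sdb/b3" };
    int nbricks = sizeof(bricks) / sizeof(bricks[0]);
    pid_t pids[sizeof(bricks) / sizeof(bricks[0])];
    int i, status;

    /* Start every crawler without waiting in between, so the crawls
     * run in parallel. */
    for (i = 0; i < nbricks; i++) {
        pids[i] = fork();
        if (pids[i] == 0) {
            crawl_brick(bricks[i]);
            _exit(0);
        } else if (pids[i] < 0) {
            perror("fork");
        }
    }

    /* Collect exit statuses only after all crawlers have been forked. */
    for (i = 0; i < nbricks; i++) {
        if (pids[i] > 0)
            waitpid(pids[i], &status, 0);
    }

    return 0;
}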


Time taken with this commit:
a) For a volume with 3 bricks:

[root@rhs-cli-08 glusterfs]# gluster v create v1 10.8.152.8:/export/sdb/b1 10.8.152.8:/export/sdb/b2 10.8.152.8:/export/sdb/b3 force
volume create: v1: success: please start the volume to access data

[root@rhs-cli-08 glusterfs]# gluster v start v1
volume start: v1: success
[root@rhs-cli-08 glusterfs]# time gluster v quota v1 enable
volume quota : success
real	0m16.625s
user	0m0.089s
sys	0m0.018s

b) For a volume with 16 bricks:

[root@rhs-cli-08 glusterfs]# gluster v create v2 10.8.152.8:/export/sdb/c1 10.8.152.8:/export/sdb/c2 10.8.152.8:/export/sdb/c3 10.8.152.8:/export/sdb/c4 10.8.152.8:/export/sdb/c5 10.8.152.8:/export/sdb/c6 10.8.152.8:/export/sdb/c7 10.8.152.8:/export/sdb/c8 10.8.152.8:/export/sdb/c9 10.8.152.8:/export/sdb/c10 10.8.152.8:/export/sdb/c11 10.8.152.8:/export/sdb/c12 10.8.152.8:/export/sdb/c13 10.8.152.8:/export/sdb/c14 10.8.152.8:/export/sdb/c15 10.8.152.8:/export/sdb/c16 force
volume create: v2: success: please start the volume to access data
[root@rhs-cli-08 glusterfs]# 
[root@rhs-cli-08 glusterfs]# gluster v start v2

[root@rhs-cli-08 glusterfs]# time gluster v quota v2 enable
volume quota : success

real	1m11.180s
user	0m0.087s
sys	0m0.025s


With the previous commit, it took about 8 s for both volumes v1 and v2.

Comment 8 Atin Mukherjee 2017-02-20 13:40:48 UTC
Upstream patch: https://review.gluster.org/16383

Comment 14 Sanoj Unnikrishnan 2017-09-26 10:33:01 UTC
The patch needs to be reworked with a different approach; hence changing the status.

Comment 21 hari gowtham 2018-11-19 09:16:50 UTC
Hi,

I'm closing this bug as we are not actively working on Quota.

-Hari.