Whenever I enabled quota in 3.1.3 or earlier, it used to take effect immediately; the CLI would respond in about 1 second or less. In 3.2, however, it takes about 25 seconds, which is far too slow. I know we are now enabling inode-quota as well, but that still does not explain why it takes this long. I compared this on the same setup with the same configuration on 3.1.3 and 3.2. Findings below:

3.1.3 (3.7.9-12)
=============
[root@dhcp35-37 ~]# time gluster v quota disperse enable
volume quota : success

real    0m1.435s
user    0m0.069s
sys     0m0.081s

3.2 (3.8.4-9)
===========
[root@dhcp35-37 ~]# time gluster v quota disperse enable
volume quota : success

real    0m25.261s
user    0m0.003s
sys     0m0.026s
Technically I don't see this as a blocker, as it's a management operation and there is no functional impact here.
It is a regression, though; I would put it up for blocker triage.
This is a result of commit c2865e83d414e375443adac0791887c8adf444f2, which makes the crawling process per-brick in order to speed it up. These per-brick crawlers are started serially in glusterd_quota_initiate_fs_crawl, so the time taken for quota enable is linearly proportional to the number of bricks in the volume.

Suggested fix: currently a double fork (per brick) is done to prevent the process from blocking while collecting the exit status from the immediate child. The waitpid calls can instead be made after all the crawlers have been forked, to reduce the time.

Time taken with this commit:

a) For a volume with 3 bricks:

[root@rhs-cli-08 glusterfs]# gluster v create v1 10.8.152.8:/export/sdb/b1 10.8.152.8:/export/sdb/b2 10.8.152.8:/export/sdb/b3 force
volume create: v1: success: please start the volume to access data
[root@rhs-cli-08 glusterfs]# gluster v start v1
volume start: v1: success
[root@rhs-cli-08 glusterfs]# time gluster v quota v1 enable
volume quota : success

real    0m16.625s
user    0m0.089s
sys     0m0.018s

b) For a volume with 16 bricks:

[root@rhs-cli-08 glusterfs]# gluster v create v2 10.8.152.8:/export/sdb/c1 10.8.152.8:/export/sdb/c2 10.8.152.8:/export/sdb/c3 10.8.152.8:/export/sdb/c4 10.8.152.8:/export/sdb/c5 10.8.152.8:/export/sdb/c6 10.8.152.8:/export/sdb/c7 10.8.152.8:/export/sdb/c8 10.8.152.8:/export/sdb/c9 10.8.152.8:/export/sdb/c10 10.8.152.8:/export/sdb/c11 10.8.152.8:/export/sdb/c12 10.8.152.8:/export/sdb/c13 10.8.152.8:/export/sdb/c14 10.8.152.8:/export/sdb/c15 10.8.152.8:/export/sdb/c16 force
volume create: v2: success: please start the volume to access data
[root@rhs-cli-08 glusterfs]# gluster v start v2
[root@rhs-cli-08 glusterfs]# time gluster v quota v2 enable
volume quota : success

real    1m11.180s
user    0m0.087s
sys     0m0.025s

With the previous commit it took about 8s for both volumes v1 and v2.
upstream patch : https://review.gluster.org/16383
The patch needs rework with a different approach; hence changing the status.
Hi, I'm closing this bug as we are not actively working on Quota. -Hari.