Bug 1406781
| Summary: | Enabling Quota takes significantly long time and it has regressed | | |
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | Nag Pavan Chilakam <nchilaka> |
| Component: | quota | Assignee: | hari gowtham <hgowtham> |
| Status: | CLOSED WONTFIX | QA Contact: | Rahul Hinduja <rhinduja> |
| Severity: | medium | Docs Contact: | |
| Priority: | medium | | |
| Version: | rhgs-3.2 | CC: | amukherj, rcyriac, rhinduja, rhs-bugs, storage-qa-internal |
| Target Milestone: | --- | Keywords: | Regression, ZStream |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | | |
| : | 1409521 (view as bug list) | Environment: | |
| Last Closed: | 2018-11-19 09:16:50 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 1409521 | | |
Description
Nag Pavan Chilakam, 2016-12-21 13:03:46 UTC
Technically I don't see this as a blocker, since it is a management operation and there is no functional impact; but it is a regression, so I would call it out for blocker triage.

This is a result of commit c2865e83d414e375443adac0791887c8adf444f2, which makes the crawling process per-brick to speed it up. These per-brick crawlers are started serially in glusterd_quota_initiate_fs_crawl, so the time taken for quota enable grows linearly with the number of bricks in the volume.

Suggested fix: currently a double fork is done per brick to prevent the process from blocking while collecting the exit status from the immediate child. The waitpid calls can instead be made after all the crawlers have been forked, to reduce the total time.

Time taken with this commit:

a) For a volume with 3 bricks:

```
[root@rhs-cli-08 glusterfs]# gluster v create v1 10.8.152.8:/export/sdb/b1 10.8.152.8:/export/sdb/b2 10.8.152.8:/export/sdb/b3 force
volume create: v1: success: please start the volume to access data
[root@rhs-cli-08 glusterfs]# gluster v start v1
volume start: v1: success
[root@rhs-cli-08 glusterfs]# time gluster v quota v1 enable
volume quota : success

real    0m16.625s
user    0m0.089s
sys     0m0.018s
```

b) For a volume with 16 bricks:

```
[root@rhs-cli-08 glusterfs]# gluster v create v2 10.8.152.8:/export/sdb/c1 10.8.152.8:/export/sdb/c2 10.8.152.8:/export/sdb/c3 10.8.152.8:/export/sdb/c4 10.8.152.8:/export/sdb/c5 10.8.152.8:/export/sdb/c6 10.8.152.8:/export/sdb/c7 10.8.152.8:/export/sdb/c8 10.8.152.8:/export/sdb/c9 10.8.152.8:/export/sdb/c10 10.8.152.8:/export/sdb/c11 10.8.152.8:/export/sdb/c12 10.8.152.8:/export/sdb/c13 10.8.152.8:/export/sdb/c14 10.8.152.8:/export/sdb/c15 10.8.152.8:/export/sdb/c16 force
volume create: v2: success: please start the volume to access data
[root@rhs-cli-08 glusterfs]# gluster v start v2
[root@rhs-cli-08 glusterfs]# time gluster v quota v2 enable
volume quota : success

real    1m11.180s
user    0m0.087s
sys     0m0.025s
```

With the previous commit it took about 8 s for both volumes v1 and v2.

Upstream patch: https://review.gluster.org/16383

The patch needs rework with a different approach; hence changing the status.

Hi, I'm closing this bug as we are not actively working on Quota. -Hari.