Bug 1499644
| Field | Value |
|---|---|
| Summary | Eager lock should be present for both metadata and data transactions |
| Product | [Red Hat Storage] Red Hat Gluster Storage |
| Component | replicate |
| Reporter | Pranith Kumar K <pkarampu> |
| Assignee | Pranith Kumar K <pkarampu> |
| Status | CLOSED ERRATA |
| QA Contact | Vijay Avuthu <vavuthu> |
| Severity | unspecified |
| Priority | high |
| Version | rhgs-3.3 |
| CC | pkarampu, ravishankar, rhinduja, rhs-bugs, shberry, sheggodu, storage-qa-internal |
| Target Milestone | --- |
| Target Release | RHGS 3.4.0 |
| Hardware | Unspecified |
| OS | Unspecified |
| Fixed In Version | glusterfs-3.12.2-6 |
| Doc Type | If docs needed, set a value |
| Cloned As | 1549606 (view as bug list) |
| Last Closed | 2018-09-04 06:36:24 UTC |
| Type | Bug |
| Regression | --- |
| Mount Type | --- |
| Bug Blocks | 1480188, 1491785, 1503134, 1528566, 1549606, 1583733 |
Description
Pranith Kumar K
2017-10-09 07:43:22 UTC
Assigning high priority since the bug is targeted for 3.4.0.

Patches:
- https://code.engineering.redhat.com/gerrit/133659 storage/posix: Add active-fd-count option in gluster
- https://code.engineering.redhat.com/gerrit/133660 cluster/afr: Switch to active-fd-count for open-fd checks
- https://code.engineering.redhat.com/gerrit/131944 cluster/afr: Remove unused code paths
- https://code.engineering.redhat.com/gerrit/131945 cluster/afr: Make AFR eager-locking similar to EC

Update:
========
Build used: glusterfs-3.12.2-16.el7rhgs.x86_64

> Discussed with Pranith on validation steps.

Scenario:
1) Create a 1 x 3 volume and start it
2) Set cluster.post-op-delay-secs to 100
3) Enable the shard feature
4) Set shard-block-size to 32MB
5) Enable volume profile and clear the stat info
6) Write a 1 GB file from the mount point
7) Check the FINODELK count

The same scenario was also executed for a 2 x 3 volume. Below are the results compared against the old build (3.8.4-54):

| FOP | NEW CLIENT | OLD CLIENT |
|---|---|---|
| FXATTROP | 8147 | 8541 |
| INODELK | 28 | |
| FINODELK | 47 | 25910 |
| WRITE | 8192 | 8192 |

> From the above results, there is a huge decrease in FINODELK (from ~25K to 47) in the current build.

Moving status to VERIFIED.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2018:2607
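The shape of the FINODELK drop follows from what eager locking does: instead of bracketing every write with its own inode lock/unlock pair, the client acquires the lock once, lets subsequent writes piggyback on it, and sends a single unlock after the stream goes idle (the post-op delay). The sketch below is a hypothetical accounting model to illustrate that effect, not the actual AFR implementation; the function name and chunk math are invented for illustration.

```python
def finodelk_ops(num_writes, eager=False):
    """Model the number of FINODELK operations (locks + unlocks)
    issued for a stream of writes to a single file.

    Without eager locking, every write sends its own lock and unlock.
    With eager locking, the first write acquires the lock, later
    writes reuse it, and one deferred unlock is sent when the stream
    goes idle after post-op-delay-secs.
    """
    if num_writes == 0:
        return 0
    if not eager:
        return 2 * num_writes   # lock + unlock per write
    return 2                    # one lock up front, one unlock at idle

# 1 GB written in 128 KB chunks gives 8192 writes, matching the
# WRITE count in the profile output above.
writes = (1024 * 1024 * 1024) // (128 * 1024)
print(finodelk_ops(writes))              # per-write locking: 16384
print(finodelk_ops(writes, eager=True))  # eager locking: 2
```

In this simplified model the per-write count (16384) is of the same order as the old client's observed 25910, while the eager count stays constant regardless of file size, consistent with the new client's 47 (the real numbers also include metadata transactions and lock contention handling, which this sketch ignores).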