Bug 1384906
| Summary: | arbiter volume write performance is bad with sharding | | |
|---|---|---|---|
| Product: | [Community] GlusterFS | Reporter: | Ravishankar N <ravishankar> |
| Component: | arbiter | Assignee: | Ravishankar N <ravishankar> |
| Status: | CLOSED CURRENTRELEASE | QA Contact: | |
| Severity: | unspecified | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | mainline | CC: | bugs, maorong.hu |
| Target Milestone: | --- | Keywords: | Triaged |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | glusterfs-3.10.0 | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | | |
| : | 1385224 1385226 (view as bug list) | Environment: | |
| Last Closed: | 2017-03-06 17:29:11 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 1375125, 1385224, 1385226 | | |
Description
Ravishankar N
2016-10-14 10:57:52 UTC
REVIEW: http://review.gluster.org/15641 (afr: Take full locks in arbiter only for data transactions) posted (#1) for review on master by Ravishankar N (ravishankar)

COMMIT: http://review.gluster.org/15641 committed in master by Pranith Kumar Karampuri (pkarampu)

------

commit 3a97486d7f9d0db51abcb13dcd3bc9db935e3a60
Author: Ravishankar N <ravishankar>
Date: Fri Oct 14 16:09:08 2016 +0530

afr: Take full locks in arbiter only for data transactions

Problem: Sharding exposed a bug in arbiter configurations where `dd` throughput was extremely slow. The shard xlator sends an fxattrop to update the file size immediately after a writev. Arbiter was incorrectly overriding the LLONG_MAX-1 start offset (used for metadata-domain locks) for this fxattrop, causing the inodelk to be taken in the data domain. Since the preceding writev had not yet released its lock (AFR does a 'lazy' unlock if the write succeeds on all bricks), this degraded to a blocking lock, causing extra lock/unlock calls and delays.

Fix: Modify flock.l_len and flock.l_start to take full locks only for data transactions.

Change-Id: I906895da2f2d16813607e6c906cb4defb21d7c3b
BUG: 1384906
Signed-off-by: Ravishankar N <ravishankar>
Reported-by: Max Raba <max.raba>
Reviewed-on: http://review.gluster.org/15641
Smoke: Gluster Build System <jenkins.org>
NetBSD-regression: NetBSD Build System <jenkins.org>
CentOS-regression: Gluster Build System <jenkins.org>
Reviewed-by: Pranith Kumar Karampuri <pkarampu>

*** Bug 1388837 has been marked as a duplicate of this bug. ***

This bug is being closed because a release that should address the reported issue has been made available. If the problem is still not fixed with glusterfs-3.10.0, please open a new bug report.

glusterfs-3.10.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/gluster-users/2017-February/030119.html
[2] https://www.gluster.org/pipermail/gluster-users/
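To illustrate the idea behind the fix, here is a minimal, self-contained C sketch: only data transactions (writev and similar) take a full-file lock range (l_start = 0, l_len = 0), while metadata transactions such as the shard xlator's fxattrop keep their LLONG_MAX-1 offset so they stay in the metadata lock domain and no longer contend with the lazily-unlocked data-domain lock held by the preceding write. The enum, struct, and function names below are illustrative assumptions, not GlusterFS internals; the actual change lives in AFR's transaction lock setup (see the review at http://review.gluster.org/15641).

```c
#define _FILE_OFFSET_BITS 64   /* 64-bit off_t so LLONG_MAX - 1 fits */
#include <fcntl.h>
#include <limits.h>
#include <stdio.h>

/* Illustrative transaction types; GlusterFS AFR defines its own enum. */
typedef enum {
        TXN_DATA,       /* writev and other data-modifying ops      */
        TXN_METADATA    /* fxattrop, setattr and other metadata ops */
} txn_type_t;

/*
 * Choose the lock byte range for a transaction.  Only data transactions
 * lock the whole file; metadata transactions keep their own range (the
 * LLONG_MAX - 1 start offset used for the metadata lock domain), so a
 * post-write fxattrop no longer degrades into a blocking lock against the
 * still-held data-domain write lock.
 */
static void
set_lock_range(struct flock *lock, txn_type_t type, off_t start, off_t len)
{
        if (type == TXN_DATA) {
                lock->l_start = 0;
                lock->l_len   = 0;      /* 0 == lock to end of file */
        } else {
                lock->l_start = start;
                lock->l_len   = len;
        }
        lock->l_type   = F_WRLCK;
        lock->l_whence = SEEK_SET;
}

int
main(void)
{
        struct flock data_lock = {0}, meta_lock = {0};

        set_lock_range(&data_lock, TXN_DATA, 0, 131072);
        set_lock_range(&meta_lock, TXN_METADATA, LLONG_MAX - 1, 1);

        printf("data txn lock: start=%lld len=%lld (full file)\n",
               (long long)data_lock.l_start, (long long)data_lock.l_len);
        printf("meta txn lock: start=%lld len=%lld (metadata domain)\n",
               (long long)meta_lock.l_start, (long long)meta_lock.l_len);
        return 0;
}
```

The program only prints the two ranges; the point it demonstrates is that after the fix the full-file range is conditional on the transaction type instead of being applied to every operation that follows a write.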