Bug 1258905
| Summary: | Sharding - read/write performance improvements for VM workload | | |
|---|---|---|---|
| Product: | [Community] GlusterFS | Reporter: | Krutika Dhananjay <kdhananj> |
| Component: | sharding | Assignee: | Krutika Dhananjay <kdhananj> |
| Status: | CLOSED CURRENTRELEASE | QA Contact: | bugs <bugs> |
| Severity: | high | Docs Contact: | |
| Priority: | high | | |
| Version: | mainline | CC: | bugs, pcuzner, sabose |
| Target Milestone: | --- | Keywords: | Triaged |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | glusterfs-3.8rc2 | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | | |
| : | 1261716 (view as bug list) | Environment: | |
| Last Closed: | 2016-06-16 13:34:51 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 1261716 | | |
Description
Krutika Dhananjay
2015-09-01 13:53:27 UTC
Description of problem:

Paul Cuzner, in his testing of sharding in a hyperconverged environment, noted a 3x write latency and a 2x read latency.

One network operation that can be eliminated, among the many things the shard translator does in every WRITEV and READV fop, is the LOOKUP done on the zeroth shard to fetch the size and block_count xattrs. Since the VM workload is a single-writer use case, and the client that wrote to a file is always the one that is going to read it, the size and block-count xattrs could be cached (and kept up to date) in the inode ctx of the main file, thereby saving the need for the extra LOOKUP every time.

The other place where a network fop can be avoided is XATTROP, if/when there is a WRITEV that changes neither the file size nor the block count.

Krutika Dhananjay (in reply to comment #0):
Forgot to add that the credit for this idea above goes to Pranith Kumar K. -Krutika
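The two optimizations described above can be sketched as follows. This is a minimal illustration in Python, not the actual shard translator (which is C code inside GlusterFS); the names `InodeCtx`, `read_size`, and `writev_update` are hypothetical, and the network operations are stand-in callables.

```python
class InodeCtx:
    """Hypothetical per-inode context caching the zeroth shard's
    size and block-count xattrs on the client that writes the file."""
    def __init__(self):
        self.size = None
        self.block_count = None

def read_size(ctx, lookup_zeroth_shard):
    """Return (size, block_count), issuing the network LOOKUP on the
    zeroth shard only on a cache miss instead of on every READV/WRITEV."""
    if ctx.size is None:
        ctx.size, ctx.block_count = lookup_zeroth_shard()  # one network round trip
    return ctx.size, ctx.block_count

def writev_update(ctx, xattrop, new_size, new_block_count):
    """After a WRITEV, skip the network XATTROP entirely when the write
    changed neither the file size nor the block count."""
    if (new_size, new_block_count) == (ctx.size, ctx.block_count):
        return  # overwrite within existing blocks: no xattr update needed
    xattrop(new_size - ctx.size, new_block_count - ctx.block_count)
    ctx.size, ctx.block_count = new_size, new_block_count
```

The single-writer assumption is what makes this safe: since only one client ever mutates the file, its cached copy of the xattrs cannot go stale behind its back.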
This sounds great, but I have to ask about shared vdisks - for example, RHEV supports vdisk sharing across VMs. Typically this would mean that the disk is only ever online to one VM at a time - but I wanted to make sure that this use case is considered.

REVIEW: http://review.gluster.org/12126 (features/shard: Performance improvements in IO path) posted (#1) for review on master by Krutika Dhananjay (kdhananj)
REVIEW: http://review.gluster.org/12126 (features/shard: Performance improvements in IO path) posted (#2) for review on master by Krutika Dhananjay (kdhananj)
REVIEW: http://review.gluster.org/12138 (features/shard: Performance improvements in IO path - Part 2) posted (#1) for review on master by Krutika Dhananjay (kdhananj)
REVIEW: http://review.gluster.org/12138 (features/shard: Performance improvements in IO path - Part 2) posted (#2) for review on master by Krutika Dhananjay (kdhananj)
REVIEW: http://review.gluster.org/12126 (features/shard: Performance improvements in IO path) posted (#3) for review on master by Krutika Dhananjay (kdhananj)
REVIEW: http://review.gluster.org/12138 (features/shard: Performance improvements in IO path - Part 2) posted (#3) for review on master by Krutika Dhananjay (kdhananj)
REVIEW: http://review.gluster.org/12126 (features/shard: Performance improvements in IO path) posted (#4) for review on master by Krutika Dhananjay (kdhananj)
REVIEW: http://review.gluster.org/12138 (features/shard: Performance improvements in IO path - Part 2) posted (#4) for review on master by Krutika Dhananjay (kdhananj)
REVIEW: http://review.gluster.org/12126 (features/shard: Performance improvements in IO path) posted (#5) for review on master by Krutika Dhananjay (kdhananj)

Patches merged.

This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.8.0, please open a new bug report.

glusterfs-3.8.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://blog.gluster.org/2016/06/glusterfs-3-8-released/
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user