+++ This bug was initially created as a clone of Bug #1436739 +++

Description of problem:

As per Sanjay Rao's inputs, there was a performance drop in the random-reads fio workload when run through VMs hosted on sharded volumes. The volume profile indicated a big difference between the number of lookups sent by FUSE and the number of lookups received by the individual bricks. Through code reading, it was found that there is a performance bug in shard which was causing the translator to trigger an unusually high number of lookups for cache invalidation even when there was no modification to the file.

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Additional info:

--- Additional comment from Worker Ant on 2017-03-28 10:23:33 EDT ---

REVIEW: https://review.gluster.org/16961 (features/shard: Pass the correct iatt for cache invalidation) posted (#1) for review on master by Krutika Dhananjay (kdhananj)

--- Additional comment from Worker Ant on 2017-03-30 01:48:38 EDT ---

REVIEW: https://review.gluster.org/16961 (features/shard: Pass the correct iatt for cache invalidation) posted (#2) for review on master by Krutika Dhananjay (kdhananj)
REVIEW: https://review.gluster.org/16968 (features/shard: Pass the correct iatt for cache invalidation) posted (#1) for review on release-3.8 by Krutika Dhananjay (kdhananj)
COMMIT: https://review.gluster.org/16968 committed in release-3.8 by jiffin tony Thottan (jthottan)

------

commit 7920d7f3879ca5971d4e7ba569534934bfa676e8
Author: Krutika Dhananjay <kdhananj>
Date:   Tue Mar 28 19:26:41 2017 +0530

    features/shard: Pass the correct iatt for cache invalidation

    Backport of: https://review.gluster.org/16961

    This fixes a performance issue in shard which was causing the
    translator to trigger an unusually high number of lookups for cache
    invalidation even when there was no modification to the file.

    In shard_common_stat_cbk(), it is local->prebuf that contains the
    aggregated size and block count, as opposed to @buf, which only holds
    the attributes of the physical copy of the base shard. Passing @buf
    for inode_ctx invalidation would always set refresh to true, since
    the file size in the inode ctx holds the aggregated size and would
    never equal @buf->ia_size. This was leading to every write/read being
    preceded by a lookup on the base shard even when the file underwent
    no modification.

    Change-Id: I85940b4b33e77b98e97e277d880ab35b1496c89a
    BUG: 1437330
    Signed-off-by: Krutika Dhananjay <kdhananj>
    Reviewed-on: https://review.gluster.org/16968
    Smoke: Gluster Build System <jenkins.org>
    Reviewed-by: Pranith Kumar Karampuri <pkarampu>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.org>
    Reviewed-by: jiffin tony Thottan <jthottan>
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.8.11, please open a new bug report.

glusterfs-3.8.11 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/packaging/2017-April/000289.html
[2] https://www.gluster.org/pipermail/gluster-users/