Description of problem:
As per Sanjay Rao's inputs, there was a performance drop in a random-read fio workload when run through VMs hosted on sharded volumes. The volume profile indicated a big difference between the number of lookups sent by FUSE and the number of lookups received by the individual bricks. Through code reading, it was found that there is a performance bug in shard which was causing the translator to trigger an unusually high number of lookups for cache invalidation even when there was no modification to the file.

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Additional info:
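To illustrate the failure mode, here is a minimal standalone C model of the size comparison that drives cache invalidation. The struct and function names are simplified stand-ins, not the actual GlusterFS symbols: the point is that the inode ctx caches the *aggregated* file size, so comparing it against an iatt that describes only the base shard always mismatches and forces a refresh lookup.

    /* model-invalidation.c: standalone sketch, not actual shard xlator code */
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    struct iatt { uint64_t ia_size; };                 /* attributes from a stat/lookup */
    struct inode_ctx { uint64_t size; bool refresh; }; /* caches the aggregated size */

    /* Mark the cache for refresh when the incoming size differs from the cached one. */
    static void
    inode_ctx_invalidate (struct inode_ctx *ctx, const struct iatt *buf)
    {
            if (ctx->size != buf->ia_size)
                    ctx->refresh = true;    /* forces a lookup on the base shard */
    }

    int
    main (void)
    {
            struct inode_ctx ctx = { .size = 4096, .refresh = false };
            struct iatt base_shard = { .ia_size = 512 };   /* physical base shard only */
            struct iatt aggregated = { .ia_size = 4096 };  /* size across all shards */

            inode_ctx_invalidate (&ctx, &base_shard);      /* the bug: sizes never match */
            printf ("with base-shard iatt:  refresh = %d\n", ctx.refresh);

            ctx.refresh = false;
            inode_ctx_invalidate (&ctx, &aggregated);      /* the fix: no spurious refresh */
            printf ("with aggregated iatt: refresh = %d\n", ctx.refresh);
            return 0;
    }

With the base-shard iatt, refresh is set on every callback, which is why each read/write was preceded by a lookup even on an unmodified file.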
REVIEW: https://review.gluster.org/16961 (features/shard: Pass the correct iatt for cache invalidation) posted (#1) for review on master by Krutika Dhananjay (kdhananj)
REVIEW: https://review.gluster.org/16961 (features/shard: Pass the correct iatt for cache invalidation) posted (#2) for review on master by Krutika Dhananjay (kdhananj)
COMMIT: https://review.gluster.org/16961 committed in master by Pranith Kumar Karampuri (pkarampu)
------
commit dd5ada1f11d76b4c55c7c55d23718617f11a6c12
Author: Krutika Dhananjay <kdhananj>
Date:   Tue Mar 28 19:26:41 2017 +0530

    features/shard: Pass the correct iatt for cache invalidation

    This fixes a performance issue with shard which was causing the
    translator to trigger an unusually high number of lookups for cache
    invalidation even when there was no modification to the file.

    In shard_common_stat_cbk(), it is local->prebuf that contains the
    aggregated size and block count, as opposed to buf, which only holds
    the attributes for the physical copy of the base shard. Passing buf
    for inode_ctx invalidation would always set refresh to true since the
    file size in the inode ctx contains the aggregated size and would
    never be the same as @buf->ia_size. This was leading to every
    write/read being preceded by a lookup on the base shard even when the
    file underwent no modification.

    Change-Id: Ib0349291d2d01f3782d6d0bdd90c6db5e0609210
    BUG: 1436739
    Signed-off-by: Krutika Dhananjay <kdhananj>
    Reviewed-on: https://review.gluster.org/16961
    NetBSD-regression: NetBSD Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.org>
    Smoke: Gluster Build System <jenkins.org>
    Reviewed-by: Pranith Kumar Karampuri <pkarampu>
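In effect, the change reduces to passing the aggregated iatt into the invalidation helper. A paraphrased sketch of the call site in shard_common_stat_cbk(), reconstructed from the commit message above (the helper name shard_inode_ctx_invalidate and the exact arguments are assumptions, not copied from the patch):

    /* before: buf describes only the physical base shard, so the
     * size comparison in the ctx always mismatches */
    shard_inode_ctx_invalidate (inode, this, buf);

    /* after: local->prebuf holds the aggregated size and block count */
    shard_inode_ctx_invalidate (inode, this, &local->prebuf);

With the aggregated size in hand, the ctx comparison only mismatches when the file has actually changed, so reads and writes are no longer preceded by a redundant lookup on the base shard.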
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.11.0, please open a new bug report.

glusterfs-3.11.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/announce/2017-May/000073.html
[2] https://www.gluster.org/pipermail/gluster-users/