+++ This bug was initially created as a clone of Bug #1335818 +++

Description of problem:

Revert this patch for two reasons:
1) It introduces high fop latencies.
2) Even with the patch, there is no true O_DIRECT behavior, since the workaround in the patch does not reduce the caching done in the kernel's page cache.

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Additional info:

--- Additional comment from Vijay Bellur on 2016-05-13 05:54:26 EDT ---

REVIEW: http://review.gluster.org/14328 (Revert "features/shard: Make o-direct writes work with sharding") posted (#1) for review on master by Krutika Dhananjay (kdhananj)
REVIEW: http://review.gluster.org/14330 (Revert "features/shard: Make o-direct writes work with sharding") posted (#1) for review on release-3.8 by Krutika Dhananjay (kdhananj)
COMMIT: http://review.gluster.org/14330 committed in release-3.8 by Niels de Vos (ndevos)

------

commit c9016f1430701518725d8c20f8019bfba1644466
Author: Krutika Dhananjay <kdhananj>
Date:   Fri May 13 15:18:22 2016 +0530

    Revert "features/shard: Make o-direct writes work with sharding"

    Backport of: http://review.gluster.org/#/c/14328/

    This reverts commit c272c71391cea9db817f4e7e38cfc25a7cff8bd5.

    This is for two reasons:
    1) It introduces high fop latencies.
    2) Even with the patch, there is no true O_DIRECT behavior, since the
       workaround in the patch does not reduce the caching done in the
       kernel's page cache as far as writes on anonymous fds associated
       with individual shards are concerned.

    Change-Id: I4137816a8bff9f0f77d42041a2d17e63dff82b5d
    BUG: 1335822
    Signed-off-by: Krutika Dhananjay <kdhananj>
    Reviewed-on: http://review.gluster.org/14330
    NetBSD-regression: NetBSD Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.com>
    Smoke: Gluster Build System <jenkins.com>
    Reviewed-by: Pranith Kumar Karampuri <pkarampu>
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.8.0, please open a new bug report.

glusterfs-3.8.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://blog.gluster.org/2016/06/glusterfs-3-8-released/
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user