Description of problem:

While running iozone on a distributed-replicated volume with sharding enabled, reads start to fail with EBADFD at some point. The issue is not seen when md-cache is disabled. After loading the trace translator above and below md-cache and rerunning the test, it turned out that shard_fsync_cbk() is not returning the aggregated size of the file to the layers above. md-cache then caches this incorrect size and serves it in subsequent operations to the application, leading to the failure.

[root@dhcp35-215 ~]# gluster volume info

Volume Name: dis-rep
Type: Distributed-Replicate
Volume ID: e2f66579-06c4-4e88-b825-003211f68d6b
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: kdhananjay:/bricks/1
Brick2: kdhananjay:/bricks/2
Brick3: kdhananjay:/bricks/3
Brick4: kdhananjay:/bricks/4
Options Reconfigured:
performance.strict-write-ordering: on
features.shard: on
performance.readdir-ahead: on

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Additional info:
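For context, a minimal sketch of the pre-fix callback path in the shard translator, written against the standard GlusterFS fop-callback signature; it illustrates the data flow described above and is not a quote of the source. The postbuf that reaches md-cache is the iatt of the base shard alone, so its ia_size understates the real file size:

/* Illustrative sketch only (assumes the usual xlator headers from the
 * glusterfs source tree); see shard.c for the real implementation. */
#include "xlator.h"
#include "shard.h"

int32_t
shard_fsync_cbk(call_frame_t *frame, void *cookie, xlator_t *this,
                int32_t op_ret, int32_t op_errno, struct iatt *prebuf,
                struct iatt *postbuf, dict_t *xdata)
{
    /* Pre-fix behaviour: the iatt returned by the child for the base
     * shard is unwound as-is. postbuf->ia_size therefore reflects only
     * block 0, not the aggregated file size the shard xlator tracks,
     * and md-cache above caches and serves that understated size. */
    SHARD_STACK_UNWIND(fsync, frame, op_ret, op_errno, prebuf, postbuf,
                       xdata);
    return 0;
}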
REVIEW: http://review.gluster.org/12759 (features/shard: Set ctime to 0 in fsync callback) posted (#1) for review on master by Krutika Dhananjay (kdhananj)
REVIEW: http://review.gluster.org/12759 (features/shard: Set ctime to 0 in fsync callback) posted (#2) for review on master by Krutika Dhananjay (kdhananj)
COMMIT: http://review.gluster.org/12759 committed in master by Pranith Kumar Karampuri (pkarampu)
------
commit 6c95d17dabf979c51905956409560b6bbdae0eb7
Author: Krutika Dhananjay <kdhananj>
Date:   Thu Nov 26 13:59:30 2015 +0530

    features/shard: Set ctime to 0 in fsync callback

    ... to indicate to md-cache that it should not be caching file
    attributes.

    Change-Id: Iaef9bf7fec8008ca47d682b4b15984f26421bcd6
    BUG: 1285660
    Signed-off-by: Krutika Dhananjay <kdhananj>
    Reviewed-on: http://review.gluster.org/12759
    Tested-by: NetBSD Build System <jenkins.org>
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Pranith Kumar Karampuri <pkarampu>
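In essence, the patch zeroes ia_ctime in the unwound postbuf; md-cache declines to cache attribute sets whose ctime is zero, so the understated size never sticks in the cache. A hedged sketch of the idea (the exact hunk is in the review link above; the names follow the same assumptions as the earlier sketch):

#include "xlator.h"
#include "shard.h"

int32_t
shard_fsync_cbk(call_frame_t *frame, void *cookie, xlator_t *this,
                int32_t op_ret, int32_t op_errno, struct iatt *prebuf,
                struct iatt *postbuf, dict_t *xdata)
{
    if (op_ret >= 0 && postbuf)
        /* ctime == 0 tells md-cache not to cache these attributes, so
         * it keeps fetching fresh (aggregated) attributes from below
         * instead of serving the base-shard size. */
        postbuf->ia_ctime = 0;

    SHARD_STACK_UNWIND(fsync, frame, op_ret, op_errno, prebuf, postbuf,
                       xdata);
    return 0;
}

Zeroing ctime is a cheap way to mark the attributes as unreliable without having to look up and fill in the aggregated file size on every fsync.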
This bug is getting closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.8.0, please open a new bug report. glusterfs-3.8.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://blog.gluster.org/2016/06/glusterfs-3-8-released/
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user