Red Hat Bugzilla – Bug 1285660
sharding - reads fail on sharded volume while running iozone
Last modified: 2016-06-16 09:47:06 EDT
Description of problem:
While running iozone on a distributed-replicated volume with sharding enabled, reads start to fail with EBADFD at some point. The issue is not seen when md-cache is disabled. After loading the trace translator above and below md-cache and rerunning the test, it became clear that shard_fsync_cbk() is not returning the aggregated size of the file to the layers above; md-cache caches this incorrect size and serves it in subsequent operations to the application, leading to the failure.
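For context, a sharded file's real size is the aggregate across all of its shards, while the base shard on disk holds at most one shard-block of data. The following is a self-contained model of the failure (simplified stand-in types and made-up sizes, not GlusterFS source): if the fsync reply carries only the base shard's own iatt, md-cache caches a size far smaller than the file, and later operations are served against that stale size.

/* Model of the bug, not GlusterFS source: struct iatt and the
 * md-cache behaviour are simplified stand-ins. */
#include <stdio.h>
#include <stdint.h>

struct iatt {
    uint64_t ia_size;   /* file size in bytes */
    uint64_t ia_ctime;  /* change time */
};

/* md-cache model: remember the last iatt seen and answer stat()
 * from that cache. */
static struct iatt cached;
static int have_cache;

static void mdc_cache(const struct iatt *buf) {
    cached = *buf;
    have_cache = 1;
}

int main(void) {
    uint64_t shard_block_size = 4ULL * 1024 * 1024;      /* 4MB shards */
    uint64_t aggregated_size  = 3 * shard_block_size + 12345;

    /* Buggy path: shard_fsync_cbk() unwinds the iatt of the base
     * shard alone, whose size is at most one shard block. */
    struct iatt fsync_reply = { .ia_size  = shard_block_size,
                                .ia_ctime = 1448526570 };
    mdc_cache(&fsync_reply);

    /* A later stat() is answered from md-cache: the application now
     * believes the file is one shard long, and reads beyond that
     * point fail. */
    if (have_cache)
        printf("stat from cache: size=%llu (real size %llu)\n",
               (unsigned long long)cached.ia_size,
               (unsigned long long)aggregated_size);
    return 0;
}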
[root@dhcp35-215 ~]# gluster volume info
Volume Name: dis-rep
Volume ID: e2f66579-06c4-4e88-b825-003211f68d6b
Number of Bricks: 2 x 2 = 4
REVIEW: http://review.gluster.org/12759 (features/shard: Set ctime to 0 in fsync callback) posted (#1) for review on master by Krutika Dhananjay (email@example.com)
REVIEW: http://review.gluster.org/12759 (features/shard: Set ctime to 0 in fsync callback) posted (#2) for review on master by Krutika Dhananjay (firstname.lastname@example.org)
COMMIT: http://review.gluster.org/12759 committed in master by Pranith Kumar Karampuri (email@example.com)
Author: Krutika Dhananjay <firstname.lastname@example.org>
Date: Thu Nov 26 13:59:30 2015 +0530
features/shard: Set ctime to 0 in fsync callback
... to indicate to md-cache that it should not be caching these attributes
Signed-off-by: Krutika Dhananjay <email@example.com>
Tested-by: NetBSD Build System <firstname.lastname@example.org>
Tested-by: Gluster Build System <email@example.com>
Reviewed-by: Pranith Kumar Karampuri <firstname.lastname@example.org>
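The fix itself is minimal: in the fsync callback, shard zeroes ia_ctime in the iatt it unwinds, and md-cache treats an iatt with zero ctime as not cacheable. Below is a self-contained sketch of that idea (simplified types and a modeled md-cache check, not the actual code in xlators/features/shard/src/shard.c):

#include <stdio.h>
#include <stdint.h>

struct iatt {
    uint64_t ia_size;
    uint64_t ia_ctime;
};

static struct iatt cached;
static int have_cache;

/* md-cache model: an iatt with zero ctime is treated as "do not
 * cache" (this matches the intent of the patch; the real check
 * lives in the md-cache translator). */
static void mdc_cache(const struct iatt *buf) {
    if (buf->ia_ctime == 0)
        return;
    cached = *buf;
    have_cache = 1;
}

/* shard fsync callback model: after the fix, the ctime in the
 * reply is zeroed before unwinding. */
static void shard_fsync_cbk(struct iatt *postbuf) {
    postbuf->ia_ctime = 0;   /* tell md-cache not to cache this iatt */
    mdc_cache(postbuf);      /* stand-in for unwinding to md-cache */
}

int main(void) {
    struct iatt fsync_reply = { .ia_size  = 4ULL * 1024 * 1024,
                                .ia_ctime = 1448526570 };
    shard_fsync_cbk(&fsync_reply);
    printf("md-cache cached the fsync iatt: %s\n",
           have_cache ? "yes" : "no"); /* prints "no" after the fix */
    return 0;
}

The effect is that attributes from fsync replies on sharded files are simply never cached, the expectation being that a later lookup returns the correct aggregated size.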
This bug is being closed because a release that should address the reported issue has been made available. If the problem is still not fixed with glusterfs-3.8.0, please open a new bug report.
glusterfs-3.8.0 has been announced on the Gluster mailing lists; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list and the update infrastructure for your distribution.