Bug 1285762 - sharding - reads fail on sharded volume while running iozone
Status: CLOSED CURRENTRELEASE
Product: GlusterFS
Classification: Community
Component: sharding
Version: 3.7.6
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Assigned To: Krutika Dhananjay
bugs@gluster.org
Keywords: Triaged
Depends On: 1285660
Blocks:
Reported: 2015-11-26 07:03 EST by Krutika Dhananjay
Modified: 2016-04-19 03:49 EDT
CC List: 1 user

See Also:
Fixed In Version: glusterfs-3.7.7
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of: 1285660
Environment:
Last Closed: 2016-04-19 03:49:12 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Krutika Dhananjay 2015-11-26 07:03:25 EST
+++ This bug was initially created as a clone of Bug #1285660 +++

Description of problem:

While running iozone on a distributed-replicated volume with sharding enabled, reads start to fail with EBADFD at some point. The issue is not seen when md-cache is disabled. After loading the trace translator above and below md-cache and rerunning the test, it turned out that shard_fsync_cbk() was not returning the aggregated size of the file to the layers above; md-cache would then cache this incorrect size and serve it to the application in subsequent operations, leading to the failure.
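
For context, here is a hypothetical sketch in C of the caching behaviour involved (the struct and function names below are illustrative, not the actual md-cache code): md-cache stores the attributes returned by fops such as fsync in its inode context and answers later stat/lookup calls from that cache, and it treats attributes carrying a ctime of 0 as uncacheable, which is the property the fix posted in the reviews below relies on.

/* Illustrative only -- hypothetical names, not the md-cache source. */
#include <time.h>

struct iatt_sketch {
        unsigned long long ia_size;   /* file size as returned by the fop */
        unsigned long long ia_ctime;  /* change time; 0 => "do not cache" */
};

struct mdc_sketch {
        struct iatt_sketch cached;     /* last attributes seen            */
        time_t             cached_at;  /* when they were cached           */
        int                valid;      /* serve stat() from cache if set  */
};

/* Called from fop callbacks: cache the returned attributes unless the
 * lower translator marked them uncacheable by zeroing ia_ctime. */
static void
mdc_sketch_update (struct mdc_sketch *mdc, const struct iatt_sketch *iatt)
{
        if (!iatt || iatt->ia_ctime == 0) {
                mdc->valid = 0;        /* fall back to the real stat */
                return;
        }

        mdc->cached    = *iatt;
        mdc->cached_at = time (NULL);
        mdc->valid     = 1;
}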

[root@dhcp35-215 ~]# gluster volume info
 
Volume Name: dis-rep
Type: Distributed-Replicate
Volume ID: e2f66579-06c4-4e88-b825-003211f68d6b
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: kdhananjay:/bricks/1
Brick2: kdhananjay:/bricks/2
Brick3: kdhananjay:/bricks/3
Brick4: kdhananjay:/bricks/4
Options Reconfigured:
performance.strict-write-ordering: on
features.shard: on
performance.readdir-ahead: on


Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:

--- Additional comment from Vijay Bellur on 2015-11-26 03:31:51 EST ---

REVIEW: http://review.gluster.org/12759 (features/shard: Set ctime to 0 in fsync callback) posted (#1) for review on master by Krutika Dhananjay (kdhananj@redhat.com)
Comment 1 Vijay Bellur 2015-11-26 07:05:05 EST
REVIEW: http://review.gluster.org/12765 (features/shard: Set ctime to 0 in fsync callback) posted (#1) for review on release-3.7 by Krutika Dhananjay (kdhananj@redhat.com)
Comment 2 Vijay Bellur 2015-11-26 22:58:43 EST
REVIEW: http://review.gluster.org/12765 (features/shard: Set ctime to 0 in fsync callback) posted (#2) for review on release-3.7 by Krutika Dhananjay (kdhananj@redhat.com)
Comment 3 Vijay Bellur 2015-11-30 11:26:37 EST
COMMIT: http://review.gluster.org/12765 committed in release-3.7 by Pranith Kumar Karampuri (pkarampu@redhat.com) 
------
commit 21e1782d2b8adc5f668c10a52613e45b627e9bb2
Author: Krutika Dhananjay <kdhananj@redhat.com>
Date:   Thu Nov 26 13:59:30 2015 +0530

    features/shard: Set ctime to 0 in fsync callback
    
            Backport of: http://review.gluster.org/#/c/12759/
    
    ... to indicate to md-cache that it should not be caching
    file attributes.
    
    Change-Id: I95c94779caa26fe972aaccf6c4400278e2404267
    BUG: 1285762
    Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
    Reviewed-on: http://review.gluster.org/12765
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
    Tested-by: NetBSD Build System <jenkins@build.gluster.org>
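
For reference, a minimal sketch of the approach the commit message describes (an illustration using the standard GlusterFS fsync callback signature, not the actual patch; it assumes the glusterfs xlator headers are on the include path): the shard translator zeroes ia_ctime in the iatts it unwinds from fsync, so md-cache does not cache and later serve the un-aggregated file size.

/* Sketch only -- see http://review.gluster.org/12765 for the real change. */
#include "xlator.h"

int
shard_fsync_cbk_sketch (call_frame_t *frame, void *cookie, xlator_t *this,
                        int32_t op_ret, int32_t op_errno,
                        struct iatt *prebuf, struct iatt *postbuf,
                        dict_t *xdata)
{
        if (op_ret >= 0) {
                /* A ctime of 0 tells md-cache not to cache these
                 * attributes, so it cannot later serve the size of the
                 * base file alone (without the shards) to the client. */
                prebuf->ia_ctime  = 0;
                postbuf->ia_ctime = 0;
        }

        STACK_UNWIND_STRICT (fsync, frame, op_ret, op_errno,
                             prebuf, postbuf, xdata);
        return 0;
}
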
Comment 4 Kaushal 2016-04-19 03:49:12 EDT
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.7.7, please open a new bug report.

glusterfs-3.7.7 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://www.gluster.org/pipermail/gluster-users/2016-February/025292.html
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user
