Bug 1557932 - Shard replicate volumes don't use eager-lock effectively
Summary: Shard replicate volumes don't use eager-lock effectively
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: replicate
Version: mainline
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: bugs@gluster.org
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2018-03-19 09:52 UTC by Pranith Kumar K
Modified: 2018-06-20 18:02 UTC
CC List: 2 users

Fixed In Version: glusterfs-v4.1.0
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-06-20 18:02:26 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Pranith Kumar K 2018-03-19 09:52:21 UTC
Description of problem:
    Problem:
    When dd runs on a sharded replicate volume, all writes to the shards go
    through anonymous fds (anon-fds). When the writes do not arrive quickly
    enough, the old anon-fd is closed and a new fd is created to serve the new
    writes. open-fd-count is decremented only after the fd is closed as part of
    fd_destroy(), so while one fd is still on its way to being closed, a new fd
    is already created, and for this short period it appears as though multiple
    fds are open on the file. AFR concludes that another application has opened
    the same file and switches off eager-lock, which adds extra latency.

    Fix:
    Introduce a separate counter, active-fd-count, whose life cycle starts at
    fd_bind() and ends just before fd_destroy().
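
    For illustration, here is a minimal standalone C sketch (not GlusterFS
    source; the counters and the timeline are modelled only for this example)
    of why the old open-fd check gives a false positive during anon-fd churn
    while a count that ends just before fd_destroy() does not:

    /* Standalone sketch (not GlusterFS code): models the window in which
     * open-fd-count momentarily reads 2 on a file with a single user, and
     * why a count dropped just before fd_destroy() stays at 1. */
    #include <stdio.h>

    int main(void)
    {
        int open_fd_count   = 0;  /* decremented only inside fd_destroy()      */
        int active_fd_count = 0;  /* decremented just before fd_destroy() runs */

        /* dd writes to a shard: an anon-fd is created and bound. */
        open_fd_count++; active_fd_count++;
        printf("anon-fd bound:     open=%d active=%d\n", open_fd_count, active_fd_count);

        /* Writes pause; the old anon-fd is released.  Its "active" life ends
         * now, but fd_destroy() (and so open_fd_count--) only happens later. */
        active_fd_count--;
        printf("old fd released:   open=%d active=%d\n", open_fd_count, active_fd_count);

        /* A new write arrives before the old fd is destroyed: new anon-fd. */
        open_fd_count++; active_fd_count++;
        printf("new anon-fd bound: open=%d active=%d\n", open_fd_count, active_fd_count);

        /* This is the window where the old check sees open=2, assumes a second
         * application opened the file, and switches off eager-lock. */
        printf("check window:      open=%d (false positive) active=%d (correct)\n",
               open_fd_count, active_fd_count);

        /* fd_destroy() finally runs for the old fd. */
        open_fd_count--;
        printf("old fd destroyed:  open=%d active=%d\n", open_fd_count, active_fd_count);
        return 0;
    }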


Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:

Comment 1 Worker Ant 2018-03-19 10:25:05 UTC
REVIEW: https://review.gluster.org/19740 (storage/posix: Add active-fd-count option in gluster) posted (#1) for review on master by Pranith Kumar Karampuri

Comment 2 Worker Ant 2018-03-19 10:26:00 UTC
REVIEW: https://review.gluster.org/19741 (cluster/afr: Switch to active-fd-count for open-fd checks) posted (#1) for review on master by Pranith Kumar Karampuri

Comment 3 Worker Ant 2018-03-21 08:40:20 UTC
COMMIT: https://review.gluster.org/19740 committed in master by "Pranith Kumar Karampuri" <pkarampu> with a commit message- storage/posix: Add active-fd-count option in gluster

Problem:
When dd runs on a sharded replicate volume, all writes to the shards go
through anonymous fds (anon-fds). When the writes do not arrive quickly
enough, the old anon-fd is closed and a new fd is created to serve the new
writes. open-fd-count is decremented only after the fd is closed as part of
fd_destroy(), so while one fd is still on its way to being closed, a new fd
is already created, and for this short period it appears as though multiple
fds are open on the file. AFR concludes that another application has opened
the same file and switches off eager-lock, which adds extra latency.

Fix:
Introduce a separate counter, active-fd-count, whose life cycle starts at
fd_bind() and ends just before fd_destroy().

BUG: 1557932
Change-Id: I2e221f6030feeedf29fbb3bd6554673b8a5b9c94
Signed-off-by: Pranith Kumar K <pkarampu>

Comment 4 Worker Ant 2018-03-21 08:43:06 UTC
COMMIT: https://review.gluster.org/19741 committed in master by "Pranith Kumar Karampuri" <pkarampu> with a commit message- cluster/afr: Switch to active-fd-count for open-fd checks

BUG: 1557932
Change-Id: I3783e41b3812267bc10c0d05d062a31396ce135b
Signed-off-by: Pranith Kumar K <pkarampu>
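
For illustration, here is a minimal standalone C sketch of the changed
criterion (not AFR source; it only assumes, as the commits above describe,
that AFR can consult the file's active-fd count instead of its open-fd count
when deciding whether to keep eager-lock):

/* Illustrative sketch only, not AFR code. */
#include <stdio.h>

/* Old criterion: any fd still waiting for fd_destroy() disables eager-lock. */
static int eager_lock_ok_old(int open_fd_count)
{
    return open_fd_count <= 1;
}

/* New criterion: only fds that are bound and not yet released are counted,
 * so the short-lived overlap during anon-fd churn no longer disables it. */
static int eager_lock_ok_new(int active_fd_count)
{
    return active_fd_count <= 1;
}

int main(void)
{
    /* Snapshot from the churn window in the bug description: the old anon-fd
     * is released but not yet destroyed while a new one is already bound. */
    int open_fd_count = 2, active_fd_count = 1;

    printf("old check keeps eager-lock: %d\n", eager_lock_ok_old(open_fd_count));
    printf("new check keeps eager-lock: %d\n", eager_lock_ok_new(active_fd_count));
    return 0;
}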

Comment 5 Shyamsundar 2018-06-20 18:02:26 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-v4.1.0, please open a new bug report.

glusterfs-v4.1.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/announce/2018-June/000102.html
[2] https://www.gluster.org/pipermail/gluster-users/

