Bug 1635975 - Writes taking very long time leading to system hogging
Summary: Writes taking very long time leading to system hogging
Alias: None
Product: GlusterFS
Classification: Community
Component: replicate
Version: 5
Hardware: Unspecified
OS: Unspecified
Target Milestone: ---
Assignee: bugs@gluster.org
QA Contact:
Depends On: 1591208 1625961 1635979
Blocks: 1635977
Reported: 2018-10-04 06:51 UTC by Pranith Kumar K
Modified: 2018-10-23 15:19 UTC (History)
6 users

Fixed In Version: glusterfs-5.0
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1625961
: 1635977
Last Closed: 2018-10-23 15:19:19 UTC
Regression: ---
Mount Type: ---
Documentation: ---
Verified Versions:


Comment 1 Worker Ant 2018-10-04 07:09:28 UTC
REVIEW: https://review.gluster.org/21337 (cluster/afr: Batch writes in same lock even when multiple fds are open) posted (#1) for review on release-5 by Pranith Kumar Karampuri

Comment 2 Worker Ant 2018-10-05 14:37:31 UTC
COMMIT: https://review.gluster.org/21337 committed in release-5 by "Shyamsundar Ranganathan" <srangana@redhat.com> with a commit message- cluster/afr: Batch writes in same lock even when multiple fds are open

When eager-lock is disabled because multiple fds are open and application
writes arrive on conflicting regions, the number of locks grows very
fast, and almost all CPU time is spent locking and unlocking, traversing
huge queues in the locks xlator to grant locks.

Reduce the number of locks in transit by bundling the writes into the
same lock, and disable delayed piggy-back when we learn that multiple
fds are open on the file. This reduces the size of the queues in the
locks xlator and also reduces the number of network calls.

Please note that this problem can still happen if eager-lock is
disabled, as the writes will then not be bundled in the same lock.

fixes bz#1635975
Change-Id: I8fd1cf229aed54ce5abd4e6226351a039924dd91
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
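The batching idea in the commit message above can be illustrated with a minimal sketch. This is not GlusterFS code (the real change lives in the C afr translator); it is a hypothetical Python model that only shows why bundling many writes under one lock acquisition cuts the number of lock/unlock operations, which is the cost the commit describes.

```python
import threading

class PerWriteLocker:
    """Slow path: acquire and release the lock for every single write."""
    def __init__(self):
        self.lock = threading.Lock()
        self.lock_ops = 0   # number of lock acquisitions performed
        self.log = []

    def write(self, data):
        with self.lock:     # one acquisition per write
            self.lock_ops += 1
            self.log.append(data)

class BatchingLocker:
    """Batched path: queue writes, then flush them under one acquisition."""
    def __init__(self):
        self.lock = threading.Lock()
        self.lock_ops = 0
        self.queue = []
        self.log = []

    def write(self, data):
        self.queue.append(data)

    def flush(self):
        with self.lock:     # one acquisition for the whole batch
            self.lock_ops += 1
            self.log.extend(self.queue)
            self.queue.clear()

per_write = PerWriteLocker()
batched = BatchingLocker()
for i in range(100):
    per_write.write(i)
    batched.write(i)
batched.flush()

print(per_write.lock_ops)  # 100 acquisitions
print(batched.lock_ops)    # 1 acquisition
```

Both variants end up with the same 100 writes applied; the batched variant simply pays the locking cost once per batch instead of once per write, which is the effect the patch aims for when eager-lock would otherwise be disabled.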

Comment 3 Shyamsundar 2018-10-23 15:19:19 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-5.0, please open a new bug report.

glusterfs-5.0 has been announced on the Gluster mailing lists [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://lists.gluster.org/pipermail/announce/2018-October/000115.html
[2] https://www.gluster.org/pipermail/gluster-users/
