Bug 1519598 - Reduce lock contention on protocol client manipulating fd
Summary: Reduce lock contention on protocol client manipulating fd
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: protocol
Version: mainline
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: bugs@gluster.org
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2017-12-01 01:30 UTC by Zhang Huan
Modified: 2018-03-15 11:22 UTC

Fixed In Version: glusterfs-4.0.0
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-03-15 11:22:12 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:




Links
System            ID       Private  Priority     Status  Summary                                    Last Updated
Red Hat Bugzilla  1518582  0        unspecified  CLOSED  Reduce lock contention on fdtable lookup  2021-02-22 00:41:40 UTC

Internal Links: 1518582

Description Zhang Huan 2017-12-01 01:30:23 UTC
Description of problem:

The current use of a per-client mutex to protect fdctx introduces lock contention when there are dozens of file operations active.

We need to break the lock down and use a finer-grained spinlock to reduce contention.
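
For illustration, here is a minimal sketch of the locking-granularity change being proposed. The type and field names below are invented for this sketch and are not the actual protocol/client structures; only the locking pattern matters.

/* Sketch only: hypothetical types showing the granularity change,
 * not the real GlusterFS protocol/client structures. */

#include <pthread.h>
#include <stdint.h>

/* Before: one per-client mutex guards the contexts of every fd opened
 * through that client, so concurrent fops on different fds all
 * serialize on the same lock. */
struct client_coarse {
        pthread_mutex_t lock;      /* shared by all fd contexts */
        /* ... list/table of fd contexts, all protected by 'lock' ... */
};

/* After: each fd context carries its own spinlock.  Fops touching
 * different fds take different locks, and the critical sections are
 * short enough that spinning is cheaper than sleeping on a mutex. */
struct fdctx_fine {
        pthread_spinlock_t lock;   /* protects only this fd's state */
        int64_t remote_fd;
        /* ... other per-fd state ... */
};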

Version-Release number of selected component (if applicable):
mainline


Comment 1 Worker Ant 2017-12-01 01:34:41 UTC
REVIEW: https://review.gluster.org/18907 (protocol/client: reduce lock contention) posted (#1) for review on master by Zhang Huan

Comment 2 Zhang Huan 2017-12-01 02:02:40 UTC
This issue focuses on client-side lock contention when manipulating fds. It is a follow-up to the contention fix for the server-side fdtable lookup (https://bugzilla.redhat.com/show_bug.cgi?id=1518582).

These two patches work better together. Here are a few numbers with (w/) and without (w/o) the two patches. Tested by fio with a concurrency of 8 via a FUSE mount. Only one brick is configured, and a ramdisk is used as storage. Client- and server-side event threads are all set to 4.

              seqread 1MB   seqwrite 1MB   randread 4KB   randwrite 4KB
w/o patch 1       883           505            13.5           22.3
w/o patch 2       954           503            14.3           25.1
w/ patch 1        973           526            14.2           25.4
w/ patch 2        974           531            14.4           25.4

Comment 3 Worker Ant 2017-12-26 05:06:27 UTC
COMMIT: https://review.gluster.org/18907 committed in master by "Zhang Huan" <zhanghuan> with a commit message: protocol/client: reduce lock contention

Current use of a per-client mutex to protect fdctx introduces lock
contention when there are dozens of file operations active.

Use a finer-grained spinlock to reduce contention, and move retrieving
fdctx out of the lock.

Change-Id: Iea3e2eb481e76a5d73a582ba81529180c5b88248
BUG: 1519598
Signed-off-by: Zhang Huan <zhanghuan>
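
For illustration only, here is a hedged before/after sketch of the "retrieving fdctx out of lock" part of the change. fdctx_lookup(), fd_table, and the field names are invented for the example and are not the real protocol/client symbols; the point is only where the lookup sits relative to the critical section.

#include <pthread.h>
#include <stdint.h>

typedef struct {
        pthread_spinlock_t lock;   /* per-fd spinlock; assumed to be
                                    * initialized with pthread_spin_init()
                                    * when the fd context is created */
        int64_t remote_fd;
} fdctx_t;

/* Trivial stand-in for the real fd-to-context mapping. */
#define MAX_FDS 1024
static fdctx_t fd_table[MAX_FDS];

static fdctx_t *
fdctx_lookup (int fd)
{
        return &fd_table[fd % MAX_FDS];
}

/* Before: lookup and read both happen under the client-wide mutex,
 * so every fop contends on that one lock. */
static int64_t
get_remote_fd_old (pthread_mutex_t *client_lock, int fd)
{
        int64_t rfd;

        pthread_mutex_lock (client_lock);
        rfd = fdctx_lookup (fd)->remote_fd;   /* lookup inside the lock */
        pthread_mutex_unlock (client_lock);

        return rfd;
}

/* After: the lookup is done outside any lock; only the short read of
 * per-fd state is protected, and only by that fd's own spinlock. */
static int64_t
get_remote_fd_new (int fd)
{
        fdctx_t *ctx = fdctx_lookup (fd);   /* retrieval moved out of the lock */
        int64_t rfd;

        pthread_spin_lock (&ctx->lock);
        rfd = ctx->remote_fd;
        pthread_spin_unlock (&ctx->lock);

        return rfd;
}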

Comment 4 Shyamsundar 2018-03-15 11:22:12 UTC
This bug is being closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-4.0.0, please open a new bug report.

glusterfs-4.0.0 has been announced on the Gluster mailing lists [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/announce/2018-March/000092.html
[2] https://www.gluster.org/pipermail/gluster-users/

