Description of problem:
The current use of a per-client mutex to protect fdctx introduces lock contention when there are dozens of file operations active. The lock needs to be broken down, using a finer-grained spinlock to reduce contention.

Version-Release number of selected component (if applicable):
mainline
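For illustration, a minimal sketch of the kind of change described above, written with plain pthreads; the structure and function names (clnt_conf, clnt_fd_ctx, clnt_fdctx_lookup) are simplified assumptions, not the actual protocol/client code:

#include <pthread.h>
#include <stdint.h>
#include <stddef.h>

/* Hypothetical, simplified stand-ins for the client translator's
 * per-connection state and per-fd context. */
struct clnt_fd_ctx {
        uint64_t            remote_fd;
        struct clnt_fd_ctx *next;
};

struct clnt_conf {
        /* Before: pthread_mutex_t lock; one big mutex serializing
         * every fdctx operation.  After: a spinlock, which is cheaper
         * for the short critical sections that only touch the fdctx
         * list. */
        pthread_spinlock_t  fd_lock;
        struct clnt_fd_ctx *fdctx_list;
};

static int
clnt_conf_init(struct clnt_conf *conf)
{
        conf->fdctx_list = NULL;
        return pthread_spin_init(&conf->fd_lock, PTHREAD_PROCESS_PRIVATE);
}

/* Look up the context of a given remote fd.  The list walk is short,
 * so spinning is cheaper than putting contending threads to sleep. */
static struct clnt_fd_ctx *
clnt_fdctx_lookup(struct clnt_conf *conf, uint64_t remote_fd)
{
        struct clnt_fd_ctx *ctx;

        pthread_spin_lock(&conf->fd_lock);
        for (ctx = conf->fdctx_list; ctx != NULL; ctx = ctx->next) {
                if (ctx->remote_fd == remote_fd)
                        break;
        }
        pthread_spin_unlock(&conf->fd_lock);

        return ctx;
}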
REVIEW: https://review.gluster.org/18907 (protocol/client: reduce lock contention) posted (#1) for review on master by Zhang Huan
This issue focuses on client-side lock contention when manipulating fds. It is a follow-up to the contention fix for the server-side fdtable lookup (https://bugzilla.redhat.com/show_bug.cgi?id=1518582); the two patches work best together. Here are a few numbers with and without the two patches. Tested by fio with 8 concurrent jobs via a FUSE mount. Only one brick is configured, and a ramdisk is used as storage. Client- and server-side event threads are all set to 4.

               seqread 1MB   seqwrite 1MB   randread 4KB   randwrite 4KB
w/o patch 1    883           505            13.5           22.3
w/o patch 2    954           503            14.3           25.1
w/ patch 1     973           526            14.2           25.4
w/ patch 2     974           531            14.4           25.4
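For reference, a fio invocation along these lines would approximate the sequential-read case of the setup above (the mount path, file size, and runtime are assumptions; only the 8-way concurrency and the block sizes come from this comment):

fio --name=seqread --directory=/mnt/glusterfs --rw=read --bs=1M \
    --numjobs=8 --size=1G --runtime=60 --time_based --group_reporting

The random cases would swap in --rw=randread or --rw=randwrite with --bs=4k.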
COMMIT: https://review.gluster.org/18907 committed in master by "Zhang Huan" <zhanghuan> with a commit message:

protocol/client: reduce lock contention

Current use of a per-client mutex to protect fdctx introduces lock contention when there are dozens of file operations active. Use a finer-grained spinlock to reduce contention, and move retrieving fdctx out of the lock.

Change-Id: Iea3e2eb481e76a5d73a582ba81529180c5b88248
BUG: 1519598
Signed-off-by: Zhang Huan <zhanghuan>
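To make the "retrieving fdctx out of the lock" part concrete, a minimal sketch of a fop path, reusing the hypothetical clnt_conf/clnt_fd_ctx structures from the sketch above (the function name and the shape of the update are illustrative assumptions, not the actual patch):

/* Before: the fdctx lookup and the subsequent state update both ran
 * under the per-client lock, so every fop serialized on it.  After:
 * the (read-mostly) lookup runs first in its own short critical
 * section, and the lock is held only for the update itself. */
static int
client_fop_prepare(struct clnt_conf *conf, uint64_t remote_fd)
{
        struct clnt_fd_ctx *ctx;

        /* Retrieval moved out of the fop's critical section. */
        ctx = clnt_fdctx_lookup(conf, remote_fd);
        if (ctx == NULL)
                return -1;

        pthread_spin_lock(&conf->fd_lock);
        /* ... update only the fields that genuinely need the lock ... */
        pthread_spin_unlock(&conf->fd_lock);

        return 0;
}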
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-4.0.0, please open a new bug report.

glusterfs-4.0.0 has been announced on the Gluster mailing lists [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/announce/2018-March/000092.html
[2] https://www.gluster.org/pipermail/gluster-users/