+++ This bug was initially created as a clone of Bug #1549497 +++

Description of problem:

A bottleneck was introduced by upstream patch https://review.gluster.org/17105. To make bricks more responsive to client pings in brick-mux mode, the IO/Management fops to the brick were separated from the ping responder code by designating a single rpcsvc request handler thread per gluster program. This created a bottleneck: multiple event threads queued requests onto a queue that was read by a single request handler thread, which then dispatched the requests to the IO thread pool. To alleviate this bottleneck, the rpcsvc request handler threads need to be scaled. As part of the brick-mux implementation, the event handler threads were already scaled so that there is one event thread serving requests per brick.

I'd like to propose patch https://review.gluster.org/19337 for RHGS 3.4.0 to alleviate the bottleneck of a single gluster program thread interfacing with the IO thread pool. Patch https://review.gluster.org/19337 continues the brick-mux work on per-program request processing by scaling the gluster program threads to match the event handler threads. The gluster program threads take over the hand-off of requests to the IO handler threads, so the event handler threads can return to reading RPC requests instead of being tied up with request hand-off at the brick.
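The queuing model described above can be sketched as follows. This is a minimal illustration in Python, not Gluster's actual C implementation; the `ProgramQueue` class and its method names are hypothetical. Event threads enqueue decoded RPC requests onto a per-program queue, and one or more request handler threads drain that queue and hand the requests off to an IO thread pool. With `num_handlers=1` you get the pre-patch bottleneck; setting `num_handlers` equal to the number of event threads models the proposed fix.

```python
import queue
import threading
from concurrent.futures import ThreadPoolExecutor

class ProgramQueue:
    """Hypothetical model: one request queue per gluster program,
    drained by a configurable number of handler threads."""

    def __init__(self, num_handlers, io_pool):
        self.requests = queue.Queue()
        self.io_pool = io_pool
        # num_handlers == 1 models the pre-patch bottleneck;
        # num_handlers == number of event threads models the patch.
        self.handlers = [
            threading.Thread(target=self._drain, daemon=True)
            for _ in range(num_handlers)
        ]
        for t in self.handlers:
            t.start()

    def enqueue(self, req):
        """Called by event threads; returns immediately so the event
        thread can go back to reading RPC requests."""
        self.requests.put(req)

    def _drain(self):
        while True:
            req = self.requests.get()
            if req is None:            # shutdown sentinel
                break
            # The handler thread only hands the request off to the IO
            # pool; more handlers mean faster hand-off, not more IO.
            self.io_pool.submit(req)

    def shutdown(self):
        for _ in self.handlers:        # one sentinel per handler
            self.requests.put(None)
        for t in self.handlers:
            t.join()

# Usage: 100 requests flow through 4 handler threads into a 4-worker IO pool.
processed = []
lock = threading.Lock()

def work(i):
    with lock:
        processed.append(i)

io_pool = ThreadPoolExecutor(max_workers=4)
pq = ProgramQueue(num_handlers=4, io_pool=io_pool)
for i in range(100):
    pq.enqueue(lambda i=i: work(i))
pq.shutdown()
io_pool.shutdown(wait=True)
print(len(processed))  # 100
```

Note that scaling the handler threads only removes the single-reader choke point between the event threads and the IO pool; the actual fop work still happens in the IO thread pool, exactly as the patch description says.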
REVIEW: https://review.gluster.org/19660 (rpcsvc: scale rpcsvc_request_handler threads) posted (#1) for review on release-4.0 by Milind Changire
COMMIT: https://review.gluster.org/19660 committed in release-4.0 by "Milind Changire" <mchangir> with a commit message:

rpcsvc: scale rpcsvc_request_handler threads

Scale rpcsvc_request_handler threads to match the scaling of event handler threads.

Please refer to https://bugzilla.redhat.com/show_bug.cgi?id=1467614#c51 for a discussion about why we need multi-threaded rpcsvc request handlers.

mainline:
> Reviewed-on: https://review.gluster.org/19337
> Reviewed-by: Raghavendra G <rgowdapp>
> Signed-off-by: Milind Changire <mchangir>
(cherry picked from commit 7d641313f46789ec0a7ba0cc04f504724c780855)

Change-Id: Ib6838fb8b928e15602a3d36fd66b7ba08999430b
BUG: 1550946
Signed-off-by: Milind Changire <mchangir>
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-4.0.1, please open a new bug report.

glusterfs-4.0.1 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/announce/2018-March/000093.html
[2] https://www.gluster.org/pipermail/gluster-users/