+++ This bug was initially created as a clone of Bug #1242421 +++

Description of problem:
glusterd uses a single epoll worker thread for sending/receiving notifications. gluster commands which communicate with many daemons, e.g. snapshot-create on a volume with a large number of snapshots, can take longer than ping-timeout seconds and time out. With the introduction of multi-threaded epoll support in glusterd, we could add more epoll workers to reduce latency and avoid spurious disconnects due to ping-timeout.

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Additional info:

--- Additional comment from Anand Avati on 2015-07-13 06:04:35 EDT ---

REVIEW: http://review.gluster.org/11630 (glusterd: use 2 epoll worker threads by default) posted (#2) for review on master by Krishnan Parthasarathi (kparthas)
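For context, a minimal sketch of the multi-threaded epoll pattern the patch relies on: several worker threads blocking in epoll_wait() on a shared epoll descriptor, so notifications from many daemons are serviced in parallel rather than queueing behind one worker. This is illustrative C only, not glusterd's actual event layer; NUM_WORKERS, handle_event() and the registration comment are assumptions.

    /* Illustrative sketch (not glusterd's actual event layer): several
     * worker threads block in epoll_wait() on a shared epoll fd, so
     * notifications from many daemons are handled in parallel instead
     * of queueing behind a single worker. handle_event() is a placeholder. */
    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/epoll.h>
    #include <unistd.h>

    #define NUM_WORKERS 2          /* the patch raises the default from 1 to 2 */
    #define MAX_EVENTS  16

    static void handle_event(struct epoll_event *ev)
    {
        /* Placeholder for RPC notification processing. */
        printf("fd %d ready (events 0x%x)\n", ev->data.fd, ev->events);
    }

    static void *worker(void *arg)
    {
        int epfd = *(int *)arg;
        struct epoll_event events[MAX_EVENTS];

        for (;;) {
            int n = epoll_wait(epfd, events, MAX_EVENTS, -1);
            if (n < 0)
                continue;          /* e.g. interrupted by a signal */
            for (int i = 0; i < n; i++)
                handle_event(&events[i]);
        }
        return NULL;
    }

    int main(void)
    {
        int epfd = epoll_create1(0);
        if (epfd < 0) {
            perror("epoll_create1");
            return EXIT_FAILURE;
        }

        pthread_t tids[NUM_WORKERS];
        for (int i = 0; i < NUM_WORKERS; i++)
            pthread_create(&tids[i], NULL, worker, &epfd);

        /* Connections to the daemons would be registered here with
         * epoll_ctl(epfd, EPOLL_CTL_ADD, fd, &ev). */
        pause();
        return 0;
    }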
REVIEW: http://review.gluster.org/11991 (event: add dispatched flag to know if event_dispatch was called) posted (#2) for review on release-3.7 by Krishnan Parthasarathi (kparthas)
http://review.gluster.org/11630 introduced multi-threaded epoll for glusterd on Linux systems. That patch ends up creating an incorrect number of worker threads rather than the intended default, because the multi-threaded epoll subsystem doesn't allow the number of threads to be reconfigured before event_dispatch is called. The fix in comment#1 addresses that.

tl;dr With http://review.gluster.org/11630 and http://review.gluster.org/11991, glusterd is capable of listening for epoll(7) events from 2 threads by default.
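The ordering constraint described above can be illustrated with a hedged sketch of the "dispatched flag" idea: a request to change the worker count made before dispatch is only recorded, and the final value is honoured when dispatch starts. The names used here (event_pool, event_reconfigure_threads, start_workers) are hypothetical, not the actual libglusterfs event API.

    /* Hedged sketch of the "dispatched flag" idea; all identifiers are
     * hypothetical and stand in for the real libglusterfs event code. */
    #include <pthread.h>
    #include <stdbool.h>

    struct event_pool {
        pthread_mutex_t mutex;
        int  configured_threads;   /* requested worker count            */
        bool dispatched;           /* has event_dispatch() been called? */
    };

    /* Spawns/adjusts workers up to configured_threads; stubbed here. */
    static void start_workers(struct event_pool *pool) { (void)pool; }

    int event_reconfigure_threads(struct event_pool *pool, int value)
    {
        pthread_mutex_lock(&pool->mutex);
        pool->configured_threads = value;
        /* Before dispatch only record the value; spawning now would be
         * lost.  After dispatch, resize the worker pool immediately. */
        if (pool->dispatched)
            start_workers(pool);
        pthread_mutex_unlock(&pool->mutex);
        return 0;
    }

    int event_dispatch(struct event_pool *pool)
    {
        pthread_mutex_lock(&pool->mutex);
        pool->dispatched = true;
        start_workers(pool);       /* uses the last configured value */
        pthread_mutex_unlock(&pool->mutex);
        /* ... the calling thread also enters its epoll_wait() loop ... */
        return 0;
    }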
http://review.gluster.org/11991 doesn't fix the problem mentioned in comment#2. It has been abandoned in favour of http://review.gluster.org/12004.
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.7.4, please open a new bug report.

glusterfs-3.7.4 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/12496
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user