Description of problem:
The return values of socket pollin and pollout handling for RPC messages are not handled correctly. One major problem is that a socket EAGAIN error is returned all the way back to the dispatch handler, confusing users with error messages like:

[2018-12-29 07:31:41.772310] E [MSGID: 101191] [event-epoll.c:674:event_dispatch_epoll_worker] 0-epoll: Failed to dispatch handler
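For context, a minimal sketch (with assumed names, not actual GlusterFS code) of how EAGAIN from a non-blocking socket read is expected to be treated: it means "no data yet, wait for the next POLLIN", not a failure worth bubbling up to the epoll dispatcher:

    #include <errno.h>
    #include <unistd.h>

    typedef enum { READ_DATA, READ_AGAIN, READ_EOF, READ_ERROR } read_status_t;

    /* Hypothetical helper, not a GlusterFS API: classify the result of
     * a non-blocking read so EAGAIN is never confused with a real error. */
    static read_status_t
    try_read(int fd, void *buf, size_t len, ssize_t *nread)
    {
        *nread = read(fd, buf, len);

        if (*nread > 0)
            return READ_DATA;      /* got data; account for *nread bytes */
        if (*nread == 0)
            return READ_EOF;       /* peer closed the connection */
        if (errno == EAGAIN || errno == EWOULDBLOCK)
            return READ_AGAIN;     /* transient: re-arm POLLIN and retry */
        return READ_ERROR;         /* genuine socket failure */
    }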
REVIEW: https://review.gluster.org/22043 (socket: fix counting of socket total_bytes_read and total_bytes_write) posted (#1) for review on master by Zhang Huan
REVIEW: https://review.gluster.org/22044 (socket: fix issue when socket write return with EAGAIN) posted (#1) for review on master by Zhang Huan
REVIEW: https://review.gluster.org/22046 (socket: don't pass return value from protocol handler to event handler) posted (#1) for review on master by Zhang Huan
REVIEW: https://review.gluster.org/22045 (socket: fix issue when socket read return with EAGAIN) posted (#1) for review on master by Zhang Huan
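The first two patches above concern what happens when a non-blocking write transfers only part of the payload, or nothing at all. A rough sketch of the intended accounting, using a hypothetical helper but the total_bytes_write counter named in the patch title:

    #include <errno.h>
    #include <unistd.h>

    /* Hypothetical helper, not a GlusterFS API: count only the bytes
     * that were actually written, and treat EAGAIN as "wrote nothing,
     * retry on the next POLLOUT" rather than as an error. */
    static ssize_t
    write_some(int fd, const void *buf, size_t len,
               unsigned long long *total_bytes_write)
    {
        ssize_t ret = write(fd, buf, len);

        if (ret < 0) {
            if (errno == EAGAIN || errno == EWOULDBLOCK)
                return 0;          /* nothing written; not an error */
            return -1;             /* genuine socket failure */
        }

        *total_bytes_write += (unsigned long long)ret;
        return ret;                /* may be short; caller retries the rest */
    }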
REVIEW: https://review.gluster.org/22043 (socket: fix counting of socket total_bytes_read and total_bytes_write) merged (#2) on master by Raghavendra G
REVIEW: https://review.gluster.org/22044 (socket: fix issue when socket write return with EAGAIN) merged (#2) on master by Raghavendra G
REVIEW: https://review.gluster.org/22045 (socket: fix issue when socket read return with EAGAIN) merged (#2) on master by Amar Tumballi
REVIEW: https://review.gluster.org/22046 (socket: don't pass return value from protocol handler to event handler) merged (#3) on master by Amar Tumballi
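The last patch ("don't pass return value from protocol handler to event handler") addresses the log message quoted in the description. A hedged sketch of the idea, with all function names assumed for illustration: the socket event handler deals with protocol-level failures itself and always reports success to the event layer, so event_dispatch_epoll_worker no longer logs "Failed to dispatch handler" for transient conditions:

    /* Stubs standing in for the protocol layer; names are assumptions. */
    static int protocol_handle_pollin(void *priv);
    static int protocol_handle_pollout(void *priv);
    static void protocol_teardown(void *priv);

    static int
    socket_event_handler(void *priv, int poll_in, int poll_out)
    {
        int ret = 0;

        if (poll_in)
            ret = protocol_handle_pollin(priv);
        if (ret >= 0 && poll_out)
            ret = protocol_handle_pollout(priv);

        if (ret < 0)
            /* A real failure is handled here, e.g. by tearing down the
             * connection, instead of being returned to the dispatcher. */
            protocol_teardown(priv);

        return 0; /* the event layer never sees a protocol-level error */
    }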
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report.

glusterfs-6.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html
[2] https://www.gluster.org/pipermail/gluster-users/