+++ This bug was initially created as a clone of Bug #1693575 +++
Description of problem:
With https://review.gluster.org/#/c/glusterfs/+/21783/, we made changes to offload the processing of upcall notifications to a synctask so as not to block epoll threads. However, that change did not fully achieve its purpose.
In "glfs_cbk_upcall_data", when "synctask_new1" is called without a callback defined, the calling thread waits on synctask_join until the syncfn finishes. As a result, even with those changes, epoll threads remain blocked until the upcalls are processed.
Hence the right fix now is to define a callback function for that synctask ("glfs_cbk_upcall_syncop") so that epoll/notify threads are unblocked completely and upcall processing can happen in parallel on synctask threads.
Version-Release number of selected component (if applicable):
Steps to Reproduce:
--- Additional comment from Soumya Koduri on 2019-03-28 09:28:58 UTC ---
Users have reported the nfs-ganesha process getting stuck because of this: https://github.com/nfs-ganesha/nfs-ganesha/issues/335
--- Additional comment from Worker Ant on 2019-03-28 09:34:11 UTC ---
REVIEW: https://review.gluster.org/22436 (gfapi: Unblock epoll thread for upcall processing) posted (#1) for review on master by soumya k
--- Additional comment from Worker Ant on 2019-03-29 07:25:10 UTC ---
REVIEW: https://review.gluster.org/22436 (gfapi: Unblock epoll thread for upcall processing) merged (#4) on master by Amar Tumballi
REVIEW: https://review.gluster.org/22460 (gfapi: Unblock epoll thread for upcall processing) posted (#1) for review on release-5 by soumya k
REVIEW: https://review.gluster.org/22460 (gfapi: Unblock epoll thread for upcall processing) merged (#1) on release-5 by soumya k
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-5.6, please open a new bug report.
glusterfs-5.6 has been announced on the Gluster mailing lists, and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list and the update infrastructure for your distribution.