+++ This bug was initially created as a clone of Bug #1733166 +++

Description of problem:

While running parallel I/O involving many files on an nfs-ganesha mount, we hit the deadlock below in the nfs-ganesha process.

epoll thread:
....glfs_cbk_upcall_data->upcall_syncop_args_init->glfs_h_poll_cache_invalidation->glfs_h_find_handle->priv_glfs_active_subvol->glfs_lock (waiting on lock)

I/O thread:
...glfs_h_stat->glfs_resolve_inode->__glfs_resolve_inode (glfs_lock acquired at this point) -> ...->glfs_refresh_inode_safe->syncop_lookup

To summarize: the I/O threads that acquired glfs_lock are waiting for the epoll threads to deliver a response, whereas the epoll threads are waiting for the I/O threads to release the lock. A similar issue was identified earlier (bug 1693575). There could be other issues at different layers, depending on how client xlators choose to process these callbacks.

The correct way to avoid or fix these issues is to re-design the upcall model to use separate sockets for callback communication instead of sharing the same epoll threads. A GitHub issue has been raised for that: https://github.com/gluster/glusterfs/issues/697

Since that may take a while, this BZ is raised to provide a workaround fix in the gfapi layer for now.

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Additional info:

--- Additional comment from Worker Ant on 2019-07-25 10:09:58 UTC ---

REVIEW: https://review.gluster.org/23107 (gfapi: Fix deadlock while processing upcall) posted (#1) for review on release-6 by soumya k

--- Additional comment from Worker Ant on 2019-07-25 10:16:57 UTC ---

REVIEW: https://review.gluster.org/23108 (gfapi: Fix deadlock while processing upcall) posted (#1) for review on master by soumya k
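The cycle is easier to see in miniature. Below is a self-contained C sketch of the same hold-and-wait shape; all names (fake_glfs_lock, io_thread, epoll_thread, reply_cond) are invented for illustration and are NOT the actual gfapi symbols. The program intentionally hangs, the way the nfs-ganesha process did:

/*
 * Illustrative only: one thread holds a lock while waiting for a
 * "reply" that can only be delivered by the thread blocked on that
 * same lock. Build with: cc deadlock.c -lpthread
 */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t fake_glfs_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t reply_mutex    = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  reply_cond     = PTHREAD_COND_INITIALIZER;
static int reply_arrived = 0;

/* Plays the role of the I/O thread: holds fake_glfs_lock while it
 * waits for a reply (as syncop_lookup does) that only the event
 * thread can deliver. */
static void *io_thread(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&fake_glfs_lock);
    printf("io: holding fake_glfs_lock, waiting for reply...\n");

    pthread_mutex_lock(&reply_mutex);
    while (!reply_arrived)
        pthread_cond_wait(&reply_cond, &reply_mutex); /* never signaled */
    pthread_mutex_unlock(&reply_mutex);

    pthread_mutex_unlock(&fake_glfs_lock);
    return NULL;
}

/* Plays the role of the epoll thread: must take fake_glfs_lock to
 * process the upcall before it can get back to delivering replies. */
static void *epoll_thread(void *arg)
{
    (void)arg;
    sleep(1); /* let io_thread grab the lock first */
    printf("epoll: processing upcall, need fake_glfs_lock...\n");
    pthread_mutex_lock(&fake_glfs_lock); /* blocks forever */

    pthread_mutex_lock(&reply_mutex);
    reply_arrived = 1;                   /* never reached */
    pthread_cond_signal(&reply_cond);
    pthread_mutex_unlock(&reply_mutex);

    pthread_mutex_unlock(&fake_glfs_lock);
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, io_thread, NULL);
    pthread_create(&t2, NULL, epoll_thread, NULL);
    pthread_join(t1, NULL); /* never returns: hold-and-wait cycle */
    pthread_join(t2, NULL);
    return 0;
}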
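One common way to break such a cycle, and roughly the direction a gfapi-layer workaround can take, is to hand the upcall off to a separate worker so the event thread never blocks on the lock. A minimal sketch, again with invented names (on_upcall_event, process_upcall, upcall_data_t); this is not the actual patch posted for review:

/*
 * Illustrative handoff pattern: the event thread queues the work and
 * returns to the event loop immediately, so it stays free to deliver
 * the RPC replies that lock holders are waiting on.
 */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

static pthread_mutex_t fake_glfs_lock = PTHREAD_MUTEX_INITIALIZER;

typedef struct {
    char path[256]; /* stand-in for the invalidated handle */
} upcall_data_t;

/* Runs on its own thread: safe to block on fake_glfs_lock here,
 * because the event thread has already returned to its loop. */
static void *process_upcall(void *arg)
{
    upcall_data_t *data = arg;

    pthread_mutex_lock(&fake_glfs_lock);
    printf("processing cache invalidation for %s\n", data->path);
    pthread_mutex_unlock(&fake_glfs_lock);

    free(data);
    return NULL;
}

/* Called from the epoll/event thread. The buggy pattern was to take
 * the lock inline here; the workaround queues the work instead. */
static void on_upcall_event(const char *path)
{
    pthread_t worker;
    upcall_data_t *data = malloc(sizeof(*data));

    if (!data)
        return;
    snprintf(data->path, sizeof(data->path), "%s", path);

    /* Hand off instead of locking inline -- the event thread stays free. */
    pthread_create(&worker, NULL, process_upcall, data);
    pthread_detach(worker);
}

int main(void)
{
    on_upcall_event("/vol/file-42");
    sleep(1); /* give the detached worker a moment before exiting */
    return 0;
}

The essential property is that no code path running on an epoll thread ever blocks on glfs_lock; everything else (a dedicated thread per upcall, a worker pool, or a synctask) is an implementation choice.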
Verified this BZ with:

# rpm -qa | grep ganesha
nfs-ganesha-2.7.3-9.el7rhgs.x86_64
glusterfs-ganesha-6.0-17.el7rhgs.x86_64
nfs-ganesha-gluster-2.7.3-9.el7rhgs.x86_64
nfs-ganesha-debuginfo-2.7.3-9.el7rhgs.x86_64

Moving this BZ to verified state.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHEA-2019:3249