Upstream Patch: https://review.gluster.org/#/c/glusterfs/+/21579/
*** Bug 1647315 has been marked as a duplicate of this bug. ***
> 2) The patch that is going in is to address a bug that we might hit sometime
> in future, when the (related) code is supported in downstream.. is that
> correct?

Also note that we may hit this bug if the option below is enabled by the user:

Option: cluster.lock-migration
Default Value: off
Description: If enabled, this feature will migrate the posix locks associated with a file during rebalance.

Since this is a very visible Coverity-reported bug, and the process hangs if the code path is ever hit, it is recommended to have it as part of BU2. Maybe running Coverity on the downstream build and seeing that it no longer reports this is a good enough fix?

-----

** CID 1396581:  Program hangs  (LOCK)
/xlators/features/locks/src/posix.c: 2952 in pl_metalk()

*** CID 1396581:  Program hangs  (LOCK)
/xlators/features/locks/src/posix.c: 2952 in pl_metalk()
2946             gf_msg(this->name, GF_LOG_WARNING, EINVAL, 0,
2947                    "More than one meta-lock can not be granted on"
2948                    "the inode");
2949             ret = -1;
2950         }
2951     }
>>> CID 1396581: Program hangs (LOCK)
>>> "pthread_mutex_lock" locks "pl_inode->mutex" while it is locked.
2952     pthread_mutex_lock(&pl_inode->mutex);
2953
2954     if (ret == -1) {
2955         goto out;
2956     }
2957
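For context, the hang Coverity flags here is the classic self-deadlock on a non-recursive mutex: pthread_mutex_lock() is called on pl_inode->mutex while the same thread already holds it, so the thread blocks on itself forever. The sketch below is illustrative only; pl_inode_t, metalk_count, and the grant_meta_lock_* functions are simplified stand-ins, not the actual GlusterFS code or the upstream fix from the patch above. It shows the flagged pattern and the usual remedy of doing the work inside a single critical section instead of re-locking.

#include <pthread.h>
#include <stdio.h>

/* Simplified stand-in for GlusterFS's pl_inode_t; illustrative only. */
typedef struct {
    pthread_mutex_t mutex;   /* default (non-recursive) mutex */
    int metalk_count;
} pl_inode_t;

/* BUGGY shape (what CID 1396581 flags): the thread still holds
 * pl_inode->mutex from the first critical section when it calls
 * pthread_mutex_lock() again. On a default Linux mutex this blocks
 * forever (formally, relocking is undefined behavior). */
static int grant_meta_lock_buggy(pl_inode_t *pl_inode)
{
    int ret = 0;

    pthread_mutex_lock(&pl_inode->mutex);
    if (pl_inode->metalk_count > 0)
        ret = -1;            /* "More than one meta-lock ..." */
    /* mutex is NOT released here ... */

    pthread_mutex_lock(&pl_inode->mutex);  /* <- hangs: already held */
    if (ret == 0)
        pl_inode->metalk_count++;
    pthread_mutex_unlock(&pl_inode->mutex);
    return ret;
}

/* Fixed shape: take the lock exactly once and do all the work in a
 * single critical section. */
static int grant_meta_lock_fixed(pl_inode_t *pl_inode)
{
    int ret = 0;

    pthread_mutex_lock(&pl_inode->mutex);
    if (pl_inode->metalk_count > 0)
        ret = -1;
    else
        pl_inode->metalk_count++;
    pthread_mutex_unlock(&pl_inode->mutex);
    return ret;
}

int main(void)
{
    pl_inode_t inode = { PTHREAD_MUTEX_INITIALIZER, 0 };

    printf("fixed: %d\n", grant_meta_lock_fixed(&inode));  /* 0  */
    printf("fixed: %d\n", grant_meta_lock_fixed(&inode));  /* -1 */
    /* grant_meta_lock_buggy(&inode) would block forever here. */
    return 0;
}

This also illustrates why the bug is harmless until cluster.lock-migration is enabled: pl_metalk() sits on the meta-lock path that only lock migration exercises, so the double lock is never reached with the default (off) setting.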
Thank you, Susant, for the detailed explanation. Based on Comment 21, moving this BZ to Verified.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:3827