+++ This bug was initially created as a clone of Bug #1637802 +++
Description of problem:
Commit eb472d82a083883335bc494b87ea175ac43471ff in master introduced a bug where a data self-heal on a file in an arbiter volume leaves a stale inodelk behind on the bricks. As a result, any subsequent write to the file from a client can hang.
Steps to Reproduce:
1. Create a 1x(2+1) arbiter volume, fuse-mount it, and create a file.
2. Kill the arbiter brick, write to the file, then bring the arbiter back and let self-heal complete.
3. The next write to the file from the mount hangs because the inodelk request gets blocked behind the stale lock left behind by self-heal.
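For reference, a minimal reproduction sketch of the steps above, assuming a single-node test setup; the volume name "arbvol", the brick paths, the mount point and the brick PID placeholder are illustrative and not taken from this report:

  gluster volume create arbvol replica 3 arbiter 1 \
      host1:/bricks/b1 host1:/bricks/b2 host1:/bricks/b3 force
  gluster volume start arbvol
  mount -t glusterfs host1:/arbvol /mnt/arbvol

  # 1. Create a file from the fuse mount.
  dd if=/dev/urandom of=/mnt/arbvol/file bs=1M count=1

  # 2. Kill only the arbiter brick (its PID is listed by
  #    'gluster volume status arbvol'), write to the file, restart the
  #    brick and let self-heal finish.
  kill -9 <arbiter-brick-pid>
  dd if=/dev/urandom of=/mnt/arbvol/file bs=1M count=1 conv=notrunc
  gluster volume start arbvol force
  gluster volume heal arbvol
  gluster volume heal arbvol info    # repeat until no entries are listed

  # 3. On affected builds this write hangs behind the stale inodelk.
  dd if=/dev/urandom of=/mnt/arbvol/file bs=1M count=1 conv=notrunc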
Downstream bug which found the issue: BZ 1636902
--- Additional comment from Worker Ant on 2018-10-10 02:56:21 EDT ---
REVIEW: https://review.gluster.org/21380 (afr: prevent winding inodelks twice for arbiter volumes) posted (#1) for review on master by Ravishankar N
REVIEW: https://review.gluster.org/21386 (afr: prevent winding inodelks twice for arbiter volumes) posted (#1) for review on release-3.12 by Ravishankar N
COMMIT: https://review.gluster.org/21386 committed in release-3.12 by "jiffin tony Thottan" <firstname.lastname@example.org> with a commit message- afr: prevent winding inodelks twice for arbiter volumes
Backport of https://review.gluster.org/#/c/glusterfs/+/21380/
In an arbiter volume, if there is a pending data heal of a file only on the
arbiter brick, self-heal takes the inodelk twice due to a code bug but unlocks
it only once, leaving behind a stale lock on the brick. This causes
the next write to the file to hang.
Fix the code bug to take the lock only once. This bug was introduced in
master with commit eb472d82a083883335bc494b87ea175ac43471ff
Thanks to Pranith Kumar K <email@example.com> for finding the RCA.
Signed-off-by: Ravishankar N <firstname.lastname@example.org>
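A hedged note for anyone hitting this on builds without the fix (the volume name and file path below are the same illustrative ones used in the reproduction sketch above; adjust to your setup): the stale lock can be confirmed from a brick statedump, and clearing it with the CLI may serve as a temporary workaround, though upgrading to a fixed release is the proper solution.

  # Dump brick state; the lock tables in /var/run/gluster/*.dump.* on the
  # brick nodes should show a granted inodelk on the file that is never
  # released after self-heal completes.
  gluster volume statedump arbvol

  # Possible workaround (use with care): clear granted inode locks on the
  # file; the range argument follows the clear-locks syntax in the admin
  # guide and may need adjusting for your case.
  gluster volume clear-locks arbvol /file kind granted inode 0,0-0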
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.12.15, please open a new bug report.
glusterfs-3.12.15 has been announced on the Gluster mailing lists; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list and the update infrastructure for your distribution.