+++ This bug was initially created as a clone of Bug #1696599 +++

Description of problem:

Steps:

glusterd
gluster peer probe localhost.localdomain
peer probe: success. Probe on localhost not needed
gluster --mode=script --wignore volume create r3 replica 3 localhost.localdomain:/home/gfs/r3_0 localhost.localdomain:/home/gfs/r3_1 localhost.localdomain:/home/gfs/r3_2
volume create: r3: success: please start the volume to access data
gluster --mode=script volume start r3
volume start: r3: success
mkdir /mnt/r3
mkdir: cannot create directory ‘/mnt/r3’: File exists
mount -t glusterfs localhost.localdomain:/r3 /mnt/r3

First terminal:
# cd /mnt/r3
# touch abc

Attach the mount process in gdb and put a breakpoint on the function afr_lock().

From the second terminal:
# exec 200>abc
# echo abc >&200

When the breakpoint is hit, on a third terminal execute "gluster volume stop r3".
Quit gdb.
Execute "gluster volume start r3 force".
On the first terminal, execute "echo abc >&200" again; this command hangs.

Version-Release number of selected component (if applicable):

How reproducible:
Always

Actual results:

Expected results:

Additional info:

--- Additional comment from Worker Ant on 2019-04-05 08:37:54 UTC ---

REVIEW: https://review.gluster.org/22515 (cluster/afr: Remove local from owners_list on failure of lock-acquisition) posted (#1) for review on master by Pranith Kumar Karampuri

--- Additional comment from Worker Ant on 2019-04-15 06:03:00 UTC ---

REVIEW: https://review.gluster.org/22515 (cluster/afr: Remove local from owners_list on failure of lock-acquisition) merged (#6) on master by Pranith Kumar Karampuri
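The `exec 200>abc` / `echo abc >&200` pair in the steps above works because the shell keeps fd 200 open across commands, so the later write reuses the same already-open descriptor instead of re-opening the file; on the AFR volume it is this second write through the old fd that hangs after the stop/start cycle. A minimal sketch of just the fd mechanics, run against a throwaway local directory rather than the Gluster mount (the directory name is illustrative):

```shell
# Sketch of the fd mechanics behind the reproduction steps (runs
# against a throwaway local directory, NOT the Gluster mount from
# the report).
workdir=$(mktemp -d)
cd "$workdir"

exec 200>abc      # open fd 200 on the file; the shell keeps it open
echo abc >&200    # first write goes through the open descriptor
echo abc >&200    # a later write reuses the SAME open fd (no re-open);
                  # in the bug, this second write is the one that hangs
exec 200>&-       # close fd 200
cat abc           # the file now contains two "abc" lines
```

Because the descriptor is opened once and held by the shell, the second `echo` never issues a fresh open() on the mount, which is why the stale lock state on the old fd matters.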
REVIEW: https://review.gluster.org/22565 (cluster/afr: Remove local from owners_list on failure of lock-acquisition) posted (#1) for review on release-6 by Pranith Kumar Karampuri
REVIEW: https://review.gluster.org/22565 (cluster/afr: Remove local from owners_list on failure of lock-acquisition) merged (#2) on release-6 by Shyamsundar Ranganathan
This bug is being closed because a release that should address the reported issue has been made available. If the problem is still not fixed with glusterfs-6.1, please open a new bug report.

glusterfs-6.1 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://lists.gluster.org/pipermail/announce/2019-April/000124.html
[2] https://www.gluster.org/pipermail/gluster-users/