Bug 1699731 - Fops hang when inodelk fails on the first fop
Summary: Fops hang when inodelk fails on the first fop
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: replicate
Version: 6
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: bugs@gluster.org
QA Contact:
URL:
Whiteboard:
Depends On: 1696599
Blocks: 1688395 1699736
 
Reported: 2019-04-15 06:10 UTC by Pranith Kumar K
Modified: 2019-04-22 13:33 UTC (History)

Fixed In Version: glusterfs-6.1
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1696599
: 1699736 (view as bug list)
Environment:
Last Closed: 2019-04-16 11:29:33 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:




Links:
System ID: Gluster.org Gerrit 22565 | Priority: None | Status: Merged | Summary: cluster/afr: Remove local from owners_list on failure of lock-acquisition | Last Updated: 2019-04-16 11:29:32 UTC

Description Pranith Kumar K 2019-04-15 06:10:56 UTC
+++ This bug was initially created as a clone of Bug #1696599 +++

Description of problem:
Steps:
glusterd

gluster peer probe localhost.localdomain
peer probe: success. Probe on localhost not needed
gluster --mode=script --wignore volume create r3 replica 3 localhost.localdomain:/home/gfs/r3_0 localhost.localdomain:/home/gfs/r3_1 localhost.localdomain:/home/gfs/r3_2
volume create: r3: success: please start the volume to access data
gluster --mode=script volume start r3
volume start: r3: success
mkdir /mnt/r3
mkdir: cannot create directory ‘/mnt/r3’: File exists

mount -t glusterfs localhost.localdomain:/r3 /mnt/r3

First terminal:
# cd /mnt/r3
# touch abc
Attach gdb to the mount process and set a breakpoint on the function afr_lock()
From second terminal:
# exec 200>abc
# echo abc >&200
# When the breakpoint is hit, on a third terminal execute "gluster volume stop r3"
# Quit gdb
# Execute "gluster volume start r3 force"
# On the second terminal execute "echo abc >&200" again; this command hangs.
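The `exec 200>abc` / `echo abc >&200` pair in the steps above keeps one file descriptor open across commands, so the second write reuses the same open fd on the gluster mount instead of opening the file again. A minimal standalone sketch of that redirection trick (the /tmp path is illustrative, not part of the reproducer):

```shell
# Open fd 200 for writing; it stays open in this shell until closed.
exec 200>/tmp/fd-demo

# Both writes go through the same already-open descriptor.
echo first >&200
echo second >&200

# Close fd 200 explicitly.
exec 200>&-

cat /tmp/fd-demo
```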


Version-Release number of selected component (if applicable):


How reproducible:
Always

Actual results:


Expected results:


Additional info:

--- Additional comment from Worker Ant on 2019-04-05 08:37:54 UTC ---

REVIEW: https://review.gluster.org/22515 (cluster/afr: Remove local from owners_list on failure of lock-acquisition) posted (#1) for review on master by Pranith Kumar Karampuri

--- Additional comment from Worker Ant on 2019-04-15 06:03:00 UTC ---

REVIEW: https://review.gluster.org/22515 (cluster/afr: Remove local from owners_list on failure of lock-acquisition) merged (#6) on master by Pranith Kumar Karampuri
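The patch title summarizes the bug: when inodelk acquisition fails (here, because the volume was stopped), the fop's `local` was left on the lock's owners_list, so every later fop queued behind a phantom owner and hung. A hypothetical, much-simplified sketch of that pattern; the names (LockDomain, owners_list, would_hang) are illustrative and do not reflect AFR's actual code:

```python
# Illustrative sketch of "remove local from owners_list on failure of
# lock-acquisition". Not AFR code: a toy lock domain showing why leaving
# a failed acquirer on the owners list makes the next fop wait forever.

class LockDomain:
    def __init__(self):
        self.owners_list = []   # fops that hold (or appear to hold) the lock

    def acquire(self, fop, backend_up, buggy=False):
        """Try to take the lock for `fop`; return True if granted.

        On failure (e.g. all bricks down), the buggy variant forgets to
        remove `fop` from owners_list, leaving a phantom owner behind.
        """
        self.owners_list.append(fop)
        if backend_up:
            return True
        if not buggy:
            self.owners_list.remove(fop)  # the fix: clean up on failure
        return False

    def would_hang(self, fop):
        # A new fop must wait while owners_list is non-empty.
        return bool(self.owners_list)

# Buggy path: the first write's lock fails while the volume is stopped,
# but its entry survives, so the next write hangs.
dom = LockDomain()
dom.acquire("write-1", backend_up=False, buggy=True)
print(dom.would_hang("write-2"))  # True: the second write hangs

# Fixed path: the failed acquisition is removed from owners_list.
dom = LockDomain()
dom.acquire("write-1", backend_up=False, buggy=False)
print(dom.would_hang("write-2"))  # False: the second write proceeds
```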

Comment 1 Worker Ant 2019-04-15 06:20:05 UTC
REVIEW: https://review.gluster.org/22565 (cluster/afr: Remove local from owners_list on failure of lock-acquisition) posted (#1) for review on release-6 by Pranith Kumar Karampuri

Comment 2 Worker Ant 2019-04-16 11:29:33 UTC
REVIEW: https://review.gluster.org/22565 (cluster/afr: Remove local from owners_list on failure of lock-acquisition) merged (#2) on release-6 by Shyamsundar Ranganathan

Comment 3 Shyamsundar 2019-04-22 13:33:17 UTC
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-6.1, please open a new bug report.

glusterfs-6.1 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://lists.gluster.org/pipermail/announce/2019-April/000124.html
[2] https://www.gluster.org/pipermail/gluster-users/

