REVIEW: http://review.gluster.org/9374 (cluster/afr: serialize inode locks) posted (#1) for review on release-3.5 by Pranith Kumar Karampuri (pkarampu)
REVIEW: http://review.gluster.org/9374 (cluster/afr: serialize inode locks) posted (#2) for review on release-3.5 by Pranith Kumar Karampuri (pkarampu)
REVIEW: http://review.gluster.org/9374 (cluster/afr: serialize inode locks) posted (#3) for review on release-3.5 by Pranith Kumar Karampuri (pkarampu)
REVIEW: http://review.gluster.org/9374 (cluster/afr: serialize inode locks) posted (#4) for review on release-3.5 by Pranith Kumar Karampuri (pkarampu)
COMMIT: http://review.gluster.org/9374 committed in release-3.5 by Niels de Vos (ndevos)
------
commit bb8845d3bd94f94a1302bb50811be209a7253dcb
Author: Pranith Kumar K <pkarampu>
Date: Wed Dec 31 16:41:43 2014 +0530

    cluster/afr: serialize inode locks

    Backport of http://review.gluster.com/9372

    Problem:
    Afr winds inodelk calls without any order, so blocking inodelks from two
    different mounts can lead to a deadlock: mount1 gets the lock on brick-1
    and blocks on brick-2, whereas mount2 gets the lock on brick-2 and blocks
    on brick-1.

    Fix:
    Serialize the inodelks, whether they are blocking or non-blocking.
    Non-blocking locks also need to be serialized; otherwise there is a
    chance that both mounts issuing the same non-blocking inodelk end up not
    acquiring the lock on any brick.
    Ex: Mount1 and Mount2 request a full-length lock on file f1. Mount1's afr
    may acquire the partial lock on brick-1 and fail to acquire the lock on
    brick-2 because Mount2 already holds the lock on brick-2, and vice versa.
    Since both mounts only got partial locks, afr treats this as a failure to
    gain the locks and unwinds with the EAGAIN errno.

    Change-Id: I939a1d101e313a9f0abf212b94cdce1392611a5e
    BUG: 1177928
    Signed-off-by: Pranith Kumar K <pkarampu>
    Reviewed-on: http://review.gluster.org/9374
    Reviewed-by: Krutika Dhananjay <kdhananj>
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Niels de Vos <ndevos>
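The deadlock described above is the classic circular-wait pattern, and the fix amounts to imposing a fixed acquisition order. The sketch below is not GlusterFS code; it is a minimal, hypothetical illustration using Python threads, where the two mutexes stand in for the per-brick inodelks and both "mounts" take them in the same (brick-1, then brick-2) order, so no circular wait can form:

```python
import threading

brick1_lock = threading.Lock()  # stands in for the inodelk held on brick-1
brick2_lock = threading.Lock()  # stands in for the inodelk held on brick-2

def locked_region(name, results):
    # Serialized acquisition: every client takes brick-1's lock first,
    # then brick-2's. With a fixed global order, one mount simply waits
    # for the other instead of deadlocking.
    with brick1_lock:
        with brick2_lock:
            results.append(name)

results = []
mounts = [threading.Thread(target=locked_region, args=("mount%d" % i, results))
          for i in (1, 2)]
for t in mounts:
    t.start()
for t in mounts:
    t.join()
print(results)  # both mounts complete; completion order may vary
```

Had each thread taken the two locks in opposite orders (as unordered inodelk winds could), the run could hang forever with each side holding one lock and waiting on the other.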
This bug is being closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.5.4, please reopen this bug report. glusterfs-3.5.4 has been announced on the Gluster Packaging mailing list [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution. [1] http://thread.gmane.org/gmane.comp.file-systems.gluster.packaging/2 [2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user