REVIEW: http://review.gluster.org/16413 (cluster/afr: Remove backward compatibility for locks with v1) posted (#1) for review on release-3.9 by Pranith Kumar Karampuri (email@example.com)
COMMIT: http://review.gluster.org/16413 committed in release-3.9 by Pranith Kumar Karampuri (firstname.lastname@example.org)
Author: Pranith Kumar K <email@example.com>
Date: Mon Dec 5 13:20:51 2016 +0530
cluster/afr: Remove backward compatibility for locks with v1
When we have cascading locks with the same lk-owner there is a possibility of
a deadlock. One example is as follows:

Self-heal takes a lock in the data-domain for a big name (256 chars of
"aaaa...a") and starts the heal in a 3-way replication while brick-0 is
offline, so healing from brick-1 to brick-2 is in progress and the lock is
active on brick-1 and brick-2. Now brick-0 comes online and an operation
wants to take a full lock: the lock is granted on brick-0 and it waits for
the lock on brick-1. Meanwhile, as part of entry healing, self-heal takes
full locks on all the available bricks (the cascading lock with the same
lk-owner) before proceeding to heal the entry. This second lock starts
waiting on brick-0 because the other operation already has a granted lock
there. This leads to a deadlock: the operation is waiting for heal to unlock
"aaaa..." whereas heal is waiting for the operation to unlock brick-0.

Initially I thought this was happening because healing takes a lock on all
the available bricks instead of just the bricks participating in the heal,
but I later realized that the same kind of deadlock can happen if a brick
goes down after the heal starts and comes back before it completes. So the
essential problem is the cascading locks with the same lk-owner, which were
added only for backward compatibility with afr-v1 and can be safely removed
now that releases with afr-v1 are already EOL. This patch removes the
compatibility with v1 that required cascading locks with the same lk-owner.
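To make the circular wait concrete, here is a minimal standalone sketch. It
is not GlusterFS code: bricks are modeled as pthread mutexes, the client
operation and self-heal as threads, and all names are made up for
illustration. It reproduces the hold-and-wait cycle described above and
reports it instead of blocking:

    /* deadlock-sketch.c: build with `cc deadlock-sketch.c -lpthread`.
     * Mutexes stand in for per-brick entry locks; this is an analogy,
     * not the real afr/locks protocol. */
    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    static pthread_mutex_t brick[3] = {PTHREAD_MUTEX_INITIALIZER,
                                       PTHREAD_MUTEX_INITIALIZER,
                                       PTHREAD_MUTEX_INITIALIZER};

    /* Client operation: its full lock is granted on brick-0 (just back
     * online), then it blocks waiting for brick-1, which heal holds. */
    static void *client_op(void *arg)
    {
        (void)arg;
        pthread_mutex_lock(&brick[0]);
        printf("op:   holds brick-0, waiting on brick-1\n");
        pthread_mutex_lock(&brick[1]); /* blocks until heal backs off */
        printf("op:   got brick-1 after heal released its locks\n");
        pthread_mutex_unlock(&brick[1]);
        pthread_mutex_unlock(&brick[0]);
        return NULL;
    }

    int main(void)
    {
        pthread_t op;

        /* Heal already holds brick-1 and brick-2 (brick-0 was down). */
        pthread_mutex_lock(&brick[1]);
        pthread_mutex_lock(&brick[2]);

        pthread_create(&op, NULL, client_op, NULL);
        sleep(1); /* let the op get brick-0 and queue on brick-1 */

        /* The v1-compat cascading lock: heal, with the same lk-owner,
         * now also wants brick-0 while still holding brick-1/brick-2. */
        if (pthread_mutex_trylock(&brick[0]) != 0)
            printf("heal: brick-0 busy; blocking here = DEADLOCK\n");
        else
            pthread_mutex_unlock(&brick[0]);

        /* Without the cascading acquisition (what this patch removes)
         * heal finishes with the locks it already holds; releasing
         * them unblocks the client operation. */
        pthread_mutex_unlock(&brick[1]);
        pthread_mutex_unlock(&brick[2]);
        pthread_join(op, NULL);
        return 0;
    }

The sketch uses trylock only so the demo terminates: in the buggy scheme the
heal would issue a blocking lock at that point and both sides would wait on
each other forever.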
In the next version we can make the locking-scheme option a dummy and switch
completely to v2.
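For reference, the scheme is chosen today through a volume option; assuming
the full option name is cluster.locking-scheme (the AFR option this commit
refers to), the v2/granular behaviour is selected with "gluster volume set
<VOLNAME> cluster.locking-scheme granular". Once the option becomes a dummy,
both values would map to the v2 behaviour.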
>Signed-off-by: Pranith Kumar K <firstname.lastname@example.org>
>Smoke: Gluster Build System <email@example.com>
>Reviewed-by: Ravishankar N <firstname.lastname@example.org>
>NetBSD-regression: NetBSD Build System <email@example.com>
>CentOS-regression: Gluster Build System <firstname.lastname@example.org>
Signed-off-by: Pranith Kumar K <email@example.com>
Smoke: Gluster Build System <firstname.lastname@example.org>
CentOS-regression: Gluster Build System <email@example.com>
NetBSD-regression: NetBSD Build System <firstname.lastname@example.org>
This bug is being closed because a release that should address the reported issue has been made available. If the problem is still not fixed with glusterfs-3.9.1, please open a new bug report.
glusterfs-3.9.1 has been announced on the Gluster mailing lists, and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list and the update infrastructure for your distribution.