Bug 1412909 - [Arbiter] IO's Halted and heal info command hung
Summary: [Arbiter] IO's Halted and heal info command hung
Keywords:
Status: CLOSED EOL
Alias: None
Product: GlusterFS
Classification: Community
Component: arbiter
Version: 3.8
Hardware: All
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Assignee: Pranith Kumar K
QA Contact:
URL:
Whiteboard:
Depends On: 1398188 1401404 1413062
Blocks:
 
Reported: 2017-01-13 06:03 UTC by Pranith Kumar K
Modified: 2017-11-07 10:36 UTC
CC List: 8 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1401404
Environment:
Last Closed: 2017-11-07 10:36:18 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:



Comment 1 Pranith Kumar K 2017-01-13 06:05:43 UTC
When we have cascading locks with the same lk-owner, there is a possibility of a deadlock. One example is as follows:

Self-heal takes a lock in the data-domain for a big name with 256 chars of "aaaa...a" and starts a heal in a 3-way replication while brick-0 is offline, so healing from brick-1 to brick-2 is in progress. This lock is therefore active on brick-1 and brick-2. Now brick-0 comes online and an operation wants to take the full lock; the lock is granted on brick-0 and waits for the lock on brick-1. As part of entry healing, self-heal takes full locks on all the available bricks and then proceeds with healing the entry. This heal lock now starts waiting on brick-0, because the other operation already has a granted lock there. The result is a deadlock: the operation is waiting for heal to unlock "aaaa...", whereas heal is waiting for the operation to unlock brick-0.

Initially I thought this was happening because healing tries to take a lock on all the available bricks instead of just the bricks participating in the heal, but I later realized that the same kind of deadlock can happen if a brick goes down after the heal starts and comes back before it completes. So the essential problem is the cascading locks with the same lk-owner, which were added for backward compatibility with afr-v1 and can be safely removed now that versions with afr-v1 are already EOL. This patch removes the compatibility with v1 that requires cascading locks with the same lk-owner.
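
For illustration only, here is a minimal pthreads sketch (not GlusterFS code) of the AB-BA lock ordering described above. The names brick0/brick1 and the two threads standing in for the client operation and the self-heal daemon are hypothetical. Built with gcc -pthread, the program hangs at the joins, just as the IO and the heal hang against each other in this bug.

    /* Minimal sketch, NOT GlusterFS code: the two bricks are modeled
     * as mutexes, and the client operation and the self-heal daemon
     * acquire them in opposite order, as in the report above. */
    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    static pthread_mutex_t brick0 = PTHREAD_MUTEX_INITIALIZER;
    static pthread_mutex_t brick1 = PTHREAD_MUTEX_INITIALIZER;

    static void *client_op(void *arg)
    {
        (void)arg;
        pthread_mutex_lock(&brick0);   /* lock granted on brick-0 */
        printf("op:   holds brick-0, waiting for brick-1\n");
        sleep(1);                      /* let the healer grab brick-1 */
        pthread_mutex_lock(&brick1);   /* blocks forever: heal holds it */
        pthread_mutex_unlock(&brick1);
        pthread_mutex_unlock(&brick0);
        return NULL;
    }

    static void *self_heal(void *arg)
    {
        (void)arg;
        pthread_mutex_lock(&brick1);   /* lock active on brick-1 */
        printf("heal: holds brick-1, waiting for brick-0\n");
        sleep(1);                      /* let the op grab brick-0 */
        pthread_mutex_lock(&brick0);   /* blocks forever: op holds it */
        pthread_mutex_unlock(&brick0);
        pthread_mutex_unlock(&brick1);
        return NULL;
    }

    int main(void)
    {
        pthread_t op, heal;
        pthread_create(&op, NULL, client_op, NULL);
        pthread_create(&heal, NULL, self_heal, NULL);
        pthread_join(op, NULL);        /* never returns: deadlock */
        pthread_join(heal, NULL);
        return 0;
    }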

Comment 2 Worker Ant 2017-01-13 06:25:30 UTC
REVIEW: http://review.gluster.org/16389 (cluster/afr: Remove backward compatibility for locks with v1) posted (#1) for review on release-3.8 by Pranith Kumar Karampuri (pkarampu@redhat.com)

Comment 3 Niels de Vos 2017-11-07 10:36:18 UTC
This bug is getting closed because the 3.8 version is marked End-Of-Life. There will be no further updates to this version. Please open a new bug against a version that still receives bugfixes if you are still facing this issue in a more current release.

