Bug 1638159 - data-self-heal in arbiter volume results in stale locks.
Summary: data-self-heal in arbiter volume results in stale locks.
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: replicate
Version: 5
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: Ravishankar N
QA Contact:
URL:
Whiteboard:
Depends On: 1637802
Blocks: 1636902 1637953 1637989 1638026
 
Reported: 2018-10-11 00:53 UTC by Ravishankar N
Modified: 2018-10-23 15:20 UTC (History)
CC List: 1 user

Fixed In Version: glusterfs-5.0
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1637802
Environment:
Last Closed: 2018-10-23 15:20:19 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Ravishankar N 2018-10-11 00:53:18 UTC
+++ This bug was initially created as a clone of Bug #1637802 +++

Description of problem:
commit eb472d82a083883335bc494b87ea175ac43471ff in master introduced a bug where a data self-heal on a file in an arbiter volume leaves a stale inodelk behind on the bricks, so any subsequent write to the file from a client can hang.

How reproducible:
Always.

Steps to Reproduce:
1. Create a 1x (2+1) arbiter volume, FUSE-mount it, and create a file.
2. Kill the arbiter brick, write to the file, bring the arbiter brick back up, and let self-heal complete.
3. The next write to the file from the mount hangs: its inodelk is blocked by the stale lock that self-heal left behind (a shell sketch of these steps follows below).
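
The steps above, expressed as a rough shell sketch using the standard gluster CLI. The host name (server1), brick paths, volume name (arbvol), mount point, and the way the arbiter brick PID is obtained are placeholders, not taken from the original report:

# 1. Create a 1x (2+1) arbiter volume, FUSE-mount it and create a file.
#    'force' is only there to allow test bricks on the root partition.
gluster volume create arbvol replica 3 arbiter 1 \
    server1:/bricks/b1 server1:/bricks/b2 server1:/bricks/b3 force
gluster volume start arbvol
mount -t glusterfs server1:/arbvol /mnt/arbvol
echo "initial data" > /mnt/arbvol/file

# 2. Kill the glusterfsd process serving the arbiter brick (b3), write to the
#    file, bring the brick back and wait for self-heal to finish.
kill -9 <pid-of-arbiter-brick>        # PID is listed by 'gluster volume status arbvol'
echo "write while arbiter is down" >> /mnt/arbvol/file
gluster volume start arbvol force     # restarts the killed brick process
gluster volume heal arbvol            # trigger index heal
gluster volume heal arbvol info       # repeat until no entries remain

# 3. On an affected build this write hangs behind the stale inodelk.
echo "post-heal write" >> /mnt/arbvol/file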


Additional info:
Downstream bug that uncovered the issue: BZ 1636902
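
If a client is already hung, the stale inodelk can be confirmed from a brick statedump and, as a temporary workaround, cleared with clear-locks. This is only a sketch using the standard gluster CLI; the volume name and file path below are placeholders matching the reproduction sketch above:

# Dump brick process state; the dump files land in the statedump directory
# (/var/run/gluster by default) and list granted/blocked inodelks per inode.
gluster volume statedump arbvol

# Workaround only: clear all inode locks held on the affected file.
gluster volume clear-locks arbvol /file kind all inode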

--- Additional comment from Worker Ant on 2018-10-10 02:56:21 EDT ---

REVIEW: https://review.gluster.org/21380 (afr: prevent winding inodelks twice for arbiter volumes) posted (#1) for review on master by Ravishankar N

--- Additional comment from Worker Ant on 2018-10-10 12:19:31 EDT ---

COMMIT: https://review.gluster.org/21380 committed in master by "Amar Tumballi" <amarts> with the commit message: afr: prevent winding inodelks twice for arbiter volumes

Problem:
In an arbiter volume, if there is a pending data heal of a file only on
the arbiter brick, self-heal takes the inodelk twice due to a code bug but
unlocks it only once, leaving behind a stale lock on the brick. This causes
the next write to the file to hang.

Fix:
Fix the code bug so that the lock is taken only once. The bug was introduced
in master with commit eb472d82a083883335bc494b87ea175ac43471ff.

Thanks to Pranith Kumar K <pkarampu> for finding the RCA.

fixes: bz#1637802
Change-Id: I15ad969e10a6a3c4bd255e2948b6be6dcddc61e1
Signed-off-by: Ravishankar N <ravishankar>

Comment 1 Worker Ant 2018-10-11 01:04:01 UTC
REVIEW: https://review.gluster.org/21387 (afr: prevent winding inodelks twice for arbiter volumes) posted (#1) for review on release-5 by Ravishankar N

Comment 2 Worker Ant 2018-10-11 10:57:14 UTC
COMMIT: https://review.gluster.org/21387 committed in release-5 by "Shyamsundar Ranganathan" <srangana> with the commit message: afr: prevent winding inodelks twice for arbiter volumes

Backport of https://review.gluster.org/#/c/glusterfs/+/21380/

Problem:
In an arbiter volume, if there is a pending data heal of a file only on
the arbiter brick, self-heal takes the inodelk twice due to a code bug but
unlocks it only once, leaving behind a stale lock on the brick. This causes
the next write to the file to hang.

Fix:
Fix the code bug so that the lock is taken only once. The bug was introduced
in master with commit eb472d82a083883335bc494b87ea175ac43471ff.

Thanks to Pranith Kumar K <pkarampu> for finding the RCA.

fixes: bz#1638159
Change-Id: I15ad969e10a6a3c4bd255e2948b6be6dcddc61e1
Signed-off-by: Ravishankar N <ravishankar>

Comment 3 Shyamsundar 2018-10-23 15:20:19 UTC
This bug is being closed because a release that should address the reported issue has been made available. If the problem is still not fixed with glusterfs-5.0, please open a new bug report.

glusterfs-5.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://lists.gluster.org/pipermail/announce/2018-October/000115.html
[2] https://www.gluster.org/pipermail/gluster-users/

